[
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13690136#comment-13690136
]
Luke Lu commented on HADOOP-9421:
---------------------------------
bq. There is nothing in the protocol that would prevent SCRAM being supported.
I meant you'll be out of luck making the token optimization work: your protocol
*requires* an extra round trip to support SCRAM.
bq. Guessing a supported auth/mechanism
For the most common Hadoop auth workload, distributed containers/tasks, you don't
need to guess: it's the delegation token with DIGEST-MD5/SCRAM, since it's a
framework-internal token bootstrapped by other public-facing mechanisms. For
other use cases, you can use cached values. For the remaining use cases, the
client can send an empty INITIATE and use NEGOTIATE and REINITIATE, at the
same total round-trip cost as yours in all cases. With the optional client
initiate, my protocol gives the choice to the practicing system designers
instead of the original protocol designers.
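To make the optional client initiate concrete, here's a rough client-side sketch
(SaslMessage, AuthMethod, sendAndReceive, chooseFrom etc. are just illustrative
names, not the actual classes in the patch):
{code:java}
// Hedged sketch only: SaslMessage, AuthMethod, sendAndReceive, chooseFrom, etc.
// are illustrative names, not the actual classes in this patch.
AuthMethod preferred = preferredAuthFor(conn);   // e.g. TOKEN for containers/tasks,
                                                 // or a previously cached value
SaslMessage first = (preferred != null)
    // optional client initiate: start the known mech right away, no extra round trip
    ? SaslMessage.initiate(preferred, protocol, serverId, initialToken(preferred))
    // unknown server: empty INITIATE simply asks what the server supports
    : SaslMessage.emptyInitiate(protocol, serverId);

SaslMessage reply = sendAndReceive(conn, first);
if (reply.isNegotiate()) {
  // the server could not honor the preference (or none was sent): pick from its list
  AuthMethod chosen = chooseFrom(reply.supportedAuths());
  reply = sendAndReceive(conn,
      SaslMessage.reinitiate(chosen, initialToken(chosen)));
}
// then the usual SASL challenge/response continues until reply.isSuccess()
{code}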
bq. Dealing with the mishaps when the client blows itself up trying an auth the
server doesn't even support
INITIATE contains the same extra info, like protocol and serverId, for the
preferred auth. The server can simply send a NEGOTIATE if it decides it cannot
support the preferred auth choice, and the client can then decide to REINITIATE
or abort.
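Roughly, the server side boils down to something like this (same caveat: the
names are hypothetical, not the patch's actual classes):
{code:java}
// Hedged server-side sketch; the names are hypothetical.
SaslMessage handleInitiate(SaslMessage initiate) {
  AuthMethod requested = initiate.auth();          // null for an empty INITIATE
  if (requested == null || !enabledAuths.contains(requested)) {
    // preferred auth absent or unsupported: advertise what we do support
    return SaslMessage.negotiate(enabledAuths);    // client may REINITIATE or abort
  }
  // preferred auth is fine: evaluate the client's initial token and keep going
  byte[] challenge = createSaslServer(requested, initiate.protocol(), initiate.serverId())
      .evaluateResponse(initiate.token());
  return SaslMessage.challenge(challenge);
}
{code}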
bq. if it even attempts kerberos with a non-kerberos server. It won't even
succeed far enough to send the INITIATE
For this contrived case, the client can catch exceptions for the preferred auth
when generating the initial token (which covers fetching a service ticket for a
non-Kerberos server) and send an empty INITIATE, then NEGOTIATE and REINITIATE.
Again, integration clients that need to talk to multiple servers with different
auths can simply use an empty INITIATE to NEGOTIATE, and cache the server
auths/mechs so later connections save the round trip.
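That fallback plus the mech cache amounts to something like this (again with
made-up names such as kerberosInitialToken and mechCache, just for illustration):
{code:java}
// Hedged sketch of the fallback and mech caching; kerberosInitialToken, mechCache,
// and friends are made-up names for illustration.
SaslMessage first;
try {
  // e.g. fetching a Kerberos service ticket fails against a non-Kerberos server
  first = SaslMessage.initiate(AuthMethod.KERBEROS, protocol, serverId,
                               kerberosInitialToken(serverPrincipal));
} catch (Exception preferredAuthUnavailable) {
  first = SaslMessage.emptyInitiate(protocol, serverId);  // fall back to NEGOTIATE
}
SaslMessage reply = sendAndReceive(conn, first);
if (reply.isNegotiate()) {
  // remember what this server supports so later connections can INITIATE directly
  mechCache.put(serverId, reply.supportedAuths());
  AuthMethod chosen = chooseFrom(reply.supportedAuths());
  reply = sendAndReceive(conn, SaslMessage.reinitiate(chosen, initialToken(chosen)));
}
{code}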
Imagine a busy interactive web console that talks to multiple back-end Hadoop
servers. It's not feasible to keep a connection per user open to all of these
servers, so you constantly need to create new connections to the back-end
servers (a connection pool helps). My protocol lets the web console save a
mandatory round trip compared with yours, which can make the interactive user
experience noticeably better due to lower latency.
In summary, my protocol gives that choice to real system designers. Your
protocol takes that choice away, because you could not possibly have thought of
all the use cases where auth latency matters.
> Convert SASL to use ProtoBuf and add lengths for non-blocking processing
> ------------------------------------------------------------------------
>
> Key: HADOOP-9421
> URL: https://issues.apache.org/jira/browse/HADOOP-9421
> Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 2.0.3-alpha
> Reporter: Sanjay Radia
> Assignee: Daryn Sharp
> Priority: Blocker
> Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch,
> HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch,
> HADOOP-9421.patch, HADOOP-9421-v2-demo.patch
>
>