[
https://issues.apache.org/jira/browse/HADOOP-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130783#comment-15130783
]
Anu Engineer commented on HADOOP-12758:
---------------------------------------
Hi [~cnauroth] and [~lmccay],
Thanks for your comments. To make sure we are all on the same page, I would
like to summarize the issue as I see it.
# It introduces inconsistent behavior across clients: curl and Perl would
work, but the same HTTP request would fail from Python or Ruby. That is the
*biggest* concern. Most developers will not even realize that we are reading
the User-Agent string on the server side and behaving differently. This
creates subtle behavioral differences in a REST protocol like WebHDFS.
# Making that behavior the default. Relying on the User-Agent so that some
older clients can work without code changes should be a feature that the
administrator enables on a case-by-case basis; it should *not* be the default
behavior. The point about the curl string, or any User-Agent, being easily
spoofable was meant to reinforce this. We should not introduce subtle
behavior changes into a REST protocol based on which client is talking to us.
By all means, let us provide the feature if it makes administrators' lives
easier, but I don't think it should be on by default.
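The inconsistency described above can be sketched as follows. This is a hypothetical illustration, not the actual HADOOP-12758 patch: the pattern list and function name are invented, and only the general idea (exempting certain user agents from the CSRF header requirement) comes from this discussion.

```python
# Hypothetical sketch of a server-side, user-agent based CSRF exemption
# check. EXEMPT_AGENT_PATTERNS and requires_csrf_header are illustrative
# names, not the real HADOOP-12758 configuration.
import re

# Suppose the server is configured to exempt these "non-browser" agents.
EXEMPT_AGENT_PATTERNS = [r"^curl/", r"^libwww-perl/"]

def requires_csrf_header(user_agent: str) -> bool:
    """Return True if the request must carry the custom CSRF header."""
    return not any(re.match(p, user_agent) for p in EXEMPT_AGENT_PATTERNS)

# The same HTTP request now behaves differently per client library:
assert requires_csrf_header("curl/7.43.0") is False           # exempt
assert requires_csrf_header("python-requests/2.9.1") is True  # must send header
assert requires_csrf_header("Ruby") is True                   # must send header
```

Nothing in the request itself changed between those three calls; only the client library's default User-Agent did, which is exactly the subtle divergence being objected to.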
To recap: issue one is the *inconsistency*, and issue two is making that
*inconsistency the default choice*. I am all for shipping this feature with
no default agents pre-baked into the code and letting admins make that
choice. That way, the default behavior of WebHDFS stays consistent whether
you use curl, Ruby, Python, Java, Perl, or JavaScript, and admins always have
the option to change the settings if they cannot modify older client code.
P.S. Even if admins enable this feature, it will still *break* older WebHDFS
clients written in Python or Ruby, so we will still need to document that
fact alongside the new configuration parameter.
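For those older clients, the consistent fix is on the client side: always send the custom CSRF header, so the request works regardless of any server-side user-agent exemptions. A minimal sketch, assuming the HADOOP-12691 filter's default header name of X-XSRF-HEADER (adjust if the deployment reconfigures it); the URL and function name are illustrative.

```python
# Client-side sketch: unconditionally send the custom CSRF header so
# behavior does not depend on server-side User-Agent checks.
# "X-XSRF-HEADER" is assumed to be the filter's default header name;
# the URL below is a placeholder, not a real cluster.
import urllib.request

def make_webhdfs_request(url: str) -> urllib.request.Request:
    req = urllib.request.Request(url)
    # The filter only checks for the header's presence, not its value.
    req.add_header("X-XSRF-HEADER", "")
    return req

req = make_webhdfs_request(
    "http://namenode:50070/webhdfs/v1/tmp?op=LISTSTATUS")
assert req.has_header("X-xsrf-header")  # urllib capitalizes header names
```

A one-line change like this in the client keeps WebHDFS behavior identical across curl, Python, Ruby, and everything else, which is the consistency argued for above.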
> Extend CSRF Filter with UserAgent Checks
> ----------------------------------------
>
> Key: HADOOP-12758
> URL: https://issues.apache.org/jira/browse/HADOOP-12758
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Reporter: Larry McCay
> Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-12758-001.patch, HADOOP-12758-002.patch
>
>
> To protect against CSRF attacks, HADOOP-12691 introduces a CSRF filter that
> will require a specific HTTP header to be sent with every REST API call. This
> will affect all API consumers from web apps to CLIs and curl.
> Since CSRF is primarily a browser based attack we can try and minimize the
> impact on non-browser clients.
> This enhancement will provide additional configuration for identifying
> non-browser user agents and skipping enforcement of the header requirement
> for anything identified as a non-browser. This will largely limit the impact
> to browser-based PUT and POST calls when configured appropriately.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)