[ https://issues.apache.org/jira/browse/HDFS-17668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17899668#comment-17899668 ]

Istvan Toth commented on HDFS-17668:
------------------------------------

{quote}Both the SASL security provider and conf are set up by the admin.
 - When the admin set up the SASL security provider which does not support QOP, 
it means that the admin does not want QOP.{quote}
Check
{quote} - If the admin likes to have QOP, they must use a provider supporting 
QOP.{quote}
Check
{quote}We have to document it and make it clear to the admins.
{quote}
That is not enough.

Security should not rely on the admin having a full understanding of SASL 
intricacies and of the Hadoop bugs around them.
If the admin sets a mechanism that does not support QOP, and then sets a 
requirement for a non-auth QOP, then Hadoop must not silently ignore the QOP 
setting.

In this case Hadoop should fail early and loudly, which will let the admin fix 
the issue.
{quote}Currently, the default mechanism is DIGEST-MD5. Unfortunately, MD5 is 
considered as insecure 20+ years ago. Similarly, DES is also considered as 
insecure. Even a mechanism such as DIGEST-MD5 supporting QOP, it may only give 
us a false sense of security.
{quote}
I do not contest that.

All I am saying is that Hadoop should fail in an unambiguous manner when the 
requested security parameters cannot be met, instead of silently downgrading 
security.
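The behavior argued for above can be sketched as follows. This is a minimal illustration only, not the actual DataTransferSaslUtil code; the class and method names (QopCheck, normalizeQop, checkQop) are hypothetical. Per the Java SASL documentation, a null negotiated Sasl.QOP value means "auth", so the check normalizes null to "auth" and then fails loudly when that does not satisfy the requested QOP, rather than silently skipping the verification:

```java
import java.util.Arrays;
import java.util.List;

public class QopCheck {

    // Per the Java SASL docs, a null negotiated Sasl.QOP means "auth"
    // (authentication only, no integrity or confidentiality protection).
    static String normalizeQop(String negotiatedQop) {
        return negotiatedQop == null ? "auth" : negotiatedQop;
    }

    // Fail early and loudly if the negotiated QOP does not satisfy what the
    // admin requested, instead of silently downgrading security.
    static void checkQop(String negotiatedQop, List<String> requestedQops) {
        String effective = normalizeQop(negotiatedQop);
        if (!requestedQops.contains(effective)) {
            throw new IllegalStateException(
                "SASL negotiated QOP '" + effective
                + "' does not satisfy requested QOPs " + requestedQops);
        }
    }

    public static void main(String[] args) {
        // A mechanism without QOP support leaves the property unset (null),
        // which must be treated as plain "auth".
        System.out.println(normalizeQop(null));

        // The admin requested privacy ("auth-conf"); a null negotiation
        // must be rejected, not waved through.
        try {
            checkQop(null, Arrays.asList("auth-conf"));
            System.out.println("accepted");
        } catch (IllegalStateException e) {
            System.out.println("rejected");
        }
    }
}
```

With this shape, a mechanism that never sets the negotiated QOP can still be used for plain authentication, but it can never masquerade as an encrypted connection.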

> Treat null SASL negotiated QOP as auth in 
> DataTransferSaslUtil#checkSaslComplete()
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-17668
>                 URL: https://issues.apache.org/jira/browse/HDFS-17668
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.5.0
>            Reporter: Istvan Toth
>            Assignee: Istvan Toth
>            Priority: Major
>              Labels: pull-request-available
>
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.checkSaslComplete(SaslParticipant,
>  Map<String, String>) used to throw an NPE when SASL getNegotiatedQop() 
> returned null. This was not ideal, but it erred on the side of caution, as it 
> kept mechanisms that did not set the negotiated QOP property at all from 
> working with Hadoop.
> However, it was recently changed to skip the verification if the negotiated 
> QOP value is null.
> This is a bug, as according to the docs, a null negotiated QOP value should 
> be treated as "auth" 
> [https://docs.oracle.com/en/java/javase/23/security/java-sasl-api-programming-and-deployment-guide1.html#GUID-762BDD49-6EE8-419C-A45E-540462CB192B]
> For native SASL encryption (SaslInputStream), this is bad, because Hadoop 
> will think that it uses encryption, while in fact it uses cleartext.
> I did not analyze the Hadoop-managed encryption (CryptoInputStream) case 
> fully; that one might even negotiate and use encryption correctly, since it 
> does not rely on SASL for any of that, but it still depends on a bug.
> At first glance, the Hadoop-managed encryption shouldn't even ask for or 
> check for "auth-conf", as it doesn't seem to use the SASL crypto 
> functionality at all, which would enable it to work correctly with 
> mechanisms that do not support QOP.
> These problems only trigger when a mechanism without QOP support is used. 
> Mechanisms that do support QOP will return the negotiated QOP, the null check 
> will not take effect, and encryption will work normally.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
