[ 
https://issues.apache.org/jira/browse/HDFS-17668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated HDFS-17668:
-------------------------------
    Description: 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.checkSaslComplete(SaslParticipant,
 Map<String, String>) used to throw an NPE when getNegotiatedQop() returned 
null. This was not ideal, but it erred on the side of caution: it kept 
mechanisms that did not set the negotiated QOP property at all from working 
with Hadoop.

However, it was recently changed to skip the verification entirely when the 
negotiated QOP value is null.
This is a bug: according to the Java SASL API documentation, a null negotiated 
QOP value should be treated as "auth" 
[https://docs.oracle.com/en/java/javase/23/security/java-sasl-api-programming-and-deployment-guide1.html#GUID-762BDD49-6EE8-419C-A45E-540462CB192B]
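
A minimal sketch of what that would look like (hypothetical code for 
illustration only, not the actual checkSaslComplete() implementation; the 
class and method names are made up): normalize a null negotiated QOP to 
"auth" before verifying it against the requested QOP values, instead of 
throwing an NPE or skipping the check.

{code:java}
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch, not the actual DataTransferSaslUtil code.
final class QopCheckSketch {
  // requestedQopList is the value requested via Sasl.QOP, e.g. "auth-conf"
  // or "auth-conf,auth-int,auth"; negotiatedQop comes from
  // getNegotiatedProperty(Sasl.QOP) and may be null for mechanisms that
  // do not support QOP at all.
  static void checkNegotiatedQop(String requestedQopList, String negotiatedQop)
      throws IOException {
    // Per the Java SASL API docs, a null negotiated QOP means plain "auth".
    String effectiveQop = (negotiatedQop == null) ? "auth" : negotiatedQop;
    Set<String> requested =
        new HashSet<>(Arrays.asList(requestedQopList.split(",")));
    if (!requested.contains(effectiveQop)) {
      throw new IOException("Negotiated QOP " + effectiveQop
          + " is not among the requested QOP values " + requested);
    }
  }
}
{code}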

For native SASL encryption (SaslInputStream), this is bad: Hadoop will think 
that the connection is encrypted, while the data in fact travels in cleartext.
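
For context, here is a rough sketch of why this matters on the native SASL 
path (hypothetical helper, not the actual SaslInputStream/SaslOutputStream 
wiring): whether the SASL layer actually wraps the data depends on the 
negotiated QOP, so a null QOP that is allowed to pass an "auth-conf" check 
still leaves the bytes unwrapped on the wire.

{code:java}
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;

// Hypothetical sketch, not actual Hadoop code.
final class WrapDecisionSketch {
  static boolean usesSaslWrapping(SaslClient saslClient) {
    // Returns null for mechanisms that never set the QOP property.
    Object qop = saslClient.getNegotiatedProperty(Sasl.QOP);
    // Only "auth-int" and "auth-conf" cause wrap()/unwrap() to protect the
    // data; null (effectively plain "auth") means cleartext on the wire.
    return "auth-int".equals(qop) || "auth-conf".equals(qop);
  }
}
{code}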

I did not fully analyze the Hadoop-managed encryption (CryptoInputStream) 
case; that one might even negotiate and use encryption correctly, since it 
does not rely on SASL for the encryption itself, but it would still be 
depending on a bug.

At first glance, the Hadoop-managed encryption shouldn't even ask for or 
check for "auth-conf", as it doesn't seem to use the SASL crypto 
functionality at all; dropping that requirement would enable it to work 
correctly with mechanisms that do not support QOP.

These problems only trigger when a mechanism without QOP support is used. 
Mechanisms that do support QOP will return the negotiated QOP, the null check 
will not take effect, and encryption will work normally.

> Treat null SASL negotiated QOP as auth in 
> DataTransferSaslUtil#checkSaslComplete()
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-17668
>                 URL: https://issues.apache.org/jira/browse/HDFS-17668
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.5.0
>            Reporter: Istvan Toth
>            Priority: Major
>              Labels: pull-request-available
>


