[ 
https://issues.apache.org/jira/browse/HDFS-17668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated HDFS-17668:
-------------------------------
    Description: 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.checkSaslComplete(SaslParticipant,
 Map<String, String>) used to throw an NPE when SaslParticipant.getNegotiatedQop() 
returned null. This was not ideal, but it erred on the side of caution: it 
kept mechanisms that did not set the negotiated QOP property at all from 
working with Hadoop.

However, it was recently changed to skip the verification when the negotiated 
QOP value is null.
This is a bug: according to the Java SASL documentation, a null negotiated QOP 
value should be treated as "auth".
[https://docs.oracle.com/en/java/javase/23/security/java-sasl-api-programming-and-deployment-guide1.html#GUID-762BDD49-6EE8-419C-A45E-540462CB192B]

For native SASL encryption (SaslInputStream), this is bad: Hadoop will believe 
it is using encryption when it is in fact sending cleartext.

I have not fully analyzed the Hadoop-managed encryption (CryptoInputStream) 
case; that one might even negotiate and use encryption correctly, since it does 
not rely on SASL for any of that, but it would still be relying on a bug.

At first glance, the Hadoop-managed encryption shouldn't even ask or check 
for "auth-conf", as it doesn't seem to use the SASL crypto functionality at 
all; that would enable it to work with mechanisms that do not support QOP 
correctly.
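The documented null-as-"auth" rule can be restored with a small normalization 
step before comparing the negotiated QOP against the requested values. A minimal 
sketch of the idea; the class and method names below are illustrative, not the 
actual Hadoop code:

```java
import java.util.Arrays;

public class QopCheckSketch {

  // Per the Java SASL docs, a null negotiated QOP must be treated as "auth".
  static String normalizeQop(String negotiatedQop) {
    return negotiatedQop == null ? "auth" : negotiatedQop;
  }

  // The check succeeds only if the (normalized) negotiated QOP is one of the
  // values that were actually requested.
  static boolean isNegotiatedQopAcceptable(String negotiatedQop,
                                           String[] requestedQops) {
    String effective = normalizeQop(negotiatedQop);
    return Arrays.asList(requestedQops).contains(effective);
  }

  public static void main(String[] args) {
    String[] requested = {"auth-conf"};
    // A mechanism that ignores QOP returns null; with only "auth-conf"
    // requested, the check must now fail instead of silently passing.
    System.out.println(isNegotiatedQopAcceptable(null, requested));        // false
    System.out.println(isNegotiatedQopAcceptable("auth-conf", requested)); // true
  }
}
```

With this normalization, a mechanism that never sets the QOP property can still 
pass when plain "auth" was requested, but can no longer masquerade as an 
encrypted connection.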

  was:
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.checkSaslComplete(SaslParticipant,
 Map<String, String>) used to throw an NPE when SaslParticipant.getNegotiatedQop() 
returned null. This was not ideal, but it erred on the side of caution: it 
kept mechanisms that did not set the negotiated QOP property at all from 
working with Hadoop.

However, it was recently changed to skip the verification if the negotiated QOP 
value is null.
This is a bug, as according to the docs, a null negotiated QOP value should be 
treated as "auth" 
[https://docs.oracle.com/en/java/javase/23/security/java-sasl-api-programming-and-deployment-guide1.html#GUID-762BDD49-6EE8-419C-A45E-540462CB192B]

The current checkSaslComplete() method will allow a null negotiated QOP value 
when the "auth-conf" QOP value was specified. This means that the SASL 
initialization will succeed, but all other Hadoop transfer methods will 
(correctly) interpret the null QOP value as "auth", will not wrap the messages 
with SASL, and will use plain text, even though "auth-conf" was explicitly 
requested. This is a serious problem.

This can happen if _dfs.encrypt.data.transfer_ is enabled, but the SASL 
mechanism doesn't support QOP/encryption, ignores the QOP value, and 
returns null for the negotiated value.
In this case, Hadoop will negotiate encryption with the server, but treat the 
null QOP value as if it were the requested QOP value.


> Treat null SASL negotiated QOP as auth in 
> DataTransferSaslUtil#checkSaslComplete()
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-17668
>                 URL: https://issues.apache.org/jira/browse/HDFS-17668
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.5.0
>            Reporter: Istvan Toth
>            Priority: Major
>              Labels: pull-request-available
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
