[ https://issues.apache.org/jira/browse/HDFS-17668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Istvan Toth updated HDFS-17668:
-------------------------------
    Description: 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.checkSaslComplete(SaslParticipant,
Map<String, String>) used to throw an NPE when getNegotiatedQop() returned
null. This was not ideal, but it erred on the side of caution, as it kept
mechanisms that did not set the negotiated QOP property at all from working
with Hadoop.

However, it was recently changed to skip the verification if the negotiated QOP 
value is null.
This is a bug: according to the Java SASL documentation, a null negotiated QOP
value should be treated as "auth":
[https://docs.oracle.com/en/java/javase/23/security/java-sasl-api-programming-and-deployment-guide1.html#GUID-762BDD49-6EE8-419C-A45E-540462CB192B]

The current checkSaslComplete() method allows a null negotiated QOP value even
when the "auth-conf" QOP was requested. As a result the SASL initialization
succeeds, but the rest of the Hadoop data transfer code (correctly) interprets
the null QOP value as "auth", does not wrap the messages with SASL, and uses
plain text, even though "auth-conf" was explicitly requested. This is a bad
thing.

For a fully compliant SASL mechanism this shouldn't matter, as it would either
fail to complete the negotiation if it cannot satisfy the required QOP, or it
would report the negotiated QOP value if it successfully negotiated a non-auth
QOP. However, a broken mechanism that successfully negotiates auth-conf but
does not report it as the negotiated QOP (or simply ignores the requested QOP)
can result in exactly this problem.

While Hadoop cannot prepare for every broken SASL implementation, it should at
least behave according to the spec, and refuse to work with a SASL provider
that is obviously broken.
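A minimal sketch of the proposed behaviour is below, assuming a simplified
method shape (the class, method and helper names are hypothetical, not the
real DataTransferSaslUtil signatures): normalize a null negotiated QOP to
"auth" per the Java SASL docs, then fail the handshake when the normalized
value is not among the requested QOPs.

{code:java}
import java.util.Map;
import javax.security.sasl.Sasl;

// Sketch only: normalizes a null negotiated QOP to "auth" and rejects the
// handshake when the caller required something stronger. Names are
// hypothetical, not the actual Hadoop signatures.
final class QopCheckSketch {

  static String normalizeQop(String negotiatedQop) {
    // Per the Java SASL docs, a missing negotiated QOP means "auth".
    return negotiatedQop == null ? "auth" : negotiatedQop;
  }

  static void checkNegotiatedQop(String negotiatedQop,
      Map<String, String> saslProps) {
    String negotiated = normalizeQop(negotiatedQop);
    // Sasl.QOP holds a comma-separated, preference-ordered list of
    // acceptable QOP values; default is "auth".
    String requested = saslProps.getOrDefault(Sasl.QOP, "auth");
    for (String qop : requested.split(",")) {
      if (qop.trim().equalsIgnoreCase(negotiated)) {
        return; // the (normalized) negotiated QOP was actually requested
      }
    }
    throw new IllegalStateException("SASL negotiation completed with QOP \""
        + negotiated + "\" but the requested QOP was \"" + requested + "\"");
  }
}
{code}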

  was:
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.checkSaslComplete(SaslParticipant,
 Map<String, String>) used to throw an NPE when the SASL.getNegotiatedQop() 
returned null. This was not ideal, but it erred on the side of caution, as it 
kept mechanisms that did not set the negotiated QOP property at all from 
working with Hadoop.

However, it was recently changed to skip the verification if the negotiated QOP 
value is null.
This is a bug, as according to the docs, a null negotiated QOP value should be 
treated as "auth" 
[https://docs.oracle.com/en/java/javase/23/security/java-sasl-api-programming-and-deployment-guide1.html#GUID-762BDD49-6EE8-419C-A45E-540462CB192B]

The current checkSaslComplete() method will allow a null negotiated QOP value
when ANY QOP value was specified, which means that the SASL initialization
will succeed, and the connection will use plain text even when "auth-conf" was
explicitly requested. This is a bad thing.

This only happens with broken SASL providers, as a compliant SASL provider will
either fail to complete negotiation if the requested non-auth QOP cannot be
met, or it will report the negotiated QOP value if it successfully negotiated a
non-auth QOP value.

However, for broken SASL methods we'd better treat a null QOP as "auth",
otherwise Hadoop may try to negotiate encryption with SASL methods that don't
even support it, or that did not negotiate anything beyond "auth" (though in
this case they arguably should set "auth" explicitly), or it will blindly turn
encryption on when a buggy SASL method does successfully negotiate encryption
but does not set the required negotiated QOP value.


> Treat null SASL negotiated QOP as auth in 
> DataTransferSaslUtil#checkSaslComplete()
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-17668
>                 URL: https://issues.apache.org/jira/browse/HDFS-17668
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.5.0
>            Reporter: Istvan Toth
>            Priority: Major
>
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.checkSaslComplete(SaslParticipant,
> Map<String, String>) used to throw an NPE when getNegotiatedQop() returned
> null. This was not ideal, but it erred on the side of caution, as it kept
> mechanisms that did not set the negotiated QOP property at all from working
> with Hadoop.
> However, it was recently changed to skip the verification if the negotiated 
> QOP value is null.
> This is a bug: according to the Java SASL documentation, a null negotiated
> QOP value should be treated as "auth":
> [https://docs.oracle.com/en/java/javase/23/security/java-sasl-api-programming-and-deployment-guide1.html#GUID-762BDD49-6EE8-419C-A45E-540462CB192B]
> The current checkSaslComplete() method allows a null negotiated QOP value
> even when the "auth-conf" QOP was requested. As a result the SASL
> initialization succeeds, but the rest of the Hadoop data transfer code
> (correctly) interprets the null QOP value as "auth", does not wrap the
> messages with SASL, and uses plain text, even though "auth-conf" was
> explicitly requested. This is a bad thing.
> For a fully compliant SASL mechanism this shouldn't matter, as it would
> either fail to complete the negotiation if it cannot satisfy the required
> QOP, or it would report the negotiated QOP value if it successfully
> negotiated a non-auth QOP. However, a broken mechanism that successfully
> negotiates auth-conf but does not report it as the negotiated QOP (or simply
> ignores the requested QOP) can result in exactly this problem.
> While Hadoop cannot prepare for every broken SASL implementation, it should
> at least behave according to the spec, and refuse to work with a SASL
> provider that is obviously broken.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
