On 12/07/18 07:14, Michael Paquier wrote:
On Wed, Jul 11, 2018 at 03:01:03PM +0300, Heikki Linnakangas wrote:
I started digging into this more closely, and ran into a little problem. If
channel binding is not used, the client sends a flag to the server to
indicate if it's because the client doesn't support channel binding, or
because it thought that the server doesn't support it. This is the
gs2-cbind-flag. How should the flag be set if the server supports channel
binding type A, while the client supports only type B? The purpose of the flag
is to detect downgrade attacks, where a man-in-the-middle removes the PLUS
variants from the list of supported mechanisms. If we treat incompatible
channel binding types as "client doesn't support channel binding", then the
downgrade attack becomes possible (the attacker can replace the legit PLUS
variants with bogus channel binding types that the client surely doesn't
support). If we treat it as "server doesn't support channel binding", then
we won't downgrade to the non-channel-binding variant, in the legitimate
case that the client and server both support channel binding, but with
incompatible types.
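To make the flag semantics concrete, here is a hypothetical sketch (Python rather than PostgreSQL's actual C code; the function name and parameters are invented for illustration) of how a client picks the RFC 5802 gs2-cbind-flag:

```python
def choose_gs2_cbind_flag(client_cb_types, server_offered_plus, chosen_type=None):
    """Pick the gs2-cbind-flag for the SCRAM client-first-message.

    "n"      - client does not support channel binding
    "y"      - client supports it, but thinks the server does not
    "p=<cb>" - client demands channel binding of type <cb>
    """
    if not client_cb_types:
        return "n"
    if not server_offered_plus:
        return "y"
    if chosen_type is None:
        # Both sides advertise channel binding, but no type is mutually
        # supported: the ambiguous case discussed above. Neither "n" nor
        # "y" is truthful, and any "p=..." the client sends will be
        # rejected by the server.
        raise ValueError("no mutually supported channel binding type")
    return "p=" + chosen_type
```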

What we'd really want, is to include the full list of server's supported
mechanisms as part of the exchange, not just a boolean "y/n" flag. But
that's not what the spec says :-(.

Let's not break the spec :)  I understand from it that the client is in
charge of the choice, so we are rather free to choose the reaction the
client should have.  In the second phase of the exchange, the client
communicates back to the server the channel binding it has decided to
choose, it is not up to the server to pick up one if the client thinks
that it can use multiple ones.

In the case where the client and the server both support the same channel binding type, we're OK. The problematic case is when e.g. the server only supports tls-unique and the client only supports tls-server-end-point. What we would (usually) like to happen is to fall back to not using channel binding. But it's not clear how to make that work and still protect from downgrade attacks.

If the client responds "y", meaning "the client supports channel binding, but it looks like the server doesn't", then the server will reject the authentication, because it did in fact support channel binding, just not the same type that the client did. If the client could reply "y" and also echo back the server's list of supported channel binding types in the same message, that would solve the problem. But the spec doesn't allow that.
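The server side of the downgrade check can be sketched the same way (hypothetical Python, invented names, not PostgreSQL's implementation): when the server sees "y" despite having advertised a PLUS mechanism, it must assume the mechanism list was tampered with:

```python
def server_validate_cbind_flag(flag, server_advertised_plus):
    """Reject a "y" flag if this server did advertise a SCRAM-...-PLUS
    mechanism.

    A man-in-the-middle that stripped the PLUS variants from the
    mechanism list would cause an honest client to send "y"; accepting
    "y" in that situation would complete the downgrade attack.
    """
    if flag == "y" and server_advertised_plus:
        raise PermissionError("possible channel binding downgrade attack")
    # "n", or "p=..." with a supported type, proceeds to verification.
```

Note that this check is exactly why the incompatible-types case above fails: the honest client's "y" is indistinguishable from an attacker's downgrade.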

I guess we should err on the side of caution, and fail the authentication in
that case. That's unfortunate, but it's better than not negotiating at all.
At least with the negotiation, the authentication will work if there is any
mutually supported channel binding type.

I think that in this case the client should throw an error
unconditionally if it wants to use channel binding type A but the server
supports only type B.  It is always possible for the client to adjust
the channel binding type it wants to use via the connection parameter
scram_channel_binding, or to recompile.  If there is an incompatibility
between channel binding types, it would actually mean that the client
and the server are not compiled with the same SSL implementation; would
that actually work?  Say a server has only tls-unique with GnuTLS, and
the client uses JDBC, which only has tls-server-end-point..

We would like to fall back to not using channel binding at all in that scenario, assuming the configuration doesn't require channel binding. But yeah, rejecting the connection seems to be the best we can do, given the constraints of the protocol.

So, to keep things simple, it seems to me that we should just make the
server send the list, and then check at the client level whether the
list sent by the server includes what the client wants to use,
complaining if the option is not present.

Yep.
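A minimal sketch of that client-side check (hypothetical names; scram_channel_binding is the actual connection parameter mentioned above):

```python
def pick_channel_binding(server_list, requested):
    """Fail if the channel binding type the client wants (e.g. from the
    scram_channel_binding connection parameter) is not among the types
    the server advertised."""
    if requested not in server_list:
        raise ValueError(
            "channel binding type %r not supported by server (offers: %s)"
            % (requested, ", ".join(server_list)))
    return requested
```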

It seems that all implementations can support tls-server-end-point, after all, so I'm not too worried about this anymore. The spec says that tls-unique is the default, but I don't actually see any advantage to using it over tls-server-end-point. I think the main reason for tls-unique to exist is that it doesn't require the server to have a TLS certificate, but PostgreSQL requires that anyway.
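For reference, the tls-server-end-point channel binding data is just a hash of the server's certificate, per RFC 5929: the hash function from the certificate's signature algorithm, with MD5 and SHA-1 upgraded to SHA-256. A sketch (stdlib only; the certificate parsing that extracts the signature hash name is left out):

```python
import hashlib

def tls_server_end_point(cert_der, sig_hash):
    """Channel binding data per RFC 5929: hash of the DER-encoded server
    certificate using its signature algorithm's hash, except that MD5
    and SHA-1 are replaced by SHA-256."""
    if sig_hash.lower() in ("md5", "sha1"):
        sig_hash = "sha256"
    return hashlib.new(sig_hash, cert_der).digest()
```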

Actually, I wonder if we should just rip out the tls-unique support, after all? We can add it back at a later version, if there is a need for it, but I don't think we will. With security-related code, simpler is better.

- Heikki
