> On 21 Jul 2016, at 15:08, Salz, Rich <rs...@akamai.com> wrote:
> 
>> By raising the limit, you don’t suddenly put every application at risk of a 
>> DoS,
>> because these applications won’t suddenly use a 16k RSA key.
> 
> Yes we do, because the other side could send a key, not local config.

Server A's code is modified to accept ClientKeyExchange messages up to 4 kB.
Server A is configured with a 2048-bit RSA key.

Client A connects to Server A, initiates the TLS handshake, receives the 
certificate, properly generates a 2048-bit client key exchange message (using 
RSA key exchange), and sends the message (about 260 octets); Server A accepts 
it and does its job.
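The sizes quoted above follow from plain arithmetic: an RSA ciphertext is exactly the modulus length, and TLS adds a 2-byte length prefix plus a 4-byte handshake header. A quick sanity check (a sketch of the arithmetic, not tied to any particular TLS stack):

```python
def client_key_exchange_size(modulus_bits):
    # RSA key exchange in TLS 1.2 (RFC 5246): the EncryptedPreMasterSecret
    # is exactly the modulus length, carried with a 2-byte length prefix
    # inside a handshake message with a 4-byte header.
    ciphertext = modulus_bits // 8   # RSA output = modulus length in bytes
    return 4 + 2 + ciphertext        # handshake header + length prefix + ciphertext

print(client_key_exchange_size(2048))   # 262 -> "about 260 octets"
print(client_key_exchange_size(16384))  # 2054 -> roughly the 2 kB figure below
```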

Client B connects to Server A, initiates the TLS handshake, receives the 
certificate, but for whatever reason decides to send a client key exchange 
message sized as if it carried a 16-kbit RSA block (about 2056 bytes); Server A 
will accept the message, but will refuse to perform the RSA decryption (because 
the block is larger than the modulus length).
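That refusal can be sketched as follows (a hypothetical fragment illustrating the check, not OpenSSL's actual code): once the message has been read, the encrypted premaster secret's length is compared against the server key's modulus length before any RSA operation is attempted.

```python
def accept_encrypted_premaster(encrypted_premaster: bytes, modulus_len: int) -> bool:
    # Reject before doing any RSA work: a ciphertext whose length does
    # not match the modulus cannot be a valid RSA encryption under this key.
    return len(encrypted_premaster) == modulus_len

# Server A holds a 2048-bit key, i.e. a 256-byte modulus:
assert accept_encrypted_premaster(b"\x00" * 256, 256)       # Client A: accepted
assert not accept_encrypted_premaster(b"\x00" * 2048, 256)  # Client B: refused, no decryption
```

The point of the sketch is that the oversized message costs the server a length comparison, nothing more.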

I don’t see where the harm could hide. No extra CPU is consumed, the network 
transfer has already happened by that point, and the memory is already allocated.

Again, I’m not saying that using a 16k RSA key is a good idea. It’s not a good 
idea; one should really consider ECC instead (for both performance and network 
reasons). But keeping this 2048-byte limit is not a security decision. It’s 
the result of a trade-off, yes, and that doesn’t make it a bad decision either.

-- 
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users
