On 28/09/17 14:38, Ma chunhui wrote:
> Hi, Matt
> <sorry, I replied to this mail and copied the replies from the daily digest>
> Thanks for your quick response.
> 
> And yes, with the SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS option set on the
> client side the result can be obtained from one decryption. But the
> problem is, sometimes we can't control the client's behaviour. Maybe
> the client is openssl s_client, or maybe it's a python script or some
> other client, and the option is not that safe.
> 
> Another interesting thing is, if the server is using OpenSSL 1.0.2 or
> 1.0.1, the result can be obtained from just one decryption (one
> tls1_enc call) with protocol TLSv1, and the client didn't add that
> option either (in fact, my client is OpenSSL 1.1.0f). So it seems the
> process was changed in some version of OpenSSL 1.1.0.
> Could you please explain a bit more about why OpenSSL 1.1.0f made this
> change? (I mean, the SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS option has
> existed since 0.9.6d, but OpenSSL 1.0.1 and 1.0.2 can get the result in
> one tls1_enc, while OpenSSL 1.1.0f needs two tls1_enc calls.)

Well I can't replicate that result. If using an OpenSSL 1.0.2 server I
still see two calls to tls1_enc (with both a 1.1.0 client and a 1.0.2
client). Note - that doesn't necessarily translate to two SSL_read()
calls (see below).


> The reason I'm focused on this is the following: I'm calling OpenSSL
> via JNI, and my usage is like this (just like Tomcat Native): first,
> use BIO_write to write data to OpenSSL, then use SSL_read with a
> zero-length buffer to trigger decryption, and then use SSL_pending to
> check how much data there is. If the decrypted data is not put in the
> result buffer after one decryption, then SSL_pending will return 0,
> and the whole process needs to be changed, which is not what we want.
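As an aside, that memory-BIO flow maps closely onto Python's ssl module
(ssl.MemoryBIO / SSLObject mirror the BIO_write / SSL_read / SSL_pending
calls). The sketch below is plumbing only - handshake and certificate setup
are omitted, and the hostname is a placeholder - so it illustrates the call
pattern, not a working client:

```python
import ssl

# Plumbing-only sketch: certificate/handshake handling is omitted, and
# "example.com" is a placeholder hostname, not a real peer.
ctx = ssl.create_default_context()
incoming = ssl.MemoryBIO()   # network -> OpenSSL (the BIO_write side)
outgoing = ssl.MemoryBIO()   # OpenSSL -> network
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="example.com")

def on_network_bytes(data):
    """Feed ciphertext in, then drain every readable byte. Note the loop:
    do not assume one chunk of network input yields exactly one readable
    chunk of application data."""
    incoming.write(data)                    # like BIO_write()
    chunks = []
    while True:
        try:
            chunks.append(tls.read(4096))   # like SSL_read()
        except ssl.SSLWantReadError:
            break                           # need more network data first
    return b"".join(chunks)

# Before any handshake data has been fed, nothing is readable:
print(tls.pending())   # 0 (like SSL_pending())
```

The key point of the sketch is the drain loop: readable data is collected
until the library signals it needs more network input, rather than relying
on a single read plus a pending check.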

If OpenSSL reads an empty record then it will immediately try to read
the next record if one is available without returning control back to
the calling application. Therefore a single SSL_read() call can result
in multiple tls1_enc calls. However this is highly dependent on timing.
If the empty record and the following non-empty record arrive at the
destination slightly separated in time, then when OpenSSL reads the first
empty record it will attempt to read the next record. This will fail
because it has not arrived yet and control will return to the calling
application. So sometimes you will have to call SSL_read() twice and
sometimes you will have to call it once. This is possibly a reason why
you see different behaviour between 1.0.2 and 1.1.0, i.e. because this
is very timing sensitive.
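That behaviour can be modelled with a small toy (plain Python, no real TLS:
record framing, MACs and ciphering are all elided, and a plain empty byte
string stands in for the empty fragment):

```python
# Toy model of the TLSv1 empty-fragment behaviour described above.
# Not real TLS: a "record" is just its decrypted payload, and each pop
# of the queue stands in for one tls1_enc-style decryption.

class ToyTLSReader:
    def __init__(self):
        self.records = []           # records arrived but not yet processed

    def deliver(self, *records):    # the network delivers one or more records
        self.records.extend(records)

    def read(self):
        """Model of SSL_read(): process records until application data is
        produced, or no more records are available (then return b'')."""
        while self.records:
            payload = self.records.pop(0)   # one decryption
            if payload:                     # non-empty record: done
                return payload
        return b''                          # nothing readable yet

r = ToyTLSReader()

# Case 1: empty fragment and data record arrive together ->
# a single read() consumes both and returns the data.
r.deliver(b'', b'hello')
print(r.read())   # b'hello'

# Case 2: the two records are separated in time ->
# the first read() consumes only the empty record and returns nothing.
r.deliver(b'')
print(r.read())   # b''
r.deliver(b'hello')
print(r.read())   # b'hello'
```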

Basically what you are doing is wrong. You cannot rely on the fact that
calling SSL_read() will definitely result in readable data being
decrypted. It might do - it might not. Another scenario where this could
occur is if a record arrives that is split across multiple TCP packets.
You call SSL_read() when the first TCP packet arrives - but because a
full record isn't there yet you get no readable application data back.
Yet another scenario is if the client attempts a renegotiation: network
packets arrive but when decrypted they don't actually contain any
application data - just handshake data.

All of this is very reliant on timing, how the client behaves (which you
cannot control) and how the network behaves. If this was working for you
before then it sounds like you've been lucky so far.

Matt


> 
> Thanks.
> 
> 
>>From: Matt Caswell <m...@openssl.org <mailto:m...@openssl.org>>
>>To: openssl-dev@openssl.org <mailto:openssl-dev@openssl.org>
>>Subject: Re: [openssl-dev] why TLSv1 need two tls1_enc to get
>>        decrypted data while TLSv1.1/TLSv1.2 need one in OpenSSL1.1.0f?
>>
>>
>>
>>On 27/09/17 15:44, Ma chunhui wrote:
>>> Hi,
>>>
>>> I met a problem when using OpenSSL 1.1.0f with protocol TLSv1.
>>> In brief, when using TLSv1, after the server side receives encrypted
>>> data and the function tls1_enc finishes, the decrypted data is not put
>>> in the result buffer; only after another tls1_enc is the decrypted
>>> data put in the result buffer. TLSv1.1/TLSv1.2 need only one tls1_enc.
>>>
>>>
>>> The way to reproduce it is quite simple:
>>>
>>> 1. some preparation: openssl req -x509 -newkey rsa:2048 -keyout key.pem
>>> -out cert.pem -days 365 -nodes
>>> 2. start the server: openssl s_server -key key.pem -cert cert.pem
>>> -accept 44330 -www
>>>     it's better to start the server with gdb, set breakpoints at
>>> tls1_enc, then continue to run.
>>> 3. openssl s_client -connect localhost:44330 -tls1 -debug
>>>
>>> After the client is started, the server side will stop at the
>>> breakpoint; do several "c" to make it continue to run and wait for the
>>> client's messages.
>>> Then at the client side, type a simple "hello" message and press Enter.
>>> The server side will stop at tls1_enc; the input data is the same as
>>> the encrypted data from the client side, but after EVP_Cipher and some
>>> padding removal, the decrypted data length is 0. After another
>>> tls1_enc, the decrypted data "hello" is put in the result buffer.
>>>
>>> But if the client uses -tls11 or -tls12, the decrypted "hello" is put
>>> in the result buffer after the first tls1_enc.
>>>
>>> Could anyone explain why the behaviour of decryption is different
>>> between TLSv1 and TLSv1.1/TLSv1.2?
>>
>>In TLSv1 and below the CBC IV is the previous record's last ciphertext
>>block. This can enable certain types of attack where an attacker knows
>>the IV that will be used for a record in advance. The problem was fixed
>>in the specification of TLSv1.1 and above where a new IV is used for
>>each record. As a countermeasure to this issue OpenSSL (in TLSv1) sends
>>an empty record before each "real" application data record to
>>effectively randomise the IV and make it unpredictable so that an
>>attacker cannot know it in advance.
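A toy illustration of that chaining (XOR in place of a real block cipher,
made-up byte values, block size 4; only the IV chaining is faithful to CBC):

```python
# Toy CBC chaining. XOR with a fixed key stands in for the block cipher;
# nothing here is cryptographically meaningful except the chaining.
BLOCK = 4
KEY = b'\x5a' * BLOCK

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(iv, blocks):
    out, prev = [], iv
    for blk in blocks:
        prev = xor(xor(blk, prev), KEY)   # C[i] = E(P[i] xor C[i-1])
        out.append(prev)
    return out

def cbc_decrypt(iv, blocks):
    out, prev = [], iv
    for ct in blocks:
        out.append(xor(xor(ct, KEY), prev))
        prev = ct
    return out

iv = b'\x00' * BLOCK
record1 = cbc_encrypt(iv, [b'AAAA'])

# TLSv1.0 style: the IV for the *next* record is the last ciphertext
# block of the previous record -- already visible to an eavesdropper.
next_iv = record1[-1]
record2 = cbc_encrypt(next_iv, [b'BBBB'])
assert cbc_decrypt(next_iv, record2) == [b'BBBB']

# An empty record encrypted in between (its MAC+padding block, not known
# in advance) replaces that predictable chaining value with a fresh one.
mac_and_padding = b'\x9c\x31\x07\xe2'   # made-up stand-in for the MAC block
empty_record = cbc_encrypt(next_iv, [mac_and_padding])
fresh_iv = empty_record[-1]
print(fresh_iv != next_iv)   # True: the data record no longer chains off
                             # the block the attacker predicted
```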
>>
>>Therefore a TLSv1 OpenSSL client will send two records of application
>>data where a TLSv1.1 or above OpenSSL client will just send one. This
>>results in tls1_enc being called twice on the server side.
>>
>>This behaviour can be switched off by using the option
>>SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS - but since this is considered
>>insecure that would probably be unwise.
>>
>>Matt
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev
