Re: SSLEngine weird behavior in 11+21?
I filed a bug for this improvement: https://bugs.openjdk.java.net/browse/JDK-8209840. We may fix it in an upcoming release.

Thanks,
Xuelei

On 8/22/2018 3:29 AM, Simone Bordet wrote:
> Hi,
>
> On 7/12/2018 1:17 PM, Simone Bordet wrote:
>> Currently:
>> - Server wraps 160, 6 and 907 in 3 wraps.
>> - Server *sends* the 1073 bytes in 1 TCP write.
>> - Client unwraps 160, then goes into NEED_WRAP.
>> - Client wraps 6, then goes again into NEED_UNWRAP to finish with the
>>   6 and 907 received from the server.
>>
>> What I'm suggesting:
>> - Server wraps 160, 6 and 907 in 3 wraps.
>> - Server *sends* the 1073 (160+6+907) bytes in 1 TCP write.
>> - Client unwraps 160, stays in NEED_UNWRAP, unwraps the 6, unwraps the
>>   907, then goes into NEED_WRAP.
>> - Client wraps 6 and 58.
>> - Client *sends* the 64 (6+58) bytes in 1 TCP write.
>>
>> The 6 bytes of the dummy change_cipher_spec are *sent* just after
>> receiving the ServerHello in both cases; they are just *generated* at
>> different times.
>>
>> By having all the unwrap()s consecutive and all the wrap()s
>> consecutive, you can enable a number of optimizations I described
>> earlier.
>>
>> Imagine a client that performs a TCP *send* every time the state moves
>> away from NEED_WRAP.
>> Currently it would:
>> 1. TCP send for ClientHello.
>> 2. Read 1073 bytes from the server.
>> 3. Unwrap 160, then NEED_WRAP.
>> 4. Generate 6, then NEED_UNWRAP.
>> 5. TCP send the 6 bytes.
>> 6. Unwrap 6 and 907, then NEED_WRAP.
>> 7. Generate 58, then FINISHED.
>> 8. TCP send the 58 bytes.
>>
>> You can see that the client issues 3 TCP sends.
>>
>> With what I am suggesting, the client would only issue 2 TCP sends,
>> which seems more in line with the efforts in TLS 1.3 to be more
>> efficient.
>>
>> It would just be a matter of *generating* those 6 bytes a bit later,
>> *after* having fully unwrapped the ServerHello.
>
> On Fri, Jul 13, 2018 at 3:45 PM Xuelei Fan wrote:
>> It's a good idea!
>
> I just built:
>
> $ hg id -i 76072a077ee1
>
> and I can still see the (very) inefficient behavior above.
>
> Are there plans to fix it?
>
> Thanks!
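The batching idea above (issue a TCP write only when the engine stops asking for wraps) can be sketched as a tiny policy helper. This is an illustrative sketch, not JDK API: only the `HandshakeStatus` enum is real; `FlushPolicy` and `shouldFlush` are hypothetical names.

```java
import javax.net.ssl.SSLEngineResult.HandshakeStatus;

// Hypothetical helper: a client that flushes its network buffer only when
// the engine's state moves away from NEED_WRAP batches consecutive wrap()
// outputs (e.g. the 6-byte CCS and the 58-byte Finished) into one TCP write.
public class FlushPolicy {
    // Returns true when the bytes wrapped so far should be sent:
    // we were wrapping, and the engine no longer asks us to wrap.
    public static boolean shouldFlush(HandshakeStatus previous, HandshakeStatus current) {
        return previous == HandshakeStatus.NEED_WRAP
                && current != HandshakeStatus.NEED_WRAP;
    }

    public static void main(String[] args) {
        // Consecutive wraps are accumulated, not flushed...
        System.out.println(shouldFlush(HandshakeStatus.NEED_WRAP, HandshakeStatus.NEED_WRAP));   // false
        // ...and a single TCP write happens when the state leaves NEED_WRAP.
        System.out.println(shouldFlush(HandshakeStatus.NEED_WRAP, HandshakeStatus.NEED_UNWRAP)); // true
    }
}
```

With the engine generating the change_cipher_spec only after fully unwrapping the server flight, this policy yields the 2 TCP sends described above instead of 3.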
Re: SSLEngine weird behavior in 11+21?
Hi,

> On 7/12/2018 1:17 PM, Simone Bordet wrote:
>> Currently:
>> - Server wraps 160, 6 and 907 in 3 wraps.
>> - Server *sends* the 1073 bytes in 1 TCP write.
>> - Client unwraps 160, then goes into NEED_WRAP.
>> - Client wraps 6, then goes again into NEED_UNWRAP to finish with the
>>   6 and 907 received from the server.
>>
>> What I'm suggesting:
>> - Server wraps 160, 6 and 907 in 3 wraps.
>> - Server *sends* the 1073 (160+6+907) bytes in 1 TCP write.
>> - Client unwraps 160, stays in NEED_UNWRAP, unwraps the 6, unwraps the
>>   907, then goes into NEED_WRAP.
>> - Client wraps 6 and 58.
>> - Client *sends* the 64 (6+58) bytes in 1 TCP write.
>>
>> The 6 bytes of the dummy change_cipher_spec are *sent* just after
>> receiving the ServerHello in both cases; they are just *generated* at
>> different times.
>>
>> By having all the unwrap()s consecutive and all the wrap()s
>> consecutive, you can enable a number of optimizations I described
>> earlier.
>>
>> Imagine a client that performs a TCP *send* every time the state moves
>> away from NEED_WRAP.
>> Currently it would:
>> 1. TCP send for ClientHello.
>> 2. Read 1073 bytes from the server.
>> 3. Unwrap 160, then NEED_WRAP.
>> 4. Generate 6, then NEED_UNWRAP.
>> 5. TCP send the 6 bytes.
>> 6. Unwrap 6 and 907, then NEED_WRAP.
>> 7. Generate 58, then FINISHED.
>> 8. TCP send the 58 bytes.
>>
>> You can see that the client issues 3 TCP sends.
>>
>> With what I am suggesting, the client would only issue 2 TCP sends,
>> which seems more in line with the efforts in TLS 1.3 to be more
>> efficient.
>>
>> It would just be a matter of *generating* those 6 bytes a bit later,
>> *after* having fully unwrapped the ServerHello.

On Fri, Jul 13, 2018 at 3:45 PM Xuelei Fan wrote:
> It's a good idea!

I just built:

$ hg id -i 76072a077ee1

and I can still see the (very) inefficient behavior above.

Are there plans to fix it?

Thanks!
--
Simone Bordet
---
Finally, no matter how good the architecture and design are, to deliver
bug-free software with optimal performance and reliability, the
implementation technique must be flawless.  (Victoria Livschitz)
Re: SSLEngine weird behavior in 11+21?
Hi,

On Tue, Jul 31, 2018 at 9:56 PM Xuelei Fan wrote:
> Hm, I see your point. Because of the half-close policy, I think the
> server could keep sending messages other than the close_notify alert.

Exactly.

> My concern with CLOSED+NOT_HANDSHAKING is that it does not indicate
> the following operations.

Sure it does. CLOSED while unwrapping tells the application that the
other side has closed. Now the application has a choice: it can wrap()
more data, or call closeOutbound(). From there, it follows the SSLEngine
instructions via the states.

Having to take into consideration not only the Result, not only the
HandshakeStatus, but also the bytes produced seems weird to me. I'd
prefer to rely only on the Result and the HandshakeStatus.

> Yes, but it is the desired behavior to me. For the TLS 1.3 half-close
> policy, it is not expected to generate a close_notify reply unless the
> server closes its outbound. So the server can keep sending application
> data after the read close, as its write side is open.

Exactly.

> The TLS 1.3 spec says:
>     "Each party MUST send a "close_notify" alert before closing its write
>     side of the connection, unless it has already sent some error alert.
>     This does not have any effect on its read side of the connection."
>
> And the SSLEngine.closeOutbound() spec says:
>     "Signals that no more outbound application data will be sent on
>     this SSLEngine."
>
> I think it is fine to keep the write side open while the read side is
> closed.
>
> In this update, for a TLS 1.2 connection, the duplex-close behavior is
> preserved: as the server receives the close_notify, it moves to
> NEED_WRAP; a close_notify will be delivered, and then the connection is
> duplex closed.

Exactly, and without the need for the receiver of the close_notify to
call closeOutbound(). That will be a difference between TLS 1.2 and
TLS 1.3 that applications should take into consideration.

> However, for a TLS 1.3 connection, if there is no call to
> server.closeOutbound(), the server write side stays open even though
> the read side has been closed.
>
> In this update, for TLS 1.3, I tried to support half-close for:
> 1. SSLEngine.closeIn/Outbound()
> 2. SSLSocket.shutdownIn/Output()
>
> and duplex-close for:
> 3. SSLSocket.close()
>
> Unfortunately, it does introduce a compatibility issue: an existing
> application may not close both inbound and outbound when the connection
> is meant to be duplex-closed. In order to mitigate the impact, a new
> system property is introduced, "jdk.tls.acknowledgeCloseNotify". The
> compatibility issue can be mitigated by closing both inbound and
> outbound explicitly, or by setting the property to "true". If the
> property value is "true", a corresponding close_notify alert will be
> sent when a close_notify alert is received, and therefore the
> connection will be duplex closed.
>
> Does it make sense to you?

Yes. Although I think applications will need to make adjustments for
TLS 1.3 anyway. That system property will smooth things, but it won't
guarantee that an application written for the TLS 1.2 SSLEngine will
work with the TLS 1.3 SSLEngine.

Thanks!
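For reference, the compatibility property discussed above can be set on the command line (`-Djdk.tls.acknowledgeCloseNotify=true`) or programmatically, as long as it is set before the first TLS connection is created. A minimal sketch (the class name is illustrative):

```java
// Restores TLS 1.2-style duplex close under TLS 1.3: with this property set,
// an engine that receives a close_notify sends a close_notify back.
public class AcknowledgeCloseNotify {
    public static void main(String[] args) {
        // Must happen before any SSLContext/SSLEngine is created.
        System.setProperty("jdk.tls.acknowledgeCloseNotify", "true");
        System.out.println(System.getProperty("jdk.tls.acknowledgeCloseNotify")); // true
    }
}
```

As noted above, this smooths migration but does not remove the need to review close handling for TLS 1.3.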
Re: SSLEngine weird behavior in 11+21?
On 7/31/2018 12:13 PM, Simone Bordet wrote:
> Hi,
>
> On Tue, Jul 31, 2018 at 5:10 PM Xuelei Fan wrote:
>> Hi,
>>
>> On 7/31/2018 6:43 AM, Xuelei Fan wrote:
>>> Current jdk11 tip with your patch:
>>> 1. client.closeOutbound() then goes into NEED_WRAP.
>>> 2. Client wraps 24 bytes, result is CLOSED, then goes into NEED_UNWRAP.
>>> 3. Server unwraps 24 bytes, result is CLOSED, then goes into NEED_WRAP.
>>> 4. Server wraps 0 bytes and stays in NEED_WRAP (?)
>>
>> In my testing (OpenJDK,
>> test/jdk/javax/net/ssl/TLSv1/TLSEnginesClosureTest.java), for #4, the
>> server could wrap the close_notify alert message for TLS 1.2 and prior
>> versions (CLOSED/NOT_HANDSHAKING); and wrap data for TLS 1.3
>> (OK/NOT_HANDSHAKING, half-close).
>>
>> Are you using TLS 1.3 with no data in your test case in #4? Because of
>> the half-close policy, it may be the expected behavior if no
>> application data can be delivered.
>
> The problem with 4 between TLS 1.2 and your latest patch is that before
> there was no need to call server.closeOutbound(): as the server
> received the close_notify from the client, it was moving to NEED_WRAP,
> and if wrap() was called it would generate the close_notify reply.
>
> With your latest patch, you _have_ to call server.closeOutbound(),
> otherwise 4 will always generate 0 bytes and spin loop.
>
> That is why I prefer 2 to go into CLOSED+NOT_HANDSHAKING. When it goes
> into CLOSED+NEED_UNWRAP, the application will follow the instructions
> of SSLEngine and attempt an unwrap() immediately, while instead it
> should stop wrapping/unwrapping and write the close_notify to the
> server.

Hm, I see your point. Because of the half-close policy, I think the
server could keep sending messages other than the close_notify alert.

My concern with CLOSED+NOT_HANDSHAKING is that it does not indicate the
following operations. Applications may call unwrap(), and run into
problems. It looks tolerable to use wrap() with no application data
(generating 0 bytes).

> For TLS 1.3:
>
> Trying to close engines from Client to Server
> Client wrapped 24 bytes.
> Client handshake status is NEED_UNWRAP Result is CLOSED
> Server unwrapping 24 bytes...
> Server handshake status is NEED_WRAP Result is CLOSED
> Server wrapped 16406 bytes.
> Server handshake status is NEED_WRAP Result is OK
>
> The above tells me that the server did not yet generate the
> close_notify reply, because it is still in NEED_WRAP.

Yes, but it is the desired behavior to me. For the TLS 1.3 half-close
policy, it is not expected to generate a close_notify reply unless the
server closes its outbound. So the server can keep sending application
data after the read close, as its write side is open.

The TLS 1.3 spec says:
    "Each party MUST send a "close_notify" alert before closing its write
    side of the connection, unless it has already sent some error alert.
    This does not have any effect on its read side of the connection."

And the SSLEngine.closeOutbound() spec says:
    "Signals that no more outbound application data will be sent on
    this SSLEngine."

I think it is fine to keep the write side open while the read side is
closed.

In this update, for a TLS 1.2 connection, the duplex-close behavior is
preserved: as the server receives the close_notify, it moves to
NEED_WRAP; a close_notify will be delivered, and then the connection is
duplex closed.

However, for a TLS 1.3 connection, if there is no call to
server.closeOutbound(), the server write side stays open even though the
read side has been closed.

In this update, for TLS 1.3, I tried to support half-close for:
1. SSLEngine.closeIn/Outbound()
2. SSLSocket.shutdownIn/Output()

and duplex-close for:
3. SSLSocket.close()

Unfortunately, it does introduce a compatibility issue: an existing
application may not close both inbound and outbound when the connection
is meant to be duplex-closed. In order to mitigate the impact, a new
system property is introduced, "jdk.tls.acknowledgeCloseNotify". The
compatibility issue can be mitigated by closing both inbound and
outbound explicitly, or by setting the property to "true". If the
property value is "true", a corresponding close_notify alert will be
sent when a close_notify alert is received, and therefore the connection
will be duplex closed.

Does it make sense to you?

Xuelei

> Just to repeat myself, I would prefer this:
>
> Client called closeOutbound() status is NEED_WRAP
> Client wrapped 24 bytes.
> Client handshake status is NOT_HANDSHAKING Result is CLOSED
> Server unwrapping 24 bytes...
> Server handshake status is NOT_HANDSHAKING Result is CLOSED
> Server wrapped 16406 bytes.
> Server handshake status is NOT_HANDSHAKING Result is OK
> Server called closeOutbound() status is NEED_WRAP
> Server wraps 24 bytes
> Server handshake status is NOT_HANDSHAKING Result is CLOSED
>
> Thanks!
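The behavioral difference discussed above (TLS 1.2 duplex close versus TLS 1.3 half-close) can be captured as a small decision helper. This is an illustrative sketch, not SSLEngine API: the action strings and the `tls13HalfClose` flag are hypothetical names for the two policies under discussion.

```java
import javax.net.ssl.SSLEngineResult.Status;

// Hypothetical decision helper: what an application may do after an unwrap(),
// under the close semantics discussed above.
public class CloseDecision {
    public static String afterUnwrap(Status status, boolean tls13HalfClose) {
        if (status != Status.CLOSED) {
            return "continue";                    // normal data flow
        }
        if (tls13HalfClose) {
            // Read side closed; write side stays open. Replying with a
            // close_notify requires an explicit closeOutbound() call.
            return "may-write-or-closeOutbound";
        }
        // TLS 1.2 duplex close: the engine moves to NEED_WRAP and the next
        // wrap() generates the close_notify reply by itself.
        return "wrap-close_notify-reply";
    }

    public static void main(String[] args) {
        System.out.println(afterUnwrap(Status.CLOSED, true));  // may-write-or-closeOutbound
        System.out.println(afterUnwrap(Status.CLOSED, false)); // wrap-close_notify-reply
    }
}
```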
Re: SSLEngine weird behavior in 11+21?
Hi,

On Tue, Jul 31, 2018 at 5:10 PM Xuelei Fan wrote:
> Hi,
>
> On 7/31/2018 6:43 AM, Xuelei Fan wrote:
>> Current jdk11 tip with your patch:
>> 1. client.closeOutbound() then goes into NEED_WRAP.
>> 2. Client wraps 24 bytes, result is CLOSED, then goes into NEED_UNWRAP.
>> 3. Server unwraps 24 bytes, result is CLOSED, then goes into NEED_WRAP.
>> 4. Server wraps 0 bytes and stays in NEED_WRAP (?)
>
> In my testing (OpenJDK,
> test/jdk/javax/net/ssl/TLSv1/TLSEnginesClosureTest.java), for #4, the
> server could wrap the close_notify alert message for TLS 1.2 and prior
> versions (CLOSED/NOT_HANDSHAKING); and wrap data for TLS 1.3
> (OK/NOT_HANDSHAKING, half-close).
>
> Are you using TLS 1.3 with no data in your test case in #4? Because of
> the half-close policy, it may be the expected behavior if no
> application data can be delivered.

The problem with 4 between TLS 1.2 and your latest patch is that before
there was no need to call server.closeOutbound(): as the server received
the close_notify from the client, it was moving to NEED_WRAP, and if
wrap() was called it would generate the close_notify reply.

With your latest patch, you _have_ to call server.closeOutbound(),
otherwise 4 will always generate 0 bytes and spin loop.

That is why I prefer 2 to go into CLOSED+NOT_HANDSHAKING. When it goes
into CLOSED+NEED_UNWRAP, the application will follow the instructions of
SSLEngine and attempt an unwrap() immediately, while instead it should
stop wrapping/unwrapping and write the close_notify to the server.

> For TLS 1.3:
>
> Trying to close engines from Client to Server
> Client wrapped 24 bytes.
> Client handshake status is NEED_UNWRAP Result is CLOSED
> Server unwrapping 24 bytes...
> Server handshake status is NEED_WRAP Result is CLOSED
> Server wrapped 16406 bytes.
> Server handshake status is NEED_WRAP Result is OK

The above tells me that the server did not yet generate the close_notify
reply, because it is still in NEED_WRAP.

Just to repeat myself, I would prefer this:

Client called closeOutbound() status is NEED_WRAP
Client wrapped 24 bytes.
Client handshake status is NOT_HANDSHAKING Result is CLOSED
Server unwrapping 24 bytes...
Server handshake status is NOT_HANDSHAKING Result is CLOSED
Server wrapped 16406 bytes.
Server handshake status is NOT_HANDSHAKING Result is OK
Server called closeOutbound() status is NEED_WRAP
Server wraps 24 bytes
Server handshake status is NOT_HANDSHAKING Result is CLOSED

Thanks!
Re: SSLEngine weird behavior in 11+21?
Hi,

On 7/31/2018 6:43 AM, Xuelei Fan wrote:
> Current jdk11 tip with your patch:
> 1. client.closeOutbound() then goes into NEED_WRAP.
> 2. Client wraps 24 bytes, result is CLOSED, then goes into NEED_UNWRAP.
> 3. Server unwraps 24 bytes, result is CLOSED, then goes into NEED_WRAP.
> 4. Server wraps 0 bytes and stays in NEED_WRAP (?)

In my testing (OpenJDK,
test/jdk/javax/net/ssl/TLSv1/TLSEnginesClosureTest.java), for #4, the
server could wrap the close_notify alert message for TLS 1.2 and prior
versions (CLOSED/NOT_HANDSHAKING); and wrap data for TLS 1.3
(OK/NOT_HANDSHAKING, half-close).

Are you using TLS 1.3 with no data in your test case in #4? Because of
the half-close policy, it may be the expected behavior if no application
data can be delivered.

Thanks,
Xuelei

For TLS 1.2:

Trying to close engines from Client to Server
Client wrapped 31 bytes.
Client handshake status is NEED_UNWRAP Result is CLOSED
Server unwrapping 31 bytes...
Server handshake status is NEED_WRAP Result is CLOSED
Server wrapped 31 bytes.
Server handshake status is NOT_HANDSHAKING Result is CLOSED
Client unwrapping 31 bytes...
Client handshake status is NOT_HANDSHAKING Result is CLOSED
Client wrapped 0 bytes.
Client handshake status is NOT_HANDSHAKING Result is CLOSED
Server unwrapping 0 bytes...
Server handshake status is NOT_HANDSHAKING Result is CLOSED
Successful closing from Client to Server

For TLS 1.3:

Trying to close engines from Client to Server
Client wrapped 24 bytes.
Client handshake status is NEED_UNWRAP Result is CLOSED
Server unwrapping 24 bytes...
Server handshake status is NEED_WRAP Result is CLOSED
Server wrapped 16406 bytes.
Server handshake status is NEED_WRAP Result is OK
Re: SSLEngine weird behavior in 11+21?
Hi,

Looks like we are on the same page now. For the data transportation
(wrap/unwrap), I agree we'd better use OK so that applications can make
the right decision.

Thanks,
Xuelei

On 7/31/2018 7:23 AM, Simone Bordet wrote:
> Hi,
>
> On Tue, Jul 31, 2018 at 4:13 PM Xuelei Fan wrote:
>> The Status.CLOSED specification is defined as "The operation just
>> closed this side of the SSLEngine, or the operation could not be
>> completed because it was already closed.". My reading of the spec is
>> that the CLOSED status means half-close. If the wrap() status is
>> CLOSED, it means write side close; and unwrap() CLOSED is for read
>> side close.
>>
>> I may prefer to:
>> 1. client.closeOutbound() then goes into NEED_WRAP.
>> 2. Client wraps 24 bytes, result is CLOSED, then goes into NEED_UNWRAP.
>> 3. Server unwraps 24 bytes, result is CLOSED, then goes into NEED_WRAP.
>> 4. server.closeOutbound() then goes into NEED_WRAP.
>> 5. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
>> 6. Client unwraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
>
> Yes, we agreed that at step 2, and especially step 3, the result must
> be CLOSED.
>
> Please consider the case where data is sent before the close_notify
> reply, and what would be good for you.
>
> Thanks!
Re: SSLEngine weird behavior in 11+21?
Hi,

On Tue, Jul 31, 2018 at 4:13 PM Xuelei Fan wrote:
> The Status.CLOSED specification is defined as "The operation just
> closed this side of the SSLEngine, or the operation could not be
> completed because it was already closed.". My reading of the spec is
> that the CLOSED status means half-close. If the wrap() status is
> CLOSED, it means write side close; and unwrap() CLOSED is for read
> side close.
>
> I may prefer to:
> 1. client.closeOutbound() then goes into NEED_WRAP.
> 2. Client wraps 24 bytes, result is CLOSED, then goes into NEED_UNWRAP.
> 3. Server unwraps 24 bytes, result is CLOSED, then goes into NEED_WRAP.
> 4. server.closeOutbound() then goes into NEED_WRAP.
> 5. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
> 6. Client unwraps 24 bytes, result is CLOSED, then goes into
> NOT_HANDSHAKING.

Yes, we agreed that at step 2, and especially step 3, the result must be
CLOSED.

Please consider the case where data is sent before the close_notify
reply, and what would be good for you.

Thanks!
Re: SSLEngine weird behavior in 11+21?
Hi,

On Tue, Jul 31, 2018 at 3:43 PM Xuelei Fan wrote:
>> 1. client.closeOutbound() then goes into NEED_WRAP.
>> 2. Client wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
> It might be a problem for applications to make the right decision if
> using the CLOSED status and the NOT_HANDSHAKING handshake status
> together.
>
> As the write side is closed, the client is now only able to receive
> data from the peer. I think NEED_UNWRAP may be more appropriate for
> application development.

I think the key point here is that SSLEngine tells the application what
needs to be done.

Because you implemented half-close, NEED_UNWRAP is not _strictly_ needed
here. By staying in NOT_HANDSHAKING the normal flow will happen: data to
read from the server, pass it to unwrap(), it may be data or
close_notify.

Let's imagine it is data; after unwrapping data, do you stay in
NEED_UNWRAP until the close_notify arrives?

I'm fine with either NOT_HANDSHAKING or NEED_UNWRAP, but I prefer
NOT_HANDSHAKING.

> However, the CLOSED status may be confusing as the client may still
> want to read something later.
>
> I may prefer to use OK/NEED_UNWRAP. What do you think?

I prefer CLOSED/NOT_HANDSHAKING, but OK/NEED_UNWRAP works too.

>> 3. Server unwraps 24 bytes, result is CLOSED, then goes into
>> NOT_HANDSHAKING.
> Same reason as above, I may prefer to use OK/NEED_WRAP.

Here I strongly disagree.

It is fundamental for the application to know that it received the
close_notify from the client, because the application needs to call
closeOutbound(), eventually. The CLOSED result is the only thing that
tells the application to call closeOutbound().

I think this is also good for backwards compatibility: with TLS 1.2 I
bet most applications were relying on the CLOSED status.

I prefer CLOSED/NOT_HANDSHAKING; CLOSED/NEED_WRAP is ok too.

>> 4. server.closeOutbound() then goes into NEED_WRAP.
>> 5. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
>> 6. Client unwraps 24 bytes, result is CLOSED, then goes into
>> NOT_HANDSHAKING.
> Agreed.
>
> If no objections, I will make the update:
>
> 1. client.closeOutbound() then goes into NEED_WRAP.
> 2. Client wraps 24 bytes, result is OK, then goes into NEED_UNWRAP.
> 3. Server unwraps 24 bytes, result is OK, then goes into NEED_WRAP.

Nope: CLOSED, then NEED_WRAP, see above.

> 4. server.closeOutbound() then goes into NEED_WRAP.
> 5. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
> 6. Client unwraps 24 bytes, result is CLOSED, then goes into
> NOT_HANDSHAKING.

Let's make another case with data, and what I prefer:

1. client.closeOutbound() then goes into NEED_WRAP.
2. Client wraps 24 bytes, result is OK, then goes into NOT_HANDSHAKING.
3. Server unwraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
4. Server wraps data bytes, result is OK, stays in NOT_HANDSHAKING.
5. Client unwraps data bytes, result is OK, stays in NOT_HANDSHAKING.
6. server.closeOutbound() then goes into NEED_WRAP.
7. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
8. Client unwraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.

I'm fine for 2 and 5 to be NEED_UNWRAP; I'm fine for 3 and 4 to be
NEED_WRAP.

Thanks!
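The "data sent before the close_notify reply" case above can be sketched as a small drain helper for the client's read side: after sending its own close_notify, the client keeps unwrapping, accepting OK results (data the server sent before closing) until a CLOSED result (the server's close_notify) ends the loop. This is an illustrative sketch: `DrainUntilClosed` and `drain` are hypothetical names, and the `Status[]` array stands in for the results of successive unwrap() calls.

```java
import javax.net.ssl.SSLEngineResult.Status;

// Hypothetical read-side drain during shutdown, matching the preferred
// sequence above (steps 3-8).
public class DrainUntilClosed {
    // Returns how many data records were seen before the peer's close_notify.
    public static int drain(Status[] unwrapResults) {
        int dataRecords = 0;
        for (Status s : unwrapResults) {
            if (s == Status.CLOSED) {
                return dataRecords;   // peer's close_notify received: done
            }
            if (s == Status.OK) {
                dataRecords++;        // application data before the close
            }
        }
        return dataRecords;
    }

    public static void main(String[] args) {
        // One data record (step 5), then the close_notify reply (step 8).
        System.out.println(drain(new Status[] { Status.OK, Status.CLOSED })); // 1
    }
}
```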
Re: SSLEngine weird behavior in 11+21?
The Status.CLOSED specification is defined as "The operation just closed
this side of the SSLEngine, or the operation could not be completed
because it was already closed.". My reading of the spec is that the
CLOSED status means half-close. If the wrap() status is CLOSED, it means
write side close; and unwrap() CLOSED is for read side close.

I may prefer to:
1. client.closeOutbound() then goes into NEED_WRAP.
2. Client wraps 24 bytes, result is CLOSED, then goes into NEED_UNWRAP.
3. Server unwraps 24 bytes, result is CLOSED, then goes into NEED_WRAP.
4. server.closeOutbound() then goes into NEED_WRAP.
5. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
6. Client unwraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.

Xuelei

On 7/31/2018 6:43 AM, Xuelei Fan wrote:
> If no objections, I will make the update:
> 1. client.closeOutbound() then goes into NEED_WRAP.
> 2. Client wraps 24 bytes, result is OK, then goes into NEED_UNWRAP.
> 3. Server unwraps 24 bytes, result is OK, then goes into NEED_WRAP.
> 4. server.closeOutbound() then goes into NEED_WRAP.
> 5. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
> 6. Client unwraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
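The half-close reading of Status.CLOSED above (wrap() CLOSED means the write side is closed, unwrap() CLOSED means the read side is closed) amounts to a one-line mapping. An illustrative sketch; `ClosedSide` and its method are hypothetical names, not SSLEngine API:

```java
import javax.net.ssl.SSLEngineResult.Status;

// Which side of the engine a CLOSED status refers to, under half-close.
public class ClosedSide {
    public static String closedSide(boolean fromWrap, Status status) {
        if (status != Status.CLOSED) {
            return "none";
        }
        // wrap() reports on the write side; unwrap() on the read side.
        return fromWrap ? "write" : "read";
    }

    public static void main(String[] args) {
        System.out.println(closedSide(true, Status.CLOSED));  // write
        System.out.println(closedSide(false, Status.CLOSED)); // read
    }
}
```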
Re: SSLEngine weird behavior in 11+21?
Hi Simone,

Thanks for the quick feedback. Please read more inline.

On 7/31/2018 3:10 AM, Simone Bordet wrote:
> Hi,
>
> On Tue, Jul 31, 2018 at 11:39 AM Simone Bordet wrote:
>> Hi,
>>
>> On Mon, Jul 30, 2018 at 8:08 PM Xuelei Fan wrote:
>>> Would you mind looking at the code I posted in the following thread:
>>> http://mail.openjdk.java.net/pipermail/security-dev/2018-July/017708.html
>>
>> JDK 11+21:
>> 1. client.closeOutbound() then goes into NEED_WRAP.
>> 2. Client wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING (?)
>> 3. Server unwraps 24 bytes, result is CLOSED, then goes into NEED_WRAP.
>> 4. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
>> 5. Client unwraps 0 bytes (?)
>>
>> Current jdk11 tip with your patch:
>> 1. client.closeOutbound() then goes into NEED_WRAP.
>> 2. Client wraps 24 bytes, result is CLOSED, then goes into NEED_UNWRAP.
>> 3. Server unwraps 24 bytes, result is CLOSED, then goes into NEED_WRAP.
>> 4. Server wraps 0 bytes and stays in NEED_WRAP (?)
>>
>> I don't think this is right.
>>
>> While I previously complained about step 2 going into NOT_HANDSHAKING,
>> if you now support full half-close, then I think this may be
>> reasonable, as the server may still send data and only later issue a
>> close_notify.
>> However, NEED_UNWRAP like it is now is also reasonable.
>>
>> At step 3, after the server unwraps the close_notify, the server
>> should either stay in NOT_HANDSHAKING *and* require a call to
>> closeOutbound() (which will move the state to NEED_WRAP), or it should
>> go into NEED_WRAP *and* produce the close_notify.
>> As it is now, SSLEngine tells the application to wrap(), but it wraps
>> 0 bytes, then tells the application to wrap() again, but still
>> produces 0 bytes, so it's going to be a tight spin loop - not good.
>
> For completeness, calling server.closeOutbound() at step 4 correctly
> moves SSLEngine into NEED_WRAP, and a subsequent wrap() produces the
> 24 bytes of the close_notify and result CLOSED, then goes into
> NOT_HANDSHAKING.
>
> I think the current behavior (with your patch) needs to be fixed.
>
> Since you implemented half-close, my preference would be this:
> 1. client.closeOutbound() then goes into NEED_WRAP.
> 2. Client wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.

It might be a problem for applications to make the right decision if
using the CLOSED status and the NOT_HANDSHAKING handshake status
together.

As the write side is closed, the client is now only able to receive data
from the peer. I think NEED_UNWRAP may be more appropriate for
application development. However, the CLOSED status may be confusing as
the client may still want to read something later.

I may prefer to use OK/NEED_UNWRAP. What do you think?

> 3. Server unwraps 24 bytes, result is CLOSED, then goes into
> NOT_HANDSHAKING.

Same reason as above, I may prefer to use OK/NEED_WRAP.

> 4. server.closeOutbound() then goes into NEED_WRAP.
> 5. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
> 6. Client unwraps 24 bytes, result is CLOSED, then goes into
> NOT_HANDSHAKING.

Agreed.

If no objections, I will make the update:
1. client.closeOutbound() then goes into NEED_WRAP.
2. Client wraps 24 bytes, result is OK, then goes into NEED_UNWRAP.
3. Server unwraps 24 bytes, result is OK, then goes into NEED_WRAP.
4. server.closeOutbound() then goes into NEED_WRAP.
5. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
6. Client unwraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.

Thanks,
Xuelei
Re: SSLEngine weird behavior in 11+21?
Hi, On Tue, Jul 31, 2018 at 11:39 AM Simone Bordet wrote: > > Hi, > On Mon, Jul 30, 2018 at 8:08 PM Xuelei Fan wrote: > > Would you mind look at the code I posted in the following thread: > > http://mail.openjdk.java.net/pipermail/security-dev/2018-July/017708.html > > JDK 11+21: > 1. client.closeOutbound() then goes into NEED_WRAP. > 2. Client wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING (?) > 3. Server unwraps 24 bytes, result is CLOSED, then goes into NEED_WRAP. > 4. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING. > 5. Client unwraps 0 bytes (?) > > Current jdk11 tip with your patch: > 1. client.closeOutbound() then goes into NEED_WRAP. > 2. Client wraps 24 bytes, result is CLOSED, then goes into NEED_UNWRAP. > 3. Server unwraps 24 bytes, result is CLOSED, then goes into NEED_WRAP. > 4. Server wraps 0 bytes and stays in NEED_WRAP (?) > > I don't think this is right. > > While I previously complained about step 2 going into NOT_HANDSHAKING, > if you now support full half close, then I think this may be > reasonable, as the server may still send data and only later issue a > close_notify. > However, NEED_UNWRAP like it is now is also reasonable. > > At step 3, after the server unwraps the close_notify, the server > should either stay in NOT_HANDSHAKING *and* require a call to > closeOutbound() (which will move the state to NEED_WRAP), or it should > go into NEED_WRAP *and* produce the close_notify. > As it is now, SSLEngine tells the application to wrap(), but it wraps > 0 bytes, but tells again the application to wrap(), but still produces > 0 bytes, so it's going to be a tight spin loop - not good. For completeness, calling server.closeOutbound() at step 4. correctly moves SSLEngine into NEED_WRAP and a subsequent wrap() produces the 24 bytes of the close_notify and result CLOSED, then goes into NOT_HANDSHAKING. I think the current behavior (with your patch) needs to be fixed. 
Since you implemented half-close, my preference would be this:

1. client.closeOutbound() then goes into NEED_WRAP.
2. Client wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
3. Server unwraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
4. server.closeOutbound() then goes into NEED_WRAP.
5. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
6. Client unwraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.

Thanks!

--
Simone Bordet
---
Finally, no matter how good the architecture and design are, to deliver bug-free software with optimal performance and reliability, the implementation technique must be flawless. Victoria Livschitz
Re: SSLEngine weird behavior in 11+21?
Hi,

On Mon, Jul 30, 2018 at 8:08 PM Xuelei Fan wrote:
> Would you mind looking at the code I posted in the following thread:
> http://mail.openjdk.java.net/pipermail/security-dev/2018-July/017708.html

JDK 11+21:
1. client.closeOutbound() then goes into NEED_WRAP.
2. Client wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING (?)
3. Server unwraps 24 bytes, result is CLOSED, then goes into NEED_WRAP.
4. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
5. Client unwraps 0 bytes (?)

Current jdk11 tip with your patch:
1. client.closeOutbound() then goes into NEED_WRAP.
2. Client wraps 24 bytes, result is CLOSED, then goes into NEED_UNWRAP.
3. Server unwraps 24 bytes, result is CLOSED, then goes into NEED_WRAP.
4. Server wraps 0 bytes and stays in NEED_WRAP (?)

I don't think this is right.

While I previously complained about step 2 going into NOT_HANDSHAKING, if you now support full half-close, then I think this may be reasonable, as the server may still send data and only later issue a close_notify. However, NEED_UNWRAP like it is now is also reasonable.

At step 3, after the server unwraps the close_notify, the server should either stay in NOT_HANDSHAKING *and* require a call to closeOutbound() (which will move the state to NEED_WRAP), or it should go into NEED_WRAP *and* produce the close_notify. As it is now, SSLEngine tells the application to wrap(), but it wraps 0 bytes, then again tells the application to wrap(), but still produces 0 bytes, so it's going to be a tight spin loop - not good.

Thanks!

--
Simone Bordet
---
Finally, no matter how good the architecture and design are, to deliver bug-free software with optimal performance and reliability, the implementation technique must be flawless. Victoria Livschitz
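The zero-byte wrap spin described here is also something an application-side driver can defend against while the engine behavior is being fixed. A minimal sketch, with an invented class name (this is not a JDK or Jetty API):

```java
import javax.net.ssl.SSLException;

// Illustrative guard: breaks the tight spin where the engine keeps
// reporting NEED_WRAP while each wrap() call produces 0 bytes.
final class ZeroWrapGuard {
    private final int limit;
    private int consecutiveZeroWraps;

    ZeroWrapGuard(int limit) {
        this.limit = limit;
    }

    // Call after every wrap() with SSLEngineResult.bytesProduced().
    void onWrapProduced(int bytesProduced) throws SSLException {
        if (bytesProduced > 0) {
            consecutiveZeroWraps = 0; // progress was made, reset the counter
        } else if (++consecutiveZeroWraps >= limit) {
            throw new SSLException("wrap() produced 0 bytes " + limit
                    + " times in a row while NEED_WRAP; aborting to avoid a spin loop");
        }
    }
}
```

A driver would create one guard per connection and feed it the `bytesProduced()` of every wrap result while the handshake status is NEED_WRAP.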
Re: SSLEngine weird behavior in 11+21?
Hi Simone,

Would you mind looking at the code I posted in the following thread:
http://mail.openjdk.java.net/pipermail/security-dev/2018-July/017708.html

I'm trying to address your concerns about close. I would appreciate it if you could comment by the end of this week.

Note that with this update, a complete TLS connection should close both inbound and outbound explicitly. However, existing applications may not do it this way. If the source code update is not available, please consider using the "jdk.tls.acknowledgeCloseNotify" System Property as a workaround.

Thanks, Xuelei

On 7/13/2018 9:19 AM, Simone Bordet wrote:
Hi,
On Fri, Jul 13, 2018 at 3:45 PM Xuelei Fan wrote:
Thank you very much, Simone. I find at least two improvements we can take. It's really good!
Great! Let us know when they land in an 11+X release and we'll try them out. Thanks!
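For reference, a sketch of applying the workaround property mentioned above (the class name here is invented). When set to "true", the JDK TLS stack answers a received close_notify with its own close_notify, restoring the duplex-close behavior that pre-TLS-1.3 applications expect. It must be set before the TLS code is first used:

```java
// Either on the command line:
//     java -Djdk.tls.acknowledgeCloseNotify=true MyApp
// or programmatically at startup, before any TLS classes are touched:
public class CloseNotifyWorkaround {
    public static void main(String[] args) {
        System.setProperty("jdk.tls.acknowledgeCloseNotify", "true");
        // prints the effective value of the workaround property
        System.out.println(System.getProperty("jdk.tls.acknowledgeCloseNotify"));
    }
}
```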
Re: SSLEngine weird behavior in 11+21?
Hi,

On Fri, Jul 13, 2018 at 3:45 PM Xuelei Fan wrote:
> Thank you very much, Simone. I find at least two improvements we can
> take. It's really good!

Great! Let us know when they land in an 11+X release and we'll try them out.

Thanks!

--
Simone Bordet
---
Finally, no matter how good the architecture and design are, to deliver bug-free software with optimal performance and reliability, the implementation technique must be flawless. Victoria Livschitz
Re: SSLEngine weird behavior in 11+21?
On 7/12/2018 1:17 PM, Simone Bordet wrote:

Hi,

On Thu, Jul 12, 2018 at 9:29 PM Xuelei Fan wrote:

Per the TLS 1.3 specification:
- The server sends a dummy change_cipher_spec record immediately after its first handshake message. This may either be after a ServerHello or a HelloRetryRequest.
and
- If not offering early data, the client sends a dummy change_cipher_spec record (see the third paragraph of Section 5.1) immediately before its second flight. This may either be before its second ClientHello or before its encrypted handshake flight. If offering early data, the record is placed immediately after the first ClientHello.

My read of the spec is that the dummy change_cipher_spec record is generated immediately after the ServerHello on the server side, and before the 2nd flight on the client side.

The spec is about *sending*, while I'm about *generating*.

Currently:
- Server wraps 160, 6 and 907 in 3 wraps.
- Server *sends* the 1073 bytes in 1 TCP write.
- Client unwraps 160, then goes into NEED_WRAP.
- Client wraps 6, then goes again into NEED_UNWRAP to finish with the 6 and 907 received by the server.

What I'm suggesting:
- Server wraps 160, 6 and 907 in 3 wraps.
- Server *sends* the 1073 (160+6+907) bytes in 1 TCP write.
- Client unwraps 160, stays in NEED_UNWRAP, unwraps the 6, unwraps the 907, then goes into NEED_WRAP.
- Client wraps 6 and 58.
- Client *sends* the 64 (6+58) bytes in 1 TCP write.

The 6 bytes of the dummy change_cipher_spec are *sent* just after receiving the ServerHello in both cases; they are just *generated* at different times. By having all the unwraps() consecutive and all the wraps() consecutive you can enable a number of optimizations I described earlier.

Imagine a client that would perform a TCP *send* every time the state moves away from NEED_WRAP. Currently it would:
1. TCP send for ClientHello
2. Read 1073 from server
3. Unwrap 160, then NEED_WRAP
4. Generate 6, then NEED_UNWRAP
5. TCP send the 6 bytes
6. Unwrap 6 and 907, then NEED_WRAP
7. Generate 58, then FINISHED
8. TCP send the 58 bytes

You can see that the client issues 3 TCP sends. With what I am suggesting, the client would only issue 2 TCP sends, which seems more in line with the efforts in TLS 1.3 to be more efficient. It would just be a matter of *generating* those 6 bytes a bit later, *after* having fully unwrapped the ServerHello.

It's a good idea!

We may consider an option to turn off the middlebox compatibility mode if it helps Jetty.

No need to.

But the implementation code and the application should be ready to accept the fact that a third party's implementation may be implemented in middlebox compatibility mode, and the change_cipher_spec record may still come in.

See above; it's not about processing the dummy change_cipher_spec, it's just about generating it a little later.

After #7 (FINISHED), if the client only sends application data, but never calls read again, it still works, right? The application doesn't have to read anything, I think.

Sure, it just can't take advantage of session resumption though. The client never knows that there are post-handshake messages to read because SSLEngine did not tell it. SSLEngine just said FINISHED and NOT_HANDSHAKING. However, I see your point. Handshake messages could come at any time, just that this one seems more part of the initial handshake, especially if it's about session resumption.

Yes. Applications using SSLEngine may want to monitor the incoming network data and call unwrap() when something arrives. Otherwise, if the SSLEngine client does not try to read any more, there is no chance to get the post-handshake message. I know some applications might never read again after the full handshake, but that may not work for some use cases (session resumption, etc.), even in the TLS 1.2 context. As SSLEngine knows nothing about the underlying transport, there is not much it can do here.

In #10, you said, "Client MUST unwrap ...". Do you mean that the client cannot send application data without reading/unwrapping something?

It can send data, but it won't be good to leave unprocessed bytes around. The application would not know that there is stuff to read from the network and to process as post-handshake messages. It may never read from the network again.

See above.

Thank you very much, Simone. I find at least two improvements we can take. It's really good!

Regards, Xuelei

Half-close means that the server may not send the close_notify when it receives the client close_notify. It's a very tricky policy. The client close_notify only means the client wants to close its writing side. The server may not send the close_notify if it doesn't want to close. If we have the client go into NEED_UNWRAP, there is a potential deadlock.

Good point.

Thanks!
Re: SSLEngine weird behavior in 11+21?
On 7/12/2018 2:39 PM, Simone Bordet wrote:

Hi,

On Thu, Jul 12, 2018 at 11:10 PM Xuelei Fan wrote:

In TLS 1.3 closure, receiving of the close_notify is not required: "... Both parties need not wait to receive a "close_notify" alert before closing their read side of the connection, though doing so would introduce the possibility of truncation."

TLS 1.2 also has a similar specification: "It is not required for the initiator of the close to wait for the responding close_notify alert before closing the read side of the connection."

My reading of the spec: the peer MUST send the close_notify, but it is not required to close the connection. If the socket cannot be closed until the close_notify is received, the local side may never have a chance to close the socket unless the peer agrees. That does not sound right to me. The compatibility impact could be serious, as previous applications work in a duplex-close manner:
1. Start and establish a TLS connection.
2. The client calls close(), and the server will respond with a close_notify in TLS 1.2; the TLS connection can be closed gracefully.

While for TLS 1.3, #2 does not work if the client's call to close() closes the writing side only and leaves the reading side open. I see the benefits of half-close. I'm open to hearing a better solution that takes the benefit of half-close and avoids the compatibility issue.

Even in TLS 1.2 you cannot be sure that the server will send the close_notify (e.g. bad server).

Yes, but normally the server MUST respond with a close_notify.

The half-close problem is present in TCP, WebSocket, etc., and it's typically solved with (idle) timeouts. The client sends the close_notify, then reads, hoping to see the close_notify from the server; if it does not see it, it closes the raw socket after a timeout. The timeout could account for receiving data (i.e. it is reset if application data arrives), or not.

I will think more about this proposal.

Thanks, Xuelei
Re: SSLEngine weird behavior in 11+21?
Hi, On Thu, Jul 12, 2018 at 11:10 PM Xuelei Fan wrote: > In TLS 1.3 closure, receiving of the close_notify is not required: >"... Both parties need not wait to receive a > "close_notify" alert before closing their read side of the > connection, though doing so would introduce the possibility of > truncation." > > TLS 1.2 also has similar specification: > "It is not required for the initiator > of the close to wait for the responding close_notify alert before > closing the read side of the connection." > > My reading the spec, the peer MUST send the close_notify, but it is not > required to close the connection. > > If the socket cannot be closed until close_notify get received, the > local may never have a chance to close the socket unless the peer > agrees. It does not sound like to me. The compatibility impact could > be serious as previous application work in a duplex-close manner. > 1. start and establish a TLS connection. > 2. client call close(), and the server will response with a close_notify > in TLS 1.2. the TLS connection can be closed gracefully. > > While for TLS 1.3, #2 does not work if the client calling of close() > closes the writing side only, and leave the reading side open. > > I see the benefits of half-close. I'm open to hear a better solution to > take the benefit of half-close and avoid the compatibility issue. Even in TLS 1.2 you cannot be sure that the server will send the close_notify (e.g. bad server). The half close problem is present in TCP, WebSocket, etc. and it's typically solved with (idle) timeouts. Client sends the close_notify, then reads, hoping to see the close_notify from the server; if it does not see it, it closes the raw socket after a timeout. The timeout could account for receiving data (i.e. it is reset if application data arrives), or not. Thanks! 
-- Simone Bordet --- Finally, no matter how good the architecture and design are, to deliver bug-free software with optimal performance and reliability, the implementation technique must be flawless. Victoria Livschitz
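The idle-timeout approach described above can be sketched as a small helper. This is a minimal illustration, not Jetty's or the JDK's actual code; the class and method names are invented, and a real implementation would block on a socket read instead of polling:

```java
import java.util.function.BooleanSupplier;

final class CloseNotifyWait {
    // After sending our close_notify, wait for the peer's close_notify with a
    // deadline instead of forever. `peerClosed` reports whether the peer's
    // close_notify has been unwrapped (e.g. engine.isInboundDone() in real code).
    // Returns true if the peer closed in time, false if we timed out and
    // should close the raw socket unilaterally.
    static boolean awaitPeerClose(BooleanSupplier peerClosed, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.nanoTime() + timeoutMillis * 1_000_000L;
        while (!peerClosed.getAsBoolean()) {
            if (System.nanoTime() >= deadline) {
                return false; // timed out: close the socket anyway
            }
            Thread.sleep(1); // placeholder for blocking on a read with a timeout
        }
        return true;
    }
}
```

Resetting the deadline whenever application data arrives would give the "timeout accounts for receiving data" variant mentioned in the message.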
Re: SSLEngine weird behavior in 11+21?
Hi, On Thu, Jul 12, 2018 at 11:15 PM Xuelei Fan wrote: > > On 7/12/2018 1:09 PM, Simone Bordet wrote: > > Hi, > > On Thu, Jul 12, 2018 at 10:05 PM David Lloyd wrote: > >> But more importantly, I think this > >> represents a chance to fix this long-standing bad behavior of the JDK. > > > > Indeed! > > > > Another thing I was about to complain is that > > SSLSocket.shutdownOutput() still throws: > > UnsupportedOperationException("The method shutdownOutput() is not > > supported in SSLSocket"); > > > This happens before JDK 11 B20, right? No, it still happens on 11+21. Cheers -- Simone Bordet http://cometd.org http://webtide.com Developer advice, training, services and support from the Jetty & CometD experts.
Re: SSLEngine weird behavior in 11+21?
On 7/12/2018 1:09 PM, Simone Bordet wrote: Hi, On Thu, Jul 12, 2018 at 10:05 PM David Lloyd wrote: But more importantly, I think this represents a chance to fix this long-standing bad behavior of the JDK. Indeed! Another thing I was about to complain is that SSLSocket.shutdownOutput() still throws: UnsupportedOperationException("The method shutdownOutput() is not supported in SSLSocket"); This happens before JDK 11 B20, right? Thanks, Xuelei
Re: SSLEngine weird behavior in 11+21?
On 7/12/2018 1:04 PM, David Lloyd wrote:

On Thu, Jul 12, 2018 at 2:30 PM Xuelei Fan wrote:
On 7/12/2018 10:47 AM, Simone Bordet wrote:
On Thu, Jul 12, 2018 at 4:34 PM Xuelei Fan wrote: [...]

In TLS 1.3, a half-close policy is used. The server may not respond with the close_notify for the client's close_notify request. See the "Closure Alerts" section for more details: https://tools.ietf.org/html/draft-ietf-tls-tls13-28#section-6.1

In that section I read: "Each party MUST send a "close_notify" alert before closing its write side of the connection, unless it has already sent some error alert." Given that, I expect that for close_notify, at step #2, the client goes into NEED_UNWRAP, as the server MUST send a close_notify too. [...]

Per the TLS 1.3 specification, the local side can request to close its writing side, but it cannot request to close the peer's writing side. It means it is not defined how to close the read side. It could lead to issues in practice. I had a question on the IETF TLS email list, but unfortunately I did not get an ideal solution: https://www.ietf.org/mail-archive/web/tls/current/msg25581.html

In order to counter the issues, the JDK will try to close the underlying socket if either side sends the close_notify message.

Please forgive my interjection - does this not explicitly violate the half-close policy? The socket should not be closed until close_notify is both received (which means that read-shutdown is allowed) and sent (after which write-shutdown may be performed), should it not?

In TLS 1.3 closure, receiving of the close_notify is not required: "... Both parties need not wait to receive a "close_notify" alert before closing their read side of the connection, though doing so would introduce the possibility of truncation."

TLS 1.2 also has a similar specification: "It is not required for the initiator of the close to wait for the responding close_notify alert before closing the read side of the connection."
My reading of the spec: the peer MUST send the close_notify, but it is not required to close the connection. If the socket cannot be closed until the close_notify is received, the local side may never have a chance to close the socket unless the peer agrees. That does not sound right to me. The compatibility impact could be serious, as previous applications work in a duplex-close manner:
1. Start and establish a TLS connection.
2. The client calls close(), and the server will respond with a close_notify in TLS 1.2; the TLS connection can be closed gracefully.

While for TLS 1.3, #2 does not work if the client's call to close() closes the writing side only and leaves the reading side open. I see the benefits of half-close. I'm open to hearing a better solution that takes the benefit of half-close and avoids the compatibility issue.

I raised a similar question on the TLS mailing list: https://www.ietf.org/mail-archive/web/tls/current/msg25581.html

A new close_request alert might be able to solve the issue. But I had no strong use case in practice for why we cannot close the underlying TCP socket if we don't want to read. There was no consensus in the discussion on the IETF TLS mailing list, and the request was deferred.

Thanks, Xuelei

At this point, my gut feeling is that having a single codebase handling TLS 1.2 and TLS 1.3 would be difficult.

Is this feeling due to the half-closed situation? Because it's my firm belief that a TLS 1.2 implementation which treats half-close in the manner specified by TLS 1.3 cannot be shown (from the perspective of an outside observer) to be in violation of the specification, and furthermore, _not_ doing so may represent a theoretical security risk.

Put another way, imagine that my endpoint receives data and then sends data. Now imagine that the receive side immediately receives close_notify after its data is processed but before the send (response) begins. This obviously will terminate the connection before the data is output.
Now imagine that the close_notify is completely contained in a single TCP packet, and that packet is (for whatever reason) delayed on the network until my send is complete. The OpenJDK TLS stack closes the socket at the right time (from my perspective), after both receive and send completed correctly. This delay has caused a material behavior change of my application, which is not a desirable effect. Two different behaviors, with the only input difference being one of timing. Imagine again now that the TLS 1.2 stack would not immediately send a close_notify, but instead would follow the half-close model as specified in TLS 1.3. As far as the *peer* is aware, my endpoint is conformant, because the peer cannot possibly differentiate between the TLS stack deferring shutdown to the application versus the close_notify message being delayed on the network. And, a bad actor can no longer affect the behavior of the application by delaying packets.
Re: SSLEngine weird behavior in 11+21?
Hi,

On Thu, Jul 12, 2018 at 9:29 PM Xuelei Fan wrote:
> Per the TLS 1.3 specification:
> - The server sends a dummy change_cipher_spec record immediately
>   after its first handshake message. This may either be after a
>   ServerHello or a HelloRetryRequest.
>
> and
> - If not offering early data, the client sends a dummy
>   change_cipher_spec record (see the third paragraph of Section 5.1)
>   immediately before its second flight. This may either be before
>   its second ClientHello or before its encrypted handshake flight.
>   If offering early data, the record is placed immediately after the
>   first ClientHello.
>
> My read of the spec is that the dummy change_cipher_spec record is
> generated immediately after the ServerHello in server side, and before
> the 2nd flight in client side.

The spec is about *sending*, while I'm about *generating*.

Currently:
- Server wraps 160, 6 and 907 in 3 wraps.
- Server *sends* the 1073 bytes in 1 TCP write.
- Client unwraps 160, then goes into NEED_WRAP.
- Client wraps 6, then goes again into NEED_UNWRAP to finish with the 6 and 907 received by the server.

What I'm suggesting:
- Server wraps 160, 6 and 907 in 3 wraps.
- Server *sends* the 1073 (160+6+907) bytes in 1 TCP write.
- Client unwraps 160, stays in NEED_UNWRAP, unwraps the 6, unwraps the 907, then goes into NEED_WRAP.
- Client wraps 6 and 58.
- Client *sends* the 64 (6+58) bytes in 1 TCP write.

The 6 bytes of the dummy change_cipher_spec are *sent* just after receiving the ServerHello in both cases; they are just *generated* at different times.

By having all the unwraps() consecutive and all the wraps() consecutive you can enable a number of optimizations I described earlier.

Imagine a client that would perform a TCP *send* every time the state moves away from NEED_WRAP. Currently it would:
1. TCP send for ClientHello
2. Read 1073 from server
3. Unwrap 160, then NEED_WRAP
4. Generate 6, then NEED_UNWRAP
5. TCP send the 6 bytes
6. Unwrap 6 and 907, then NEED_WRAP
7. Generate 58, then FINISHED
8. TCP send the 58 bytes

You can see that the client issues 3 TCP sends. With what I am suggesting, the client would only issue 2 TCP sends, which seems more in line with the efforts in TLS 1.3 to be more efficient. It would just be a matter of *generating* those 6 bytes a bit later, *after* having fully unwrapped the ServerHello.

> We may consider an option to turn off the middlebox compatibility mode if
> it helps Jetty.

No need to.

> But the implementation code and the application should
> be ready to accept the fact that a third party's implementation may be
> implemented in middlebox compatibility mode, and the change_cipher_spec
> record may still come in.

See above; it's not about processing the dummy change_cipher_spec, it's just about generating it a little later.

> After #7 (FINISHED), if the client only sends application data, but never
> calls read again, it still works, right? The application doesn't have to
> read anything, I think.

Sure, it just can't take advantage of session resumption though. The client never knows that there are post-handshake messages to read because SSLEngine did not tell it. SSLEngine just said FINISHED and NOT_HANDSHAKING. However, I see your point. Handshake messages could come at any time, just that this one seems more part of the initial handshake, especially if it's about session resumption.

> In #10, you said, "Client MUST unwrap ...". Do you mean that the client
> cannot send application data without reading/unwrapping something?

It can send data, but it won't be good to leave unprocessed bytes around. The application would not know that there is stuff to read from the network and to process as post-handshake messages. It may never read from the network again.

> Half-close means that the server may not send the close_notify when it
> receives the client close_notify. It's a very tricky policy. The client
> close_notify only means the client wants to close its writing side. The
> server may not send the close_notify if it doesn't want to close. If we
> have the client go into NEED_UNWRAP, there is a potential deadlock.

Good point.

Thanks!

--
Simone Bordet
---
Finally, no matter how good the architecture and design are, to deliver bug-free software with optimal performance and reliability, the implementation technique must be flawless. Victoria Livschitz
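The "flush when the state moves away from NEED_WRAP" policy described in this message can be captured in a tiny predicate. This is an illustrative sketch (the class name is invented), not Jetty's actual code:

```java
import javax.net.ssl.SSLEngineResult.HandshakeStatus;

// Coalesce consecutive wraps into a single TCP write by flushing only when
// the engine's handshake status transitions away from NEED_WRAP.
final class FlushPolicy {
    static boolean shouldFlush(HandshakeStatus previous, HandshakeStatus current) {
        return previous == HandshakeStatus.NEED_WRAP
                && current != HandshakeStatus.NEED_WRAP;
    }
}
```

Under the current flow the client's status leaves NEED_WRAP three times (steps 1, 5 and 8 above), hence three TCP sends; with the suggested reordering it would leave NEED_WRAP only twice.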
Re: SSLEngine weird behavior in 11+21?
Hi,

On Thu, Jul 12, 2018 at 10:05 PM David Lloyd wrote:
> But more importantly, I think this
> represents a chance to fix this long-standing bad behavior of the JDK.

Indeed!

Another thing I was about to complain about is that SSLSocket.shutdownOutput() still throws:
UnsupportedOperationException("The method shutdownOutput() is not supported in SSLSocket");

Thanks!

-- Simone Bordet http://cometd.org http://webtide.com Developer advice, training, services and support from the Jetty & CometD experts.
Re: SSLEngine weird behavior in 11+21?
On Thu, Jul 12, 2018 at 2:30 PM Xuelei Fan wrote: > On 7/12/2018 10:47 AM, Simone Bordet wrote: > > On Thu, Jul 12, 2018 at 4:34 PM Xuelei Fan wrote: [...] > >> In TLS 1.3, half-close policy is used. The server may not response with > >> the close_notify for client close_notify request. See the "Closure > >> Alerts" section for more details: > >> https://tools.ietf.org/html/draft-ietf-tls-tls13-28#section-6.1 > > > > In that section I read: > > "Each party MUST send a "close_notify" alert before closing its write > > side of the connection, unless it has already sent some error alert." > > > > Given that, I expect that for close_notify, at step #2, the client > > goes into NEED_UNWRAP, as the server MUST send a close_notify too. [...] > Per the TLS 1.3 specification, the local can request to close its > writing side, but it cannot to request to close the peer writing side. > It means it is not defined about how to close the read side. It could > lead to issues in practice. I had a question in the IETF TLS email > list. But unfortunately, I did not get a ideal solution: > https://www.ietf.org/mail-archive/web/tls/current/msg25581.html > > In order to countermeasure the issues, JDK will try to close the > underlying socket if either side sends the close_notify message. Please forgive my interjection - does this not explicitly violate the half-close policy? The socket should not be closed until close_notify is both received (which means that read-shutdown is allowed) and sent (after which write-shutdown may be performed), should it not? > > At this point, my gut feeling is that having a single codebase > > handling TLS 1.2 and TLS 1.3 would be difficult. Is this feeling due to the half-closed situation? 
Because it's my firm belief that a TLS 1.2 implementation which treats half-close in the manner specified by TLS 1.3 cannot be shown (from the perspective of an outside observer) to be in violation of the specification, and furthermore, _not_ doing so may represent a theoretical security risk.

Put another way, imagine that my endpoint receives data and then sends data. Now imagine that the receive side immediately receives close_notify after its data is processed but before the send (response) begins. This obviously will terminate the connection before the data is output. Now imagine that the close_notify is completely contained in a single TCP packet, and that packet is (for whatever reason) delayed on the network until my send is complete. The OpenJDK TLS stack closes the socket at the right time (from my perspective), after both receive and send completed correctly. This delay has caused a material behavior change of my application, which is not a desirable effect. Two different behaviors, with the only input difference being one of timing.

Imagine again now that the TLS 1.2 stack would not immediately send a close_notify, but instead would follow the half-close model as specified in TLS 1.3. As far as the *peer* is aware, my endpoint is conformant, because the peer cannot possibly differentiate between the TLS stack deferring shutdown to the application versus the close_notify message being delayed on the network. And, a bad actor can no longer affect the behavior of the application by delaying packets.

These reasons were, without a doubt, considerations when the rules were changed for TLS 1.3. Another important consideration was probably the observation that OpenSSL *already* appears to behave in this manner. OpenJDK's behavior has always IMO been clearly incorrect in this regard, despite the wording of the specification, and seeing this belief borne out by the changes in the TLS 1.3 spec is somewhat vindicating.
But more importantly, I think this represents a chance to fix this long-standing bad behavior of the JDK. -- - DML
Re: SSLEngine weird behavior in 11+21?
On 7/12/2018 10:47 AM, Simone Bordet wrote:

Hi,

On Thu, Jul 12, 2018 at 4:34 PM Xuelei Fan wrote:

In order to mitigate the compatibility impact, TLS 1.3 is implemented in the middlebox compatibility mode (see Section D.4, TLS 1.3 draft 28) in the JDK. The 6-byte message is the dummy change_cipher_spec record. For more details, please refer to the "Middlebox Compatibility Mode" section in TLS 1.3 draft 28: https://tools.ietf.org/html/draft-ietf-tls-tls13-28#appendix-D.4

Okay. But then those 6 bytes don't need to be generated immediately, right?

Per the TLS 1.3 specification:
- The server sends a dummy change_cipher_spec record immediately after its first handshake message. This may either be after a ServerHello or a HelloRetryRequest.
and
- If not offering early data, the client sends a dummy change_cipher_spec record (see the third paragraph of Section 5.1) immediately before its second flight. This may either be before its second ClientHello or before its encrypted handshake flight. If offering early data, the record is placed immediately after the first ClientHello.

My read of the spec is that the dummy change_cipher_spec record is generated immediately after the ServerHello on the server side, and before the 2nd flight on the client side.

The SSLEngine can finish unwrapping everything received by the server, and then wrap everything it needs to wrap. This will make it simpler for applications to allocate buffers, recycle them, and in general implement TLS via SSLEngine in a simpler way (and more efficiently, with fewer writes and/or data copies).

The calls to wrap() and unwrap() should follow the instructions of the handshake status (NEED_UNWRAP or NEED_WRAP). The sequence of wrap()/unwrap() is not expected to impact the usage. We may consider an option to turn off the middlebox compatibility mode if it helps Jetty. But the implementation code and the application should be ready to accept the fact that a third party's implementation may be implemented in middlebox compatibility mode, and the change_cipher_spec record may still come in.

In JDK 11, two post-handshake messages are supported: new session ticket and key/iv update. The 72 bytes in #9 and #10 are for the new session ticket. In the JDK, the new session ticket post-handshake message is delivered immediately after the main handshake on the server side. The client side should be ready to accept the message at any time after the main handshake completes. So, in #7, the FINISHED status means the main handshake is complete. In #10, the FINISHED status means that the post-handshake message got processed.

I am really concerned about the compatibility impact. Did you run into problems with the new handshake message flow?

Yes, the Jetty CI integration went from 0 test failures in JDK 11+18 (and all previous JDKs up to 8) to 500+ failures, all due to TLS 1.3. We will need to update our usage of SSLEngine heavily and disable TLS 1.3 in the meantime :(

Please feel free to send us the compatibility problems on this email alias.

For example, the FINISHED state was only ever happening once before. Based on that, we could detect how many times a client was requesting renegotiations, and impose a limit on that (otherwise it would be an attack on the server). With the double FINISHED state in #7 and #10 we are not able to tell the difference based on the SSLEngine state alone. We would need to put "if (tls > 1.2) cannot_be_renegotiation_do_something_else()".

As the logic is related to TLS specification details, it might need some adjustment in Jetty, as renegotiation is gone in TLS 1.3.

What concerns me is that before, SSLEngine was emitting instructions on what it needed an application to do in order to proceed with the TLS handshake. You would call beginHandshake() on a client, and SSLEngine.getHandshakeStatus() would return NEED_WRAP, and the application would know that it needed to call wrap().

It still works, I think. Normally, call beginHandshake(), and then check the handshake status for further operations. If the status is NEED_WRAP, call wrap(); if the status is NEED_UNWRAP, call unwrap(); etc.

Now at step 7 it returns FINISHED, and SSLEngine.getHandshakeStatus() returns NOT_HANDSHAKING, so in principle there is nothing more to do.

It is still the same for TLS 1.3, right? The client can send application data now, I think.

In a custom application protocol where a client opens a connection expecting to only send data to the server and never receive data from the server, the client would never expect to read more bytes from the server for those post-handshake frames, because the SSLEngine does not tell it. Having to put conditional code like "if (tls > 1.2) read_more_from_server_even_if_finished()" seems ugly.

After #7 (FINISHED), if the client only sends application data, but never calls read again, it still works, right? The application doesn't have to read anything,
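The status-driven loop described above (check the handshake status; on NEED_WRAP call wrap(), on NEED_UNWRAP call unwrap(), and so on) can be simulated without a live connection. In this sketch a scripted queue of statuses stands in for a real SSLEngine, and the recorded strings mark where a real driver would call wrap(), unwrap(), or run delegated tasks; all names here are illustrative:

```java
import javax.net.ssl.SSLEngineResult.HandshakeStatus;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Simulation of an SSLEngine handshake dispatch loop: consumes a scripted
// sequence of handshake statuses and records which action a real driver
// would take for each one.
final class HandshakeDriver {
    static List<String> drive(Queue<HandshakeStatus> scripted) {
        List<String> actions = new ArrayList<>();
        HandshakeStatus status;
        while ((status = scripted.poll()) != null) {
            switch (status) {
                case NEED_WRAP:   actions.add("wrap");   break; // engine.wrap(...)
                case NEED_UNWRAP: actions.add("unwrap"); break; // engine.unwrap(...)
                case NEED_TASK:   actions.add("task");   break; // run delegated tasks
                default:          // FINISHED or NOT_HANDSHAKING
                    actions.add("done");
                    return actions; // handshake over; application data may flow
            }
        }
        return actions;
    }
}
```

With a real engine, `status` would come from `SSLEngineResult.getHandshakeStatus()` after each operation instead of a scripted queue.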
Re: SSLEngine weird behavior in 11+21?
Hi,

On Thu, Jul 12, 2018 at 4:34 PM Xuelei Fan wrote:
> In order to mitigate the compatibility impact, TLS 1.3 is
> implemented in middlebox compatibility mode (see Section D.4, TLS
> 1.3 draft 28) in the JDK. The 6-byte message is the dummy
> change_cipher_spec record. For more details, please refer to the "Middlebox
> Compatibility Mode" section in TLS 1.3 draft 28:
> https://tools.ietf.org/html/draft-ietf-tls-tls13-28#appendix-D.4

Okay. But then those 6 bytes don't need to be generated immediately, right? The SSLEngine can finish unwrapping everything received from the server, and then wrap everything it needs to wrap. This will make it simpler for applications to allocate buffers, recycle them, and in general implement TLS via SSLEngine in a simpler way (and more efficiently, with fewer writes and/or data copies).

> In JDK 11, two post-handshake messages are supported: new session ticket
> and key/iv update. The 72 bytes in #9 and #10 are for the new session
> ticket. In the JDK, the new session ticket post-handshake message is
> delivered immediately after the main handshake on the server side, while
> the client side should be ready to accept the message at any time
> after the main handshake completes. So, in #7, the FINISHED status means
> the main handshake is complete, while in #10, the FINISHED status means
> that the post-handshake message has been processed.
>
> I am really concerned about the compatibility impact. Did you run into
> problems with the new handshake message flow?

Yes, the Jetty CI integration went from 0 test failures in JDK 11+18 (and all previous JDKs up to 8) to 500+ failures, all due to TLS 1.3. We will need to update our usage of SSLEngine heavily and disable TLS 1.3 in the meantime :(

For example, the FINISHED state was only ever happening once before. Based on that, we could detect how many times a client was requesting renegotiation, and impose a limit on that (otherwise it would be an attack on the server).
With the double FINISHED state in #7 and #10 we are not able to tell the difference based on the SSLEngine state alone. We would need to put "if (tls > 1.2) cannot_be_renegotiation_do_something_else()".

What concerns me is that before, SSLEngine was emitting instructions on what it needed an application to do in order to proceed with the TLS handshake. You would call beginHandshake() on a client, and SSLEngine.getHandshakeStatus() would return NEED_WRAP, and the application would know that it needed to call wrap(). Now at step 7 it returns FINISHED, and SSLEngine.getHandshakeStatus() returns NOT_HANDSHAKING, so in principle there is nothing more to do.

In a custom application protocol where a client opens a connection expecting to only send data to the server and never receive data from the server, the client would never expect to read more bytes from the server for those post-handshake frames, because the SSLEngine does not tell it. Having to put conditional code like "if (tls > 1.2) read_more_from_server_even_if_finished()" seems ugly.

Would returning OK instead of FINISHED at step #10 be doable? After all, encrypting an empty string was always possible, and there really is no difference for SSLEngine users: unwrapping those 72 bytes for the session ticket or the empty string would produce in both cases 0 decrypted bytes (it only changes the internal state of the SSLEngine with the session ticket).

> In TLS 1.3, a half-close policy is used. The server may not respond with
> the close_notify for the client close_notify request. See the "Closure
> Alerts" section for more details:
> https://tools.ietf.org/html/draft-ietf-tls-tls13-28#section-6.1

In that section I read: "Each party MUST send a "close_notify" alert before closing its write side of the connection, unless it has already sent some error alert."

Given that, I expect that for close_notify, at step #2, the client goes into NEED_UNWRAP, as the server MUST send a close_notify too.
Or at least that the server close_notify is consumed by the client at step #5, although this would again break the "SSLEngine state machine telling applications what to do".

> It is a little bit complicated when moving from the duplex-close to the
> half-close policy. In order to mitigate the compatibility impact, in the
> JDK implementation, if the local side sends the close_notify, we choose
> to close the underlying socket as well if needed. But, if the peer sends
> the close_notify, the unwrap() should be able to consume the bytes. It
> looks like an implementation bug to me.

Okay.

> Would you please let us know if there are compatibility issues with the
> new half-close policy?

We've been massively struck by test failures, so I cannot tell which tests had compatibility issues due to the close_notify differences. I suspect that most failures are due to the change of behavior at step #4.

At this point, my gut feeling is that having a single codebase handling TLS 1.2 and TLS 1.3 would be difficult. I expect a few "if (tls > 1.2)" spread out in the code
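The renegotiation-counting scheme that the double FINISHED breaks can be sketched with exactly the "if (tls > 1.2)" conditional being complained about. RenegotiationGuard is a hypothetical name (not a Jetty or JDK class), and the logic assumes that on a TLS <= 1.2 connection every FINISHED result after the first one is a peer-requested renegotiation:

```java
public class RenegotiationGuard {
    private int finishedCount;
    private final int maxRenegotiations;

    RenegotiationGuard(int maxRenegotiations) {
        this.maxRenegotiations = maxRenegotiations;
    }

    // Called whenever a wrap()/unwrap() result reports FINISHED.
    // Returns false when the peer exceeds the renegotiation budget.
    boolean onFinished(String protocol) {
        finishedCount++;
        if ("TLSv1.3".equals(protocol)) {
            // No renegotiation exists in TLS 1.3: a second FINISHED comes
            // from a post-handshake message (e.g. the new session ticket),
            // so it must always be allowed.
            return true;
        }
        // TLS <= 1.2: every FINISHED after the first is a renegotiation.
        return finishedCount - 1 <= maxRenegotiations;
    }

    public static void main(String[] args) {
        RenegotiationGuard tls12 = new RenegotiationGuard(0); // allow none
        System.out.println(tls12.onFinished("TLSv1.2")); // true: initial handshake
        System.out.println(tls12.onFinished("TLSv1.2")); // false: renegotiation rejected

        RenegotiationGuard tls13 = new RenegotiationGuard(0);
        System.out.println(tls13.onFinished("TLSv1.3")); // true: step #7
        System.out.println(tls13.onFinished("TLSv1.3")); // true: step #10
    }
}
```

This shows why the state alone no longer suffices: the guard has to consult the negotiated protocol, not just the SSLEngine status.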
Re: SSLEngine weird behavior in 11+21?
On 7/12/2018 7:33 AM, Xuelei Fan wrote:

In addition to what I reported above, I would like to report also the weird behavior during the close handshake (i.e. when one side decides to close the connection).

1. client.closeOutbound() then goes into NEED_WRAP.
2. Client wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING (?)
3. Server unwraps 24 bytes, result is CLOSED, then goes into NEED_WRAP.
4. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
5. Client unwraps 0 bytes (?)

I think at step 2 the client should go into NEED_UNWRAP to read (at step 5) the server response to the close_notify.

In TLS 1.3, a half-close policy is used. The server may not respond with the close_notify for the client close_notify request. See the "Closure Alerts" section for more details: https://tools.ietf.org/html/draft-ietf-tls-tls13-28#section-6.1

It is a little bit complicated when moving from the duplex-close to the half-close policy. In order to mitigate the compatibility impact, in the JDK implementation, if the local side sends the close_notify, we choose to close the underlying socket as well if needed. But, if the peer sends the close_notify, the unwrap() should be able to consume the bytes. It looks like an implementation bug to me. The issue is tracked in JBS: https://bugs.openjdk.java.net/browse/JDK-8207177

Xuelei
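The duplex-close versus half-close difference described here boils down to one decision for the application: after sending its own close_notify, does it still need to wait for the peer's? A minimal sketch, with a hypothetical helper name:

```java
public class ClosePolicy {
    // Hypothetical helper: after sending our close_notify, must the
    // application keep reading for the peer's close_notify before
    // treating the connection as fully closed?
    static boolean mustAwaitPeerCloseNotify(String protocol) {
        // TLS <= 1.2: duplex close, the peer answers with close_notify.
        // TLS 1.3: half-close, the peer MAY keep its write side open and
        // is not required to answer our close_notify at all.
        return !"TLSv1.3".equals(protocol);
    }

    public static void main(String[] args) {
        System.out.println(mustAwaitPeerCloseNotify("TLSv1.2")); // true
        System.out.println(mustAwaitPeerCloseNotify("TLSv1.3")); // false
    }
}
```

Even under half-close, as the reply notes, unwrap() must still be able to consume a peer close_notify if one does arrive.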
Re: SSLEngine weird behavior in 11+21?
Hi Simone,

Thank you very much for the debug logs. Read more inline, please.

On 7/12/2018 2:03 AM, Simone Bordet wrote:

Hi,

On Thu, Jul 12, 2018 at 12:06 AM Simone Bordet wrote:

Hi,

I can see this (weird) behavior in SSLEngine for TLS 1.3 in JDK 11+21. It's a simple new connection (no resumption) that performs a TLS 1.3 handshake. The byte counts are those that I get; they may be different for others depending on certificate size, etc.

1. Client wraps 394 bytes then goes into NEED_UNWRAP.
2. Server unwraps 394 bytes then goes into NEED_TASK and then into NEED_WRAP.
3. Server wraps (in 3 wraps) 160, 6 and 907 bytes then goes into NEED_UNWRAP.
4. Client unwraps 160 bytes then goes into NEED_TASK and then into NEED_WRAP (?)
5. Client wraps 6 bytes, then goes into NEED_UNWRAP.
6. Client unwraps (in 2 unwraps) 6 and 907 bytes, then goes into NEED_TASK and then into NEED_WRAP.
7. Client wraps 58 bytes and goes into FINISHED (?)
8. Server unwraps (in 2 unwraps) 6 and 58 bytes then goes into NEED_WRAP.
9. Server wraps 72 bytes then goes into FINISHED.
10. Client MUST unwrap those 72 bytes going again into FINISHED (which already happened at 7).

There are 2 things that I find weird:

A) That at 4, the client goes into NEED_WRAP, even if it has not finished unwrapping what the server sent. Apparently, it only goes into NEED_WRAP to generate a CHANGE_CIPHER_SPEC (I am guessing from the number of bytes generated), but then goes back into NEED_UNWRAP to finish reading what the server sent. This is also not optimal as it forces applications to do something with those 6 bytes: either put them aside (additional data structures that may not be needed) or - worse - write them to the server, causing an additional write (after all the effort TLS 1.3 put in to have a 1 RTT handshake).
I think that step 4 should go into NEED_UNWRAP, and that steps 5 and 6 should be switched, so that the client would unwrap the 160, 6 and 907 sent by the server, and only afterwards wrap the 6 and the 58. To be clear, the current behavior is (u = unwrap, w = wrap): u160, w6, u6, u907, w58. I think it should be: u160, u6, u907, w6, w58.

Is there any reason that the 6 bytes need to be generated in between the processing of the frames sent by the server?

In order to mitigate the compatibility impact, TLS 1.3 is implemented in middlebox compatibility mode (see Section D.4, TLS 1.3 draft 28) in the JDK. The 6-byte message is the dummy change_cipher_spec record. For more details, please refer to the "Middlebox Compatibility Mode" section in TLS 1.3 draft 28: https://tools.ietf.org/html/draft-ietf-tls-tls13-28#appendix-D.4

B) That at 7 the client goes into FINISHED, but it is not finished really: in fact it needs to perform step 10, but there is no indication from the SSLEngine that it must do so. Currently, step 7 says the client is finished and there is no clue that it must unwrap those last 72 bytes. I think step 7 should go into NEED_UNWRAP instead, and only at 10 go into FINISHED.

TLS 1.3 allows handshake messages to be delivered after the main handshake. The concept is called post-handshake messages. Please refer to the "Post-Handshake Messages" section for more details: https://tools.ietf.org/html/draft-ietf-tls-tls13-28#section-4.6

In JDK 11, two post-handshake messages are supported: new session ticket and key/iv update. The 72 bytes in #9 and #10 are for the new session ticket. In the JDK, the new session ticket post-handshake message is delivered immediately after the main handshake on the server side, while the client side should be ready to accept the message at any time after the main handshake completes. So, in #7, the FINISHED status means the main handshake is complete, while in #10, the FINISHED status means that the post-handshake message has been processed.
I am really concerned about the compatibility impact. Did you run into problems with the new handshake message flow?

In addition to what I reported above, I would like to report also the weird behavior during the close handshake (i.e. when one side decides to close the connection).

1. client.closeOutbound() then goes into NEED_WRAP.
2. Client wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING (?)
3. Server unwraps 24 bytes, result is CLOSED, then goes into NEED_WRAP.
4. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
5. Client unwraps 0 bytes (?)

I think at step 2 the client should go into NEED_UNWRAP to read (at step 5) the server response to the close_notify.

In TLS 1.3, a half-close policy is used. The server may not respond with the close_notify for the client close_notify request. See the "Closure Alerts" section for more details: https://tools.ietf.org/html/draft-ietf-tls-tls13-28#section-6.1

It is a little bit complicated when moving from the duplex-close to the half-close policy. In order to mitigate the compatibility impact, in the JDK implementation, if the local side sends the close_notify, we choose to close the
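The two meanings of FINISHED explained in this reply (main handshake complete at #7, post-handshake message processed at #10) can be disambiguated by the application with a small amount of state. This is an illustrative sketch with a made-up class name, not a JDK facility:

```java
public class FinishedClassifier {
    private boolean mainHandshakeDone;

    // Classify a FINISHED result per the explanation above: the first one
    // on a connection marks the main handshake; under TLS 1.3 any later
    // one marks a processed post-handshake message, never a renegotiation.
    String classify(String protocol) {
        if (!mainHandshakeDone) {
            mainHandshakeDone = true;
            return "main-handshake-complete";   // step #7 in the trace
        }
        return "TLSv1.3".equals(protocol)
            ? "post-handshake-message"          // step #10: the session ticket
            : "renegotiation";                  // only possible in TLS <= 1.2
    }

    public static void main(String[] args) {
        FinishedClassifier c = new FinishedClassifier();
        System.out.println(c.classify("TLSv1.3")); // main-handshake-complete
        System.out.println(c.classify("TLSv1.3")); // post-handshake-message
    }
}
```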
Re: SSLEngine weird behavior in 11+21?
Hi,

On Thu, Jul 12, 2018 at 12:06 AM Simone Bordet wrote:
>
> Hi,
>
> I can see this (weird) behavior in SSLEngine for TLS 1.3 in JDK 11+21.
> It's a simple new connection (no resumption) that performs a TLS 1.3
> handshake.
> The byte counts are those that I get; they may be different for
> others depending on certificate size, etc.
>
> 1. Client wraps 394 bytes then goes into NEED_UNWRAP.
> 2. Server unwraps 394 bytes then goes into NEED_TASK and then into NEED_WRAP.
> 3. Server wraps (in 3 wraps) 160, 6 and 907 bytes then goes into NEED_UNWRAP.
> 4. Client unwraps 160 bytes then goes into NEED_TASK and then into NEED_WRAP (?)
> 5. Client wraps 6 bytes, then goes into NEED_UNWRAP.
> 6. Client unwraps (in 2 unwraps) 6 and 907 bytes, then goes into
> NEED_TASK and then into NEED_WRAP.
> 7. Client wraps 58 bytes and goes into FINISHED (?)
> 8. Server unwraps (in 2 unwraps) 6 and 58 bytes then goes into NEED_WRAP.
> 9. Server wraps 72 bytes then goes into FINISHED.
> 10. Client MUST unwrap those 72 bytes going again into FINISHED (which
> already happened at 7).
>
> There are 2 things that I find weird:
>
> A) That at 4, the client goes into NEED_WRAP, even if it has not
> finished unwrapping what the server sent. Apparently, it only goes into
> NEED_WRAP to generate a CHANGE_CIPHER_SPEC (I am guessing from the
> number of bytes generated), but then goes back into NEED_UNWRAP to
> finish reading what the server sent. This is also not optimal as it
> forces applications to do something with those 6 bytes: either put
> them aside (additional data structures that may not be needed) or -
> worse - write them to the server, causing an additional write (after
> all the effort TLS 1.3 put in to have a 1 RTT handshake).
> I think that step 4 should go into NEED_UNWRAP, and that steps 5 and
> 6 should be switched, so that the client would unwrap the 160, 6
> and 907 sent by the server, and only afterwards wrap the 6 and the 58.
> To be clear, the current behavior is (u = unwrap, w = wrap): u160, w6,
> u6, u907, w58.
> I think it should be: u160, u6, u907, w6, w58.
>
> Is there any reason that the 6 bytes need to be generated in between
> the processing of the frames sent by the server?
>
> B) That at 7 the client goes into FINISHED, but it is not finished
> really: in fact it needs to perform step 10, but there is no
> indication from the SSLEngine that it must do so.
> Currently, step 7 says the client is finished and there is no clue
> that it must unwrap those last 72 bytes.
> I think step 7 should go into NEED_UNWRAP instead, and only at 10 go
> into FINISHED.

In addition to what I reported above, I would like to report also the weird behavior during the close handshake (i.e. when one side decides to close the connection).

1. client.closeOutbound() then goes into NEED_WRAP.
2. Client wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING (?)
3. Server unwraps 24 bytes, result is CLOSED, then goes into NEED_WRAP.
4. Server wraps 24 bytes, result is CLOSED, then goes into NOT_HANDSHAKING.
5. Client unwraps 0 bytes (?)

I think at step 2 the client should go into NEED_UNWRAP to read (at step 5) the server response to the close_notify. Instead, at step 5 the client unwraps 0 bytes, so we are left with those 24 bytes from the server that applications need to discard.

Also, I am not sure that the wrap result at steps 2 and 3 should be CLOSED; perhaps OK is better? The server is actually closed at step 4, and the client at step 5. However, this is a minor issue.

Attached the debug logs as you requested.

Thanks!

--
Simone Bordet
---
Finally, no matter how good the architecture and design are, to deliver bug-free software with optimal performance and reliability, the implementation technique must be flawless.
Victoria Livschitz

Attachment: sslengine.log.gz (application/gzip)
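The cost of the current u160, w6, u6, u907, w58 ordering versus the proposed u160, u6, u907, w6, w58 can be made concrete with a small simulation: assume (as discussed in this thread) a client that flushes a TCP send every time the state moves away from NEED_WRAP. Byte counts are omitted because only the ordering matters; the method name is illustrative:

```java
import java.util.List;

public class SendCounter {
    // Count TCP sends for a sequence of "u" (unwrap) / "w" (wrap) ops,
    // under the policy "flush whenever we leave NEED_WRAP".
    static int tcpSends(List<String> ops) {
        int sends = 0;
        boolean wrapping = false;
        for (String op : ops) {
            if (op.equals("w")) {
                wrapping = true;       // buffer the wrapped bytes
            } else if (wrapping) {
                sends++;               // leaving NEED_WRAP: flush buffered bytes
                wrapping = false;
            }
        }
        if (wrapping) sends++;         // final flush when the handshake finishes
        return sends;
    }

    public static void main(String[] args) {
        // Current JDK flow after ClientHello: u160, w6, u6, u907, w58
        int current = tcpSends(List.of("u", "w", "u", "u", "w"));
        // Proposed flow: u160, u6, u907, w6, w58
        int proposed = tcpSends(List.of("u", "u", "u", "w", "w"));
        System.out.println(current + " vs " + proposed); // 2 vs 1
    }
}
```

Adding the initial ClientHello send gives the 3-sends versus 2-sends comparison made elsewhere in the thread.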
Re: SSLEngine weird behavior in 11+21?
Hi Simone,

Thanks for reporting this issue. To speed up the evaluation, would you mind sending us the debug log (using the system property "javax.net.debug=all")?

Thanks,
Xuelei

On 7/11/2018 3:06 PM, Simone Bordet wrote:

Hi,

I can see this (weird) behavior in SSLEngine for TLS 1.3 in JDK 11+21. It's a simple new connection (no resumption) that performs a TLS 1.3 handshake. The byte counts are those that I get; they may be different for others depending on certificate size, etc.

1. Client wraps 394 bytes then goes into NEED_UNWRAP.
2. Server unwraps 394 bytes then goes into NEED_TASK and then into NEED_WRAP.
3. Server wraps (in 3 wraps) 160, 6 and 907 bytes then goes into NEED_UNWRAP.
4. Client unwraps 160 bytes then goes into NEED_TASK and then into NEED_WRAP (?)
5. Client wraps 6 bytes, then goes into NEED_UNWRAP.
6. Client unwraps (in 2 unwraps) 6 and 907 bytes, then goes into NEED_TASK and then into NEED_WRAP.
7. Client wraps 58 bytes and goes into FINISHED (?)
8. Server unwraps (in 2 unwraps) 6 and 58 bytes then goes into NEED_WRAP.
9. Server wraps 72 bytes then goes into FINISHED.
10. Client MUST unwrap those 72 bytes going again into FINISHED (which already happened at 7).

There are 2 things that I find weird:

A) That at 4, the client goes into NEED_WRAP, even if it has not finished unwrapping what the server sent. Apparently, it only goes into NEED_WRAP to generate a CHANGE_CIPHER_SPEC (I am guessing from the number of bytes generated), but then goes back into NEED_UNWRAP to finish reading what the server sent. This is also not optimal as it forces applications to do something with those 6 bytes: either put them aside (additional data structures that may not be needed) or - worse - write them to the server, causing an additional write (after all the effort TLS 1.3 put in to have a 1 RTT handshake).
I think that step 4 should go into NEED_UNWRAP, and that steps 5 and 6 should be switched, so that the client would unwrap the 160, 6 and 907 sent by the server, and only afterwards wrap the 6 and the 58. To be clear, the current behavior is (u = unwrap, w = wrap): u160, w6, u6, u907, w58. I think it should be: u160, u6, u907, w6, w58.

Is there any reason that the 6 bytes need to be generated in between the processing of the frames sent by the server?

B) That at 7 the client goes into FINISHED, but it is not finished really: in fact it needs to perform step 10, but there is no indication from the SSLEngine that it must do so. Currently, step 7 says the client is finished and there is no clue that it must unwrap those last 72 bytes. I think step 7 should go into NEED_UNWRAP instead, and only at 10 go into FINISHED.

Thanks!
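The "put them aside" workaround mentioned in point A (buffering the 6-byte dummy change_cipher_spec instead of writing it immediately) can be sketched as a small aggregator: accumulate each wrap() result and flush everything in one TCP write when the engine stops asking to wrap. The class name is made up for illustration:

```java
import java.io.ByteArrayOutputStream;

public class WrapAggregator {
    private final ByteArrayOutputStream pending = new ByteArrayOutputStream();

    // Called with the bytes produced by each wrap(): set them aside
    // instead of writing them to the socket immediately.
    void onWrapped(byte[] record) {
        pending.write(record, 0, record.length);
    }

    // Called when the engine leaves NEED_WRAP: everything buffered so
    // far goes out in a single TCP write.
    byte[] flush() {
        byte[] out = pending.toByteArray();
        pending.reset();
        return out;
    }

    public static void main(String[] args) {
        WrapAggregator agg = new WrapAggregator();
        agg.onWrapped(new byte[6]);             // dummy change_cipher_spec record
        agg.onWrapped(new byte[58]);            // client Finished record
        System.out.println(agg.flush().length); // 64: one write instead of two
    }
}
```

With the u160, u6, u907, w6, w58 ordering proposed above, the two wraps would be adjacent and this aggregation would fall out naturally, without holding bytes across an intervening unwrap.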