Re: [TLS] Sending Custom DHE Parameters in TLS 1.3
On 2020-10-12 19:28, Ilari Liusvaara wrote:
> On Mon, Oct 12, 2020 at 12:36:06PM -0400, Michael D'Errico wrote:
>> It appears that there may be a need to revert to the old way of sending
>> Diffie-Hellman parameters that the server generates. I see that TLS 1.3
>> removed this capability*; is there any way to add it back?
>
> The Diffie-Hellman support in TLS 1.2 is severely broken. There is no way
> to use it safely on the client side. This has led to, e.g., all the web
> browsers removing support for it.

You have to excuse me, but there is a fair amount of noise in this group, and it is sometimes hard to find the information you are looking for in past discussions held years or even decades ago. But surely DHE support can't be considered broken at the protocol level just because the client can't confirm that the server's DHE parameter generation isn't broken?

> There are a gazillion ways the server implementation might be broken that
> the client has absolutely no way to test, regardless of which TLS protocol
> it supports. I do not think I have to go into details.

If I remember correctly, the problem was rather that some of the most common implementations had made a habit of using poorly chosen parameters, and the automated security testing tools couldn't easily tell flawed servers from servers that had fixed this issue. It wasn't really a protocol issue, but purely an implementation issue. Correct me if I remember incorrectly.

> There is no way to ensure that the parameters sent are not totally broken,
> e.g.:
>
> - Modulus too small.
> - Modulus too large.
> - Modulus not prime (has been used as a backdoor!).
> - Modulus is weak (possibly backdoored).
> - Subgroup order does not have a large prime factor.
>
> Even checking the third would require a primality test, and primality
> tests at relevant sizes are slow. And the fourth and fifth can not be
> checked at all in the general case.
>
> For ECDHE, TLS 1.2 allowed the server to specify a custom curve to do the
> key exchange with. Rightfully, pretty much nobody implemented that.
>
> I think the TLS WG should withdraw its recommendation (as flawed) on all
> TLS_DHE_* ciphersuites.
>
> -Ilari

___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
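Some of the checks Ilari lists are mechanically tractable, if slow; a client-side sketch (illustrative only — the 2048..8192-bit size policy and all names are assumptions, and the safe-prime check covers the fifth point only for the special case q = (p - 1) / 2, since the general case is untestable):

```python
# Illustrative sketch of partial client-side DHE parameter checks.
# Points 1-3 from the list above are testable (primality testing at
# relevant sizes is slow); points 4-5 cannot be checked in general.
import random

def probably_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def check_dhe_params(p: int, g: int) -> bool:
    if not (2048 <= p.bit_length() <= 8192):  # modulus too small / too large
        return False
    if not probably_prime(p):                 # modulus not prime (slow!)
        return False
    if not probably_prime((p - 1) // 2):      # safe prime => large prime subgroup
        return False
    return 2 <= g <= p - 2                    # generator sanity
```

Even this sketch only restores partial trust: a backdoored-but-prime modulus (point 4) passes every test here.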
Re: [TLS] Confirming consensus: TLS1.3->TLS*
On 2016-11-18 16:40, Ilari Liusvaara wrote:
> On Fri, Nov 18, 2016 at 01:03:50PM +, Peter Gutmann wrote:
>> So you're saying that apart from the different algorithms, cipher suites,
>> messages, message fields, message flow, handshaking, negotiation,
>> extensions, and crypto, it's practically the same thing?
>
> Yes.

One can down-negotiate TLS 1.3 to TLS 1.2. Shouldn't you be able to do that between major protocol versions as well?
Re: [TLS] BoringSSL's TLS test suite
On 2016-09-26 02:02, Jim Schaad wrote:
> OPTIONAL and DEFAULT are not the same things. A DEFAULT value is omitted,
> but not an OPTIONAL value. A single field cannot be both OPTIONAL and
> DEFAULT.

My point was that "DEFAULT" is not the same as "default" either, but let's leave it there. You were listed as a co-author of RFC 5912, so maybe you could clarify whether the declaration of pk-rsa should still be considered correct:

    pk-rsa PUBLIC-KEY ::= {
        IDENTIFIER rsaEncryption
        KEY RSAPublicKey
        PARAMS TYPE NULL ARE absent
        -- Private key format not in this module --
        CERT-KEY-USAGE {digitalSignature, nonRepudiation, keyEncipherment,
                        dataEncipherment, keyCertSign, cRLSign}
    }
Re: [TLS] BoringSSL's TLS test suite
On 2016-09-26 01:29, Jim Schaad wrote:
>> The ASN.1 module in RFC 5280 does not say anything about whether the
>> field is optional for any specific algorithm. The ASN.1 for algorithm
>> identifier is
>>
>>    AlgorithmIdentifier ::= SEQUENCE {
>>        algorithm   OBJECT IDENTIFIER,
>>        parameters  ANY DEFINED BY algorithm OPTIONAL }
>
> This very explicitly says that the value (and hence presence) of the
> parameters field is strictly defined by the algorithm identifier. The
> algorithm identifiers for RSA with the SHA-2 algorithms explicitly say
> they are required. RFC 5912 shows that this is required by the way it
> defines the same information:
>
>    sa-sha256WithRSAEncryption SIGNATURE-ALGORITHM ::= {
>        IDENTIFIER sha256WithRSAEncryption
>        PARAMS TYPE NULL ARE required
>        HASHES { mda-sha256 }
>        PUBLIC-KEYS { pk-rsa }
>        SMIME-CAPS { IDENTIFIED BY sha256WithRSAEncryption }
>    }
>
> You can see that the parameters are required and not optional.

Thanks, you are absolutely correct about this, and this is crucial for getting PKCS #1 v1.5 signatures right (since the algorithm identifier encoding is part of the data to be signed). But at the same time, NULL should be absent from the RSA public key:

    pk-rsa PUBLIC-KEY ::= {
        IDENTIFIER rsaEncryption
        KEY RSAPublicKey
        PARAMS TYPE NULL ARE absent
        -- Private key format not in this module --
        CERT-KEY-USAGE {digitalSignature, nonRepudiation, keyEncipherment,
                        dataEncipherment, keyCertSign, cRLSign}
    }

and this is definitely not common practice.
Re: [TLS] BoringSSL's TLS test suite
On 2016-09-25 23:55, David Benjamin wrote:
> I believe we are also correct per spec. My interpretation of these
> documents is that the general AlgorithmIdentifier structure may or may not
> include parameters. However, whether a given parameter value or omitting
> parameters altogether is legal is a question for the particular algorithm.
> It's not overriding but plugging into the general framework.

The ITU-T X.690 standard for DER might require some close reading to interpret, and this is obviously a topic for debate. I would presume that the use of "default" in lower case in section 11.5 refers to both the DEFAULT and the OPTIONAL keywords:

    11.5 Set and sequence components with default value
    The encoding of a set value or sequence value shall not include an
    encoding for any component value which is equal to its default value.

After all, this is the only interpretation that is consistent with the description in section 12.5 of DER as unambiguous. Since NULL is always empty, it should be omitted when OPTIONAL, given the above interpretation. But is it optional? The 1988 syntax did not feature information objects and classes, hence the use of the ANY keyword. Luckily, the ASN.1 modules were updated to modern ASN.1 syntax in RFC 5912. There you have these declarations:

    -- SIGNATURE-ALGORITHM
    --
    -- Describes the basic properties of a signature algorithm
    --
    --  &id - contains the OID identifying the signature algorithm
    --  &Value - contains a type definition for the value structure of
    --           the signature; if absent, implies that no ASN.1
    --           encoding is performed on the value
    --  &Params - if present, contains the type for the algorithm
    --            parameters; if absent, implies no parameters
    --  &paramPresence - parameter presence requirement
    --  &HashSet - The set of hash algorithms used with this
    --             signature algorithm
    --  &PublicKeySet - the set of public key algorithms for this
    --                  signature algorithm
    --  &smimeCaps - contains the object describing how the S/MIME
    --               capabilities are presented.
    --
    -- Example:
    -- sig-RSA-PSS SIGNATURE-ALGORITHM ::= {
    --     IDENTIFIER id-RSASSA-PSS
    --     PARAMS TYPE RSASSA-PSS-params ARE required
    --     HASHES { mda-sha1 | mda-md5, ... }
    --     PUBLIC-KEYS { pk-rsa | pk-rsa-pss }
    -- }

    SIGNATURE-ALGORITHM ::= CLASS {
        &id             OBJECT IDENTIFIER UNIQUE,
        &Value          OPTIONAL,
        &Params         OPTIONAL,
        &paramPresence  ParamOptions DEFAULT absent,
        &HashSet        DIGEST-ALGORITHM OPTIONAL,
        &PublicKeySet   PUBLIC-KEY OPTIONAL,
        &smimeCaps      SMIME-CAPS OPTIONAL
    } WITH SYNTAX {
        IDENTIFIER &id
        [VALUE &Value]
        [PARAMS [TYPE &Params] ARE &paramPresence]
        [HASHES &HashSet]
        [PUBLIC-KEYS &PublicKeySet]
        [SMIME-CAPS &smimeCaps]
    }

    AlgorithmIdentifier{ALGORITHM-TYPE, ALGORITHM-TYPE:AlgorithmSet} ::=
        SEQUENCE {
            algorithm   ALGORITHM-TYPE.&id({AlgorithmSet}),
            parameters  ALGORITHM-TYPE.
                &Params({AlgorithmSet}{@algorithm}) OPTIONAL
        }

    pk-rsa PUBLIC-KEY ::= {
        IDENTIFIER rsaEncryption
        KEY RSAPublicKey
        PARAMS TYPE NULL ARE absent
        -- Private key format not in this module --
        CERT-KEY-USAGE {digitalSignature, nonRepudiation, keyEncipherment,
                        dataEncipherment, keyCertSign, cRLSign}
    }

Hence, the correct encoding is for NULL to be absent.
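The encoding difference under discussion is two bytes on the wire; a hand-rolled sketch (illustrative; single-byte DER length form only, so content must be under 128 bytes):

```python
# The two competing DER encodings of the AlgorithmIdentifier for
# rsaEncryption (OID 1.2.840.113549.1.1.1): parameters absent (per the
# pk-rsa object above) vs. an explicit NULL (common practice).
RSA_ENCRYPTION_OID = bytes.fromhex("06092a864886f70d010101")

def algorithm_identifier(oid: bytes, params: bytes = b"") -> bytes:
    """Wrap an algorithm OID plus optional parameters in a DER SEQUENCE."""
    content = oid + params
    assert len(content) < 128          # short length form only
    return bytes([0x30, len(content)]) + content

params_absent = algorithm_identifier(RSA_ENCRYPTION_OID)             # per pk-rsa
params_null = algorithm_identifier(RSA_ENCRYPTION_OID, b"\x05\x00")  # common practice
# params_absent.hex() == "300b06092a864886f70d010101"
# params_null.hex()   == "300d06092a864886f70d0101010500"
```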
Re: [TLS] BoringSSL's TLS test suite
Have you noticed that BoringSSL seems to abort handshakes with an illegal_parameter alert if the server certificate uses the standard-compliant (albeit highly unusual) DER encoding of NULL OPTIONAL as the empty string, instead of the non-standard but ubiquitous 0x05 0x00 encoding? Is this just a regression bug in BoringSSL, or is it an intentional restriction of the TLS protocol that should be propagated to other implementations as well?

On 2016-08-16 20:08, David Benjamin wrote:
> Hi folks,
>
> BoringSSL has developed a test harness[1] that consists of a fork of Go's
> crypto/tls package (recently dubbed "BoGo" at the Berlin hackathon) plus a
> test runner that allows BoGo to be run against the TLS stack under test.
> BoGo can be configured to behave in a number of unexpected ways that
> violate the TLS standard, thus enabling the testing of many scenarios that
> would otherwise be difficult to obtain with a standard stack.
>
> We (David Benjamin and Eric Rescorla) have been getting it to work with
> NSS and wanted to let others know in case they might find it useful. This
> system was initially designed to work with BoringSSL, but in principle can
> be used with any stack. The portability is still a little rough, and we'll
> likely make changes as we get more experience here, but it has already
> been used to test NSS[2] and OpenSSL[3]. We've written up some notes
> at [4].
>
> The test suite should be fairly extensive for DTLS and TLS 1.2 (giving
> around 75% line coverage in BoringSSL's TLS code at last count). It tests
> TLS 1.3 as well, though those tests are still in progress. They track
> BoringSSL's in-progress TLS 1.3 implementation.
>
> David and Eric
>
> [1] https://boringssl.googlesource.com/boringssl/+/master/ssl/test/
> [2] https://hg.mozilla.org/projects/nss/file/tip/external_tests/nss_bogo_shim
> [3] https://github.com/google/openssl-tests
> [4] https://boringssl.googlesource.com/boringssl/+/master/ssl/test/PORTING.md
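For what it's worth, the lenient behavior the question asks about might look like this hypothetical sketch (hand-rolled DER walk, single-byte length form only; this is not BoringSSL's actual code):

```python
# Accept an rsaEncryption AlgorithmIdentifier whether its parameters
# field is an explicit NULL (05 00) or entirely absent.
RSA_OID = bytes.fromhex("06092a864886f70d010101")  # rsaEncryption

def is_rsa_algorithm_identifier(der: bytes) -> bool:
    if len(der) < 2 or der[0] != 0x30 or der[1] >= 0x80:
        return False                    # not a short-form SEQUENCE
    if 2 + der[1] != len(der):
        return False                    # length mismatch / trailing data
    body = der[2:]
    if not body.startswith(RSA_OID):
        return False
    params = body[len(RSA_OID):]
    return params in (b"", b"\x05\x00")  # absent or explicit NULL accepted
```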
Re: [TLS] call for consensus: changes to IANA registry rules for cipher suites
On 2016-03-30 17:33, Benjamin Kaduk wrote:
> I am not sure that we want to be in the business of explicitly marking
> things as insecure other than our own RFCs, though -- there could be an
> implication of more review than is actually the case, which is what this
> proposal is trying to get rid of.

So how about explicitly marking things as "obsolete" instead?
Re: [TLS] call for consensus: changes to IANA registry rules for cipher suites
On 2016-03-30 13:27, Dmitry Belyavsky wrote:
> Dear Sean,
>
> I support the plan in general, but I think that we need to separately
> indicate that a particular algorithm/ciphersuite is not just "Not
> recommended" but found insecure.

This does indeed sound reasonable.
Re: [TLS] TLS 1.2 Long-term Support Profile draft posted
On 2016-03-18 09:57, Peter Gutmann wrote:
> Watson Ladd writes:
>> As written, supporting this draft requires adopting the encrypt-then-MAC
>> extension. But there already is a widely implemented secure way to use
>> MACs in TLS: AES-GCM.
>
> This is there as an option if you want it. Since it offers no length
> hiding, it's completely unacceptable to some users. For example, one
> protocol uses TLS to communicate monitoring commands to remote gear;
> they're very short and fixed-length, different for each command, so if
> you use GCM you may as well be sending plaintext. In addition, GCM is
> incredibly brittle: get the IV handling wrong and you get a complete,
> catastrophic loss of both integrity and confidentiality. The worst that
> happens with CBC, even with a complete abuse like using an all-zero IV,
> is that you drop back to ECB mode.

Indeed. For instance, if VM reset attacks are a concern, GCM is arguably a worse option than CBC, in particular if the CBC record IV generation can be made to be random even in the case of a VM reset attack.

http://crypto.stackexchange.com/questions/32203/is-tls-secure-against-vm-reset-attacks
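The IV brittleness Peter describes can be illustrated with a toy stream cipher (NOT real AES-GCM; SHA-256 stands in for the block cipher, and all names are made up): any CTR/stream-style construction with a reused (key, nonce) pair hands the attacker the XOR of the two plaintexts.

```python
# Toy demonstration of nonce-reuse catastrophe in CTR-style encryption.
import hashlib

def toy_keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def toy_encrypt(key: bytes, nonce: bytes, pt: bytes) -> bytes:
    ks = toy_keystream(key, nonce, len(pt))
    return bytes(a ^ b for a, b in zip(pt, ks))

p1, p2 = b"attack at dawn!!", b"defend at dusk!!"
c1 = toy_encrypt(b"k", b"same-nonce", p1)  # nonce reused:
c2 = toy_encrypt(b"k", b"same-nonce", p2)  # keystream repeats
xor_of_plaintexts = bytes(a ^ b for a, b in zip(c1, c2))
# the keystream cancels: c1 XOR c2 equals p1 XOR p2
```

With real GCM the damage is worse still, since nonce reuse also leaks the authentication key.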
Re: [TLS] Data volume limits
On 2015-12-16 12:17, Eric Rescorla wrote:
> Can we see a brief writeup explaining the 2^36 number? I believe Watson
> provided one a while back at:
> https://www.ietf.org/mail-archive/web/tls/current/msg18240.html

One rather obvious problem with trying to equate the probability of loss of confidentiality with the advantage of an IND-KPA adversary is that the IND models don't account for the length of the plaintext. The real-life problem is that you lose a lot more information a lot faster by revealing the amount and frequency of the data transfer than through the KPA distinguisher for CTR mode. Furthermore, the IND-KPA distinguisher is a fairly well understood abstract artifact of CTR mode. It is not obviously relevant to compare it to distinguishers for primitives such as RC4, which typically indicate that there might be even worse problems.
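As a back-of-the-envelope for the kind of bound behind such limits: the standard PRP/PRF switching ("birthday") bound gives an adversary advantage of roughly q^2 / 2^(b+1) after q blocks of a b-bit block cipher. The exact constants in the referenced writeup may differ; this only shows the order of magnitude for 2^36 bytes of AES-CTR output.

```python
# Birthday-bound order-of-magnitude estimate for 2^36 bytes under AES-CTR.
BLOCK_BYTES = 16              # AES block size
q = 2**36 // BLOCK_BYTES      # 2^36 bytes -> 2^32 blocks
advantage = q**2 / 2**129     # q^2 / 2^(128+1)
# advantage == 2^-65
```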
Re: [TLS] Data volume limits
On 2015-12-16 01:31, Watson Ladd wrote:
> You don't understand the issue. The issue is a PRP not colliding, whereas
> a PRF can.

Oh, but I concur. This means that if you observe two identically valued ciphertext blocks, you know that the corresponding keystream blocks can't be identical, and deduce that the corresponding plaintext blocks have to be different. Such observations consequently leak information about the plaintext, in the rare and unlikely event they actually occur. However, calling it an exploitable weakness is a bit of a stretch. AES-CBC is likely to lose confidentiality slightly faster, for typical plaintexts.
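The deduction above can be phrased as a toy attacker routine, assuming CTR mode over a PRP such as AES: within one message the keystream blocks are pairwise distinct, so equal ciphertext blocks prove the matching plaintext blocks differ.

```python
# Toy CTR-mode observer: collect the (rare) plaintext inequalities that
# equal ciphertext blocks certify when the keystream comes from a PRP.
def ctr_leak(ciphertext_blocks: list[bytes]) -> list[tuple[int, int]]:
    """Return index pairs (i, j) where the attacker learns p_i != p_j."""
    leaks = []
    for i in range(len(ciphertext_blocks)):
        for j in range(i + 1, len(ciphertext_blocks)):
            if ciphertext_blocks[i] == ciphertext_blocks[j]:
                leaks.append((i, j))
    return leaks
```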
Re: [TLS] Encrypting record headers: practical for TLS 1.3 after all?
On 2015-11-29 10:48, Bryan A Ford wrote:
> In short, leaving TLS headers in cleartext basically hands any
> eavesdropper a huge information side-channel unnecessarily and precludes
> anyone *but* the TLS implementation itself from adding any traffic
> analysis protection into the system. Encrypting TLS headers appears to
> cost practically nothing (at least if done as I've proposed), and it
> allows traffic analysis protection (whether weak or strong, intentional
> or unintentional) to be introduced at multiple points: e.g., by TLS
> itself, or by the TCP stack, or by middleboxes.

Thank you for the explanation. A few points:

The only way to completely thwart traffic analysis is to always send data with the exact same amount-frequency pattern. The middleboxes you describe will *not* be able to achieve this unless the TLS sender is adapted for such processing anyway. The middleboxes can't inject data; all they can do is wait for data to arrive and delay data that has already arrived.

Traffic analysis is about deriving information from patterns that emerge from the combination of timing and sizes. If data is delayed in order to reach a specific size before being forwarded, the middlebox might just be shifting the size signal to an equally detectable timing signal. This is particularly true if low latency is also a requirement. If very high latency is acceptable, the middleboxes might theoretically hide everything except the total amount of data transmitted during specific time intervals.
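The "exact same amount-frequency pattern" requirement can be sketched as a fixed-rate record shaper (illustrative only: the RECORD size is an assumed policy, a real TLS sender would encrypt so the dummy padding is indistinguishable, and leftover data beyond the given number of ticks would stay queued):

```python
# Fixed-rate shaper: one constant-size record per tick, regardless of how
# much application data is actually available.
RECORD = 256  # fixed record size per tick (assumption)

def shape(app_data: bytes, ticks: int) -> list[bytes]:
    """Return exactly `ticks` records of RECORD bytes each."""
    records = []
    for i in range(ticks):
        chunk = app_data[i * RECORD:(i + 1) * RECORD]
        records.append(chunk.ljust(RECORD, b"\x00"))  # dummy padding
    return records
```

An observer sees the identical size/timing pattern whether or not real data is flowing, which is exactly what a delay-only middlebox cannot manufacture.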
Re: [TLS] Encrypting record headers: practical for TLS 1.3 after all?
On 2015-11-28 12:30, Kriss Andsten wrote:
> On 27 Nov 2015, at 17:21, Henrick Hellström <henr...@streamsec.se> wrote:
>> How, exactly, would this be significantly harder? The adversary will
>> still be able to tell when, and how much, TCP/IP data is sent between
>> the peers. If there happens to be a revealing TLS record boundary in the
>> middle of a TCP/IP packet, it would seem to me there is an
>> implementation problem rather than a problem with the abstract protocol.
>
> This is actually quite common. Even when it does align with packet
> boundaries, it is providing known information rather than inferred
> information ("here's a length X blob, then a length Y blob" vs "here's a
> bunch of packets whose lengths minus TLS headers amount to X+Y").

Maybe I have missed something, but this seems awfully implementation dependent to me. Let's take a more specific example: Suppose a web server is serving a request for a web page, which typically means that the client first sends a single HTTP request for the HTML page, and then multiple requests for the CSS, images, etc. in a row. Most times, the latter row of requests could easily be encoded in a single TLS fragment. This means that the server will become aware of all of the requests at the same time, and might encode all of the HTTP responses before beginning to encode the TLS fragments. Carefully implemented, such a solution would not necessarily require significantly more resources to handle pipelining, compared to an alternative solution that would serve, encode and send the responses on the fly, and as a consequence quickly fill up the outgoing TCP/IP queue.
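The coalescing idea above can be sketched as follows (illustrative; names are made up): once all pipelined responses are ready, frame their concatenation as the fewest possible TLS fragments, so record boundaries no longer mirror individual response sizes.

```python
# Pack pipelined responses into fragments, splitting only at the TLS
# maximum plaintext fragment size rather than at response boundaries.
MAX_FRAGMENT = 2**14  # TLS maximum plaintext fragment size (RFC 5246)

def coalesce(responses: list[bytes]) -> list[bytes]:
    """Frame the concatenated responses as the fewest possible fragments."""
    buf = b"".join(responses)
    return [buf[i:i + MAX_FRAGMENT]
            for i in range(0, len(buf), MAX_FRAGMENT)] or [b""]
```

For example, a 100-byte and a 200-byte response end up in one 300-byte fragment, so the observer sees only the combined length.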
Re: [TLS] Encrypting record headers: practical for TLS 1.3 after all?
On 2015-11-28 12:15, Peter Gutmann wrote:
> Encrypting the length information is a serious step backwards both in
> terms of security and processing efficiency.

I am inclined to +1 on that.
Re: [TLS] Encrypting record headers: practical for TLS 1.3 after all?
On 2015-11-28 19:58, Watson Ladd wrote:
> I think the above analysis is wrong. Consider a service written in Go
> using the built-in TLS library. Then the number and sizes of writes are
> visible to an attacker, which can reveal information about which branches
> were taken and the data sent. That's not because the total size of the
> response necessarily changes, but because of the sequence of writes taken
> to get there.

I am not familiar with the internals of that implementation, but if the individual writes are immediately TLS encrypted and sent over the network, the timing of the TCP/IP data will likely leak a lot of information about the number and sizes of writes as well. It doesn't seem like a perfect design choice to use encryption to hide information that will leak with non-negligible probability anyway.
Re: [TLS] Encrypting record headers: practical for TLS 1.3 after all?
On 2015-11-27 15:35, Bryan A Ford wrote:
> The idea of encrypting TLS record headers has come up before, the most
> important purpose being to hide record lengths and boundaries and make
> fingerprinting and traffic analysis harder.

How, exactly, would this be significantly harder? The adversary will still be able to tell when, and how much, TCP/IP data is sent between the peers. If there happens to be a revealing TLS record boundary in the middle of a TCP/IP packet, it would seem to me there is an implementation problem rather than a problem with the abstract protocol.