Re: [TLS] [EXT] Re: Deprecating Static DH certificates in the obsolete key exchange document
Blumenthal, Uri - 0553 - MITLL writes:

>Nobody in the real world employs static DH anymore – in which case this
>draft is useless/pointless

It's not "any more": AFAICT from my inability to find any evidence of the certificates needed for it in 25-odd years, it's "nobody has ever used static DH" (with the absence-of-evidence caveat).

>I’m amazed by drafts like this one. Is nothing constructive remains out
>there to spend time and efforts on?

Slow news day? End-of-financial-year clearout? Quota to fill? Someone lost a bet? Could be all sorts of things.

Someone else commented on having seen code to support this. That's just a natural side-effect of having code that supports DH and code that supports certificates: you end up with code that probably supports DH certificates, "probably" because without ever having seen one to test your code with you can't be 100% sure there isn't some glitch somewhere.

For example my code happens to support Elgamal certificates because there's Elgamal code in there for PGP support, so if you use an Elgamal key in a certificate you'll get an Elgamal certificate. As with the DH-cert code it's never been tested, because I don't think such a thing as an Elgamal X.509 certificate exists, but in theory there's support for them in there.

Peter.

_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Deprecating Static DH certificates in the obsolete key exchange document
I realise that absence of evidence != evidence of absence, but in response to my previous request for anyone who has such a thing to comment on it, and even better to send me a sample so I can see one, no-one has mentioned, or produced, even one example of "a legitimate CA-issued [static-ephemeral DH certificate] rather than something someone ran up in their basement for fun".

So is the draft busy deprecating unicorns and jackalopes? Nothing against that, but it's probably worth adding a note that such certificates are currently not known to exist, so you probably don't have to worry about it too much.

Peter.
Re: [TLS] Deprecating Static DH certificates in the obsolete key exchange document
Joseph Salowey writes:

>At IETF 119 we had discussion that static DH certificates lead to static
>key exchange which is undesirable.

Has anyone ever seen one of these things, meaning a legitimate CA-issued one rather than something someone ran up in their basement for fun? If you have, can I have a copy for the archives?

The only time I've ever seen one was some custom-created ones for S/MIME when the RSA patent was still in force and we were supposed to pretend to use static-ephemeral DH for key transport instead of RSA.

Peter.
Re: [TLS] [EXT] Re: Adoption call for 'TLS 1.2 Feature Freeze'
Salz, Rich writes:

>> My starting assumption here is that the majority of people implementing
>> TLS and/or deciding what to authorize for deployment TLS-wise, are not
>> stupid, and understand the benefits of the newer protocol version,
>> including its added security. And capable of evaluating the risks of
>> moving to TLS 1.3 vs. staying with 1.2.
>
>That is a much nicer and broader brush than one I am willing to use to
>paint the IT industry.

The near-universal thing I've run into is "our customers have read about this thing called TLS 1.3. 3 is bigger than 2 and they want some 3". Seriously.

More generally, the request is phrased as "our customers are saying that our X can't talk to their Y. We need to make our X talk to their Y" (Y could be a 20-year-old buggy version of SSH, it's not necessarily newer stuff, just stuff that isn't currently handled).

Technically the "capable of evaluating the risks" is accurate in that if they don't get some 3 there's the real risk that their customers will complain, but that's probably not what the OP was thinking about.

Peter.
Re: [TLS] Adoption call for 'TLS 1.2 Feature Freeze'
Arnaud Taddei writes:

>This is why I asked the question whether there would be volunteers to
>design a ‘survey’ approach.
>
>This could bring data points from the broader community that could help
>guide this particular area of the work.

I don't think the problem is volunteers, it's getting anyone to go on record, or even off-the-record, about it. For (say) SCADA stuff you're talking to embedded systems and networking engineers deep down in the internals of large corporations/organisations who can't speak publicly about anything, and I doubt anyone higher up who can go on the record is going to bother making a statement about their TLS migration policy.

Even getting a boring and very sanitised tech report on something published can take so much paperwork that it's sometimes easier to drop co-authors than to try and get approval to have their names added. This is why the stuff I post here is often anecdotal and quite redacted, or done off-list: it's too much of a headache to go through the paperwork to say anything publicly.

Having said that, if someone can figure out a way to get the data out, it'd be great to have it available.

Peter.
Re: [TLS] Adoption call for 'TLS 1.2 Feature Freeze'
Watson Ladd writes:

>Why would deploying that change to TLS 1.2 be easier than deploying TLS 1.3?

One is making a (presumably) small tweak to an existing deployed protocol, the other is deploying an entirely new protocol. They're totally different things.

(Not to mention additional issues like some devices not having the headroom to run two different protocol stacks side by side, and other odds and ends, but those are minor nits).

Peter.
Re: [TLS] Adoption call for 'TLS 1.2 Feature Freeze'
Loganaden Velvindron writes:

>I'm curious. Are those embedded devices or IoT type of appliances where
>the firmware has a TLS library that will never be updated?

Typically, yes. Many devices don't support remote firmware update, or need physical access to do it so it's never done, or will be decertified if you change the firmware so it's also never done.

Some of these things also have 5-10 year development cycles, so there's an emphasis on getting it right the first time (as well as ensuring things like guaranteed supply of hardware for long periods; for example most embedded CPUs have 15-year supply longevity guarantees for this purpose, so if you were to start a new design today and have it done by 2028 you'd know the same chip would still be manufactured until 2038, at which point you buy up all the remaining production and stockpile it).

Devices may gradually get updated over time as new units ship with newer firmware, but they're typically run alongside existing devices rather than replacing them.

Peter.
Re: [TLS] Adoption call for 'TLS 1.2 Feature Freeze'
Viktor Dukhovni writes:

>Peter, is there anything beyond TLS-LTS that you're looking to see work
>on? Is the issue foreclosing on opportunities to do anticipated necessary
>work, or is it mostly that the statement that the work can't happen is
>causing disruption with audits and other bureaucratic issues?

I can't foresee anything, but I also can't predict what the future will bring. It's more a case of some currently unknown thing cropping up and an RFC saying you can't make any changes preventing anything being done, at least in a published-standard manner.

If it really is necessary to publish an RFC like this then perhaps text along the lines of "you can't add major new features but performing maintenance is OK" would work, although overall I still can't see why such an RFC is necessary in the first place.

Peter.
Re: [TLS] Adoption call for 'TLS 1.2 Feature Freeze'
Off-list: Funny that you should mention nuclear power plants, at least one of the systems I'm thinking of is used in nuclear power control.

Those things are remarkably resilient, including in at least one case having the facility overrun by an invading army. They looted all the standard PCs but didn't know what the controllers were; once the facility was liberated they reconnected the controllers and they resumed operation as if nothing had happened (they have internal power sources so kept running despite external power being cut). They're between 20 and 25 years old, and I haven't seen any plans to retire them.

Peter.

From: TLS on behalf of Viktor Dukhovni
Sent: Tuesday, 12 December 2023 17:49
To: TLS@ietf.org
Subject: Re: [TLS] Adoption call for 'TLS 1.2 Feature Freeze'

On Mon, Dec 11, 2023 at 07:51:13PM -0800, Rob Sayre wrote:

>> Absolutely clear. I work with stuff with 20-30 year deployment and
>> life cycles. I'm fairly certain TLS 1.2 will still be around when
>> the WebTLS world is debating the merits of TLS 1.64 vs. TLS 1.65.
>
>I have to say, I am skeptical of this claim. The reason being that you
>don't really want 20 year old computers connected directly to local
>ethernet without a bridge.

Did Peter say anything about (general purpose) computers or connections to the "local ethernet" (or Internet)?

Suppose you have a control system for a ship, a factory floor, or a nuclear power plant. How often would one want to perform major software updates that substantially change aspects of the system design? What is the expected lifetime of such systems?

Since Peter has been addressing market needs in that space for some decades, I'd be inclined to take him at his word... Again, it may well be that he does not have a compelling case for ongoing TLS working-group processes to enhance TLS 1.2, or he may yet.

Peter, is there anything beyond TLS-LTS that you're looking to see work on? Is the issue foreclosing on opportunities to do anticipated necessary work, or is it mostly that the statement that the work can't happen is causing disruption with audits and other bureaucratic issues?

-- Viktor.
Re: [TLS] Adoption call for 'TLS 1.2 Feature Freeze'
Rob Sayre writes:

>>On Mon, Dec 11, 2023 at 5:30 PM Peter Gutmann wrote:
>>
>>Absolutely clear. I work with stuff with 20-30 year deployment and life
>>cycles. I'm fairly certain TLS 1.2 will still be around when the WebTLS
>>world is debating the merits of TLS 1.64 vs. TLS 1.65.
>
>I have to say, I am skeptical of this claim.

Which one, that there is equipment out there with 20-30 year life cycles, or that the WebTLS folks will be arguing over TLS 1.64 in the future? If the latter then it may just be TLS 1.59 at that point; as I said, I can't see the future. If the former then I don't really know how to respond to that: are you saying you don't believe that there are systems out there deployed and used with multi-decade life cycles?

Peter.
Re: [TLS] Adoption call for 'TLS 1.2 Feature Freeze'
Rob Sayre writes:

>>Given that TLS 1.2 will be around for quite some time
>
>Not clear.

Absolutely clear. I work with stuff with 20-30 year deployment and life cycles. I'm fairly certain TLS 1.2 will still be around when the WebTLS world is debating the merits of TLS 1.64 vs. TLS 1.65.

(This is also why the TLS-LTS draft was created, BTW).

Peter.
Re: [TLS] Adoption call for 'TLS 1.2 Feature Freeze'
Watson Ladd writes:

>How does a feature freeze make it impossible to keep supporting TLS 1.2
>as is?

Because if there's some tweak required for some reason (I don't know what that could be, since I can't predict the future) the draft seems to prohibit it.

Peter.
Re: [TLS] Adoption call for 'TLS 1.2 Feature Freeze'
In all the rush to jump on the bandwagon, no-one has yet answered the question I posed earlier: for anyone who's already moved to TLS 1.3 the draft is irrelevant, and for people who have to keep supporting TLS 1.2 gear more or less indefinitely it makes their job hard if not impossible. So what's the point of this draft?

For TLS 1.3 it's just workgroup posturing, which I don't have a problem with (I was on the PKIX WG for years, that was practically in their charter), but for ongoing TLS 1.2 use it's going to pointlessly make life difficult for anyone working in that area. Why is this draft even a thing?

Peter.
Re: [TLS] Adoption call for 'TLS 1.2 Feature Freeze'
Deirdre Connolly writes:

>At the TLS meeting at IETF 118 there was significant support for the
>draft 'TLS 1.2 is in Feature Freeze'
>(https://datatracker.ietf.org/doc/draft-rsalz-tls-tls12-frozen/). This
>call is to confirm this on the list. Please indicate if you support the
>adoption of this draft and are willing to review and contribute text. If
>you do not support adoption of this draft please indicate why. This call
>will close on December 20, 2023.

I don't see what the point of this draft is. For anyone who's already made the switch to TLS 1.3 it's irrelevant, and for people who will have to support TLS 1.2 deployments probably for the rest of eternity it makes it impossible to make any changes or fixes.

At best it does nothing, at worst it makes life very difficult for anyone who has to keep maintaining TLS 1.2 systems.

Peter.
Re: [TLS] Request mTLS Flag
Viktor Dukhovni writes:

>On Fri, Nov 17, 2023 at 09:57:42AM +0000, Peter Gutmann wrote:
>> Could this use/behaviour be referenced somewhere to provide guidance
>> for implementers in general? It'd be good to have this made an official
>> way to do things rather than just being done on an ad hoc basis.
>
>What did you have in mind?

I was thinking of moving it into some mainstream RFC covering TLS use in general, rather than the current need to hunt for it through various RFCs and registries where, even if you know about it in advance, it's quite tricky to find.

Peter.
Re: [TLS] Request mTLS Flag
Viktor Dukhovni writes:

>Indeed, Postfix 3.9 (release estimated Q1 2024), when compiled against
>OpenSSL 3.2 (release estimated circa next week), will automatically
>signal client certificate types X.509(0) and RPK(2) if and only if a
>client certificate is configured (available).

Could this use/behaviour be referenced somewhere to provide guidance for implementers in general? It'd be good to have this made an official way to do things rather than just being done on an ad hoc basis.

>AFAIK, today just two MTAs are doing SMTP with raw public keys, both are
>mine.

You have to wonder about the qualifications for being a standards-track RFC if, after ten years, the total installed base (at least for MTAs) is one person.

Peter.
Re: [TLS] [EXTERNAL] Re: Fwd: New Version Notification for draft-davidben-tls-key-share-prediction-00.txt
It's OK, just appeared on the admin page. The Uni email can be pretty messed up sometimes so whenever things seem to take too long I check that they're actually still working. All fine, as you were :-).

Peter.

From: TLS on behalf of Michael P1
Sent: Friday, 27 October 2023 22:04
To: Rob Sayre; David Benjamin
Cc: tls@ietf.org
Subject: Re: [TLS] [EXTERNAL] Re: Fwd: New Version Notification for draft-davidben-tls-key-share-prediction-00.txt

Hi All,

Thank you for this interesting draft, I had a couple of quick questions.

OpenSSL has been mentioned in this thread, but I was wondering if you had examples of other implementations or services that use the "key_share first" algorithm outlined in Section 3.1 of the draft, so that as this document is taken forward it's both clear what the impact is and what needs to be updated?

Similarly, in Section 3.2 of the draft, can we be explicit about what we mean by "best common option"? As mentioned in the thread, some servers may prefer size/speed, and others security level. This is particularly relevant in the PQ algorithms case, but also applies to current implementations choosing between x25519 and secp384r1, for example. DNS hints may not help decide which is best as we explicitly are not using key_shares.

Just to clarify, is the purpose of this draft to use new, duplicate groups for TLS to indicate that the server adheres to this draft? If so, would a simpler option be to update servers to use the guidance in Section 4.2.7 of RFC 8446 to use the information in a successful handshake to change the groups used in the key_share in subsequent connections? Worst case here is that we have a suboptimal choice on first connection which can be improved on even when HRR is not an option.

As a way forward, would it be worth working on this in rfc8446bis to clarify the desired behaviour? An example change would be to Section 2.1, which implies preference for key_share first selection.

Thanks,
Michael

From: Rob Sayre
Sent: Tuesday, October 17, 2023 9:08 PM
To: David Benjamin
Cc: Andrei Popov; tls@ietf.org
Subject: Re: [TLS] [EXTERNAL] Re: Fwd: New Version Notification for draft-davidben-tls-key-share-prediction-00.txt

On Tue, Oct 17, 2023 at 12:32 PM David Benjamin <david...@chromium.org> wrote:

>Server-side protection against [clients adjusting HRR predictions on
>fallback] is not effective. Especially when we have both servers that
>cannot handle large ClientHello messages and servers that have buggy HRR.

I think the discussion about buggy HRR is a red herring. I agree with almost everything in the email except for this part. It's even worse than HRR, isn't it? The initial ClientHello will fail if spread across too many packets on some implementations, and then a new ClientHello will be sent using X25519 unless you want to lose customers. The client won't get an HRR back on the first try, the stuff just breaks (it's their bug, but it must be dealt with).

But, if the DNS says it should work, it should be ok to fail there. The trustworthiness of this hint must also be weighed with ECH. So, if you're using SVCB with this idea and ECH, it seems pretty reasonable to me.

thanks,
Rob
Re: [TLS] [EXTERNAL] Re: Request mTLS Flag
Viktor Dukhovni writes:

>I think what you're really saying, is that it may be time to replace the
>extant client certificate request message with a completely new one,
>because the old one is ossified.

No, just have the server echo back the cert-auth flag from the client to indicate that it really wants to do this. Either that or mention in the RFC that some servers will send a cert request no matter what, so getting a cert request in response to an mTLS flag [*] doesn't necessarily mean that the server is expecting cert auth. Adding the note at least makes it Someone Else's Problem.

Peter.

[*] Why is it called mTLS? It's just TLS, mTLS doesn't add anything new that hasn't been in there for decades.
Re: [TLS] [EXTERNAL] Re: Request mTLS Flag
Andrei Popov writes:

>An "I really mean it" flag. We can add these for every TLS message, not
>just authentication-related ones. Just to make sure the peer truly is
>serious about the TLS handshake.

It really depends on how servers react when they see client-cert-auth when they're not expecting it. Some time ago I tested one of the always-requests-client-auth servers to see what happened when it actually did get client-cert-auth, and the result was a Handshake Failure alert.

For J.Random messages it won't matter, but if the server is requesting client auth without knowing it's doing it then some "I really mean it" indication back to the client might be useful.

Peter.
Re: [TLS] [EXTERNAL] Re: Request mTLS Flag
Viktor Dukhovni writes:

>I don't see in your comment anything to suggest that the flag is a no-go.

Oh, it's definitely not a no-go, I'm just pointing out that you shouldn't read too much into seeing a cert request from a server. In other words if the client says "I have a cert" and the server responds "please authenticate using the cert", that doesn't mean that the server will actually expect client cert auth at that point.

So it may be necessary to have the server respond with its own flag to indicate that it really does want client cert auth and isn't just asking for a client cert on autopilot.

Peter.
Re: [TLS] [EXTERNAL] Re: Request mTLS Flag
Andrei Popov writes:

>Yes, but, arguably, such broken clients won't be fixed by adding new
>extensions/flags/etc. If they do not comply with the simple RFC language
>that exists, can we expect them to implement the new flag correctly?

I would argue that it's the server that's broken, not the client. An awful lot of (non-WWW) servers automatically request a client cert without anyone running the server being aware of it or, when asked, knowing how to disable it. The clients then sleepwalk their way past it with a zero-length reply and things continue as normal, with neither the server admin nor the client-side user being aware that certificate auth was requested and denied.

At least as a client, you can't read anything into seeing a cert request from the server, it's just a standard part of the handshake, like a keyex or a finished.

Peter.
Re: [TLS] I-D Action: draft-ietf-tls-deprecate-obsolete-kex-03.txt
This draft still has the same problem that's been pointed out previously:

   Clients MUST NOT offer and servers MUST NOT select FFDHE cipher
   suites in TLS 1.2 connections.

What this means is that if the implementation doesn't support ECC, as is the case for some, then it's in effect saying:

   Clients and servers MUST use RSA cipher suites.

Some people may actually read a bit further and see the MUST NOT RSA, but that's just as non-useful because now it's saying you can't do TLS at all. So it needs to say:

   Unless ECC suites are not available, [Clients MUST NOT ...].

Or just something that doesn't end up being MUST RSA as it's currently being interpreted.

I'd also go further and say that since FFDHE is allowed in TLS 1.3 it's also safe with EMS or LTS in effect, so it should really be:

   Clients that do not support TLS-EMS or TLS-LTS MUST NOT offer, and
   servers that do not support TLS-EMS or TLS-LTS MUST NOT select, FFDHE
   cipher suites in TLS 1.2 connections.

Peter.
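The effect of the proposed "unless ECC suites are not available" wording can be sketched as a small selection policy. This is a minimal illustration with invented names, not any particular implementation's API: prefer ECDHE, accept FFDHE when ECC isn't available (or EMS/LTS is in effect), and refuse to fall through silently to RSA key exchange.

```python
# Sketch of the key-exchange policy argued for above (hypothetical
# helper names): ECDHE first, FFDHE as the no-ECC fallback, never a
# silent fall-through to RSA key transport.

def select_kex(supports_ecc: bool, supports_ffdhe: bool) -> str:
    """Return the key exchange to offer for a TLS 1.2 connection."""
    if supports_ecc:
        return "ECDHE"
    if supports_ffdhe:
        # No ECC available: FFDHE is still vastly preferable to RSA keyex.
        return "FFDHE"
    raise ValueError("refusing to fall back to RSA key exchange")

print(select_kex(supports_ecc=True, supports_ffdhe=True))    # ECDHE
print(select_kex(supports_ecc=False, supports_ffdhe=True))   # FFDHE
```

The point of raising instead of returning "RSA" is that the fallback becomes a visible configuration error rather than a silent downgrade.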
Re: [TLS] [EXT] Re: WG Last Call for draft-ietf-tls-deprecate-obsolete-kex
Blumenthal, Uri - 0553 - MITLL writes:

>I’m aware of at least one company (using the term loosely) that uses
>custom group,

How does that work with TLS 1.3?

Peter.
Re: [TLS] WG Last Call for draft-ietf-tls-deprecate-obsolete-kex
Hubert Kario writes:

>FIPS requires to support only well known groups (all of them 2048 bit or
>larger), and we've received hardly any customer issues after implementing
>that as hard check (connection will fail if the key exchange uses custom
>DH parameters) good few years ago now.

Interesting, so you're saying that essentially no-one uses custom groups? My code currently fast-tracks the known groups (RFC 3526 and RFC 7919) but also allows custom groups (with additional checking) to be on the safe side, because you never know what weirdness is out there. Do you have an idea of what sort of magnitude "hardly any" represents?

And can something similar be said about SSH implementations? There are fixed DH groups and then the Swiss-army-knife diffie-hellman-group-exchange-*, but AFAIK the only groups that ever get exchanged there are the RFC 3526/7919 ones.

Peter.
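The fast-track-known-groups approach described above can be sketched as a lookup: accept a server's DH parameters immediately if (p, g) match a well-known group, otherwise fall through to expensive custom-group validation. The table entry below is a placeholder, not a real prime; an actual implementation would populate it with the RFC 3526 and RFC 7919 primes.

```python
# Sketch of the fast-track check: known (p, g) pairs skip the expensive
# custom-parameter validation path.  KNOWN_PRIMES holds a toy placeholder;
# a real table would contain the RFC 3526 / RFC 7919 primes (all with
# generator 2).

KNOWN_PRIMES = {
    0xD1F1: "toy-group",   # placeholder value, NOT a real standard prime
}

def classify_dh_group(p: int, g: int) -> str:
    if g == 2 and p in KNOWN_PRIMES:
        return "known"     # fast path, no further checks needed
    return "custom"        # needs full parameter validation

print(classify_dh_group(0xD1F1, 2))  # known
print(classify_dh_group(0xD1F3, 2))  # custom
```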
Re: [TLS] WG Last Call for draft-ietf-tls-deprecate-obsolete-kex
I wrote:

>Salz, Rich writes:

Argh, sorry, text-trimming fail, I was quoting Viktor Dukhovni but cut out the wrong block of text.

Peter.
Re: [TLS] WG Last Call for draft-ietf-tls-deprecate-obsolete-kex
Salz, Rich writes:

>The formulation I would choose would be:
>
> - MUST prefer ECDHE key exchange, when supported, over FFDHE key exchange.
> - MUST prefer FFDHE key exchange, when supported, over RSA key exchange.

I think there should also be some wording around avoiding falling back to RSA because of choices made elsewhere. In the cases I'm aware of, the use of RSA wasn't because anyone chose to use it but because some (I assume) best-practices document somewhere told admins "herp derp, disable DH" and the result was use of RSA without them being aware of it (it's led to weird configs where what might be enabled on one or both sides is a few ECDH suites at the start, followed by a large hole where FFDH is, and then finally a bunch of RSA suites at the other end).

I would hope no-one actually *chooses* to use RSA, it just ends up as the silent fallback when other things are unavailable. So perhaps a note wherever some form of "SHOULD NOT FFDHE" appears, along the lines of:

   Note that disabling FFDHE may cause clients or servers to silently
   fall back to the far less secure RSA key exchange. If choosing to
   disable FFDHE, users should ensure that this fallback does not occur.

I realise that "MUST prefer FFDHE" says this too, but since users have already fallen into this trap in the past it'd be worth emphasising how to avoid it.

Peter.
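The "MUST prefer" ordering plus the silent-fallback warning can be sketched as follows. The names are illustrative, not any real library's API; the warning fires exactly in the trap the post describes, where disabling DH leaves RSA as the best remaining option.

```python
# Sketch of the proposed preference ordering (ECDHE > FFDHE > RSA), with
# a warning when the configuration would silently leave RSA keyex as the
# first choice.  Hypothetical names for illustration.

PREFERENCE = ["ECDHE", "FFDHE", "RSA"]

def order_kex(enabled: set[str]) -> list[str]:
    ordered = [kex for kex in PREFERENCE if kex in enabled]
    if ordered and ordered[0] == "RSA":
        # The trap: DH was disabled somewhere and RSA became the fallback.
        print("warning: config silently falls back to RSA key exchange")
    return ordered

print(order_kex({"FFDHE", "RSA"}))  # ['FFDHE', 'RSA']
print(order_kex({"RSA"}))           # warning, then ['RSA']
```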
Re: [TLS] WG Last Call for draft-ietf-tls-deprecate-obsolete-kex
Viktor Dukhovni writes:

>What benefit do we expect from forcing weaker security (RSA key exchange
>or cleartext in the case of SMTP) on the residual servers that don't do
>either TLS 1.3 or ECDHE?

This already happens a lot in wholesale banking: the admins have dutifully disabled DH because someone said so, and so all keyex falls back to RSA circa 1995, the worst possible situation to be in.

There needs to be clear text in there to say that if you can't do ECC then do DH but never RSA, or even just "keep using DH because it's still vastly better than the alternative of RSA". At the moment the blanket "don't do DH" is in effect saying "use RSA keyex" to a chunk of the market.

Peter.
Re: [TLS] [EXTERNAL] Re: Servers sending CA names
Richard Barnes writes:

>Let's Encrypt issues roughly 3 million publicly trusted certificates per
>day that contain the client authentication EKU

But they just set that by default for every cert they issue, so it's pretty much meaningless. There are public CAs that set keyAgreement for RSA certs, and emailProtection for TLS server certs; that doesn't mean any of them ever get used for that.

(My more snarky response would have been that I should have asked that the IETF define a peaceOnEarth EKU so Let's Encrypt could set that as well :-).

Peter.
Re: [TLS] Servers sending CA names
Ilari Liusvaara writes:

>You mean overflow the maximum field size (64kB)?

No, just the 16kB message size, so you get what should be a ~100-byte cert request that's 20-30kB long. The code assumed - and I know this is crazy talk here - that a 100-byte message would fit easily into a 16kB I/O buffer; it just needed a quick mod to throw away the message in two parts rather than one.

Peter.
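The "throw away the message in two parts" fix described above amounts to skipping a length-prefixed message in buffer-sized chunks rather than reading it whole. A minimal sketch, using an in-memory stream as a stand-in for the network and an invented function name:

```python
# Sketch of discarding an oversized handshake message in chunks instead
# of assuming it fits the I/O buffer.  BytesIO stands in for the socket.
import io

BUFFER_SIZE = 16384  # 16kB I/O buffer

def skip_oversized_message(stream, msg_len: int) -> None:
    """Consume and discard msg_len bytes, one buffer-load at a time."""
    remaining = msg_len
    while remaining > 0:
        chunk = stream.read(min(remaining, BUFFER_SIZE))
        if not chunk:
            raise EOFError("truncated message")
        remaining -= len(chunk)

# A ~30kB bogus cert request followed by the next message.
stream = io.BytesIO(b"\x00" * 30000 + b"next-message")
skip_oversized_message(stream, 30000)
print(stream.read())  # b'next-message'
```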
Re: [TLS] Servers sending CA names
Salz, Rich writes:

>Is this generally used? Would things go badly if we stopped sending them?

Just as a data point, in the SCADA world it seems to be universally ignored. I've seen everything from servers that send a list containing every CA in existence, so much data in that one field that it overflows the TLS maximum message size (when queried, the server admins asked what a CA name list was and what it was used for), to a few random CA names that don't correspond to anything they'll accept (when queried, the server admins asked what a CA name list was and what it was used for), to nothing at all. I've also seen plenty of servers that send cert requests to the client without actually wanting a cert (when queried, the server admins asked what a cert request was and what it was used for).

The behaviour to make things work in this environment is to treat the cert request as a no-biased boolean:

* No cert request present -> Proceed
* Cert request present, no cert available -> Proceed
* Cert request present, cert available -> Auth with whatever cert you happen to have, using whatever algorithm it happens to use.

So far this has produced zero complaints about things breaking. The fact that these servers haven't received complaints from anyone else either indicates that pretty much every other implementation is doing something along similar lines.

Peter.
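The three-case behaviour above can be sketched directly. This is a minimal illustration with hypothetical names, not any particular implementation's API: the CA name list is ignored entirely, and the cert request only matters if a client cert happens to be available.

```python
# Sketch of the "no-biased boolean" cert-request handling described
# above: proceed unless a cert request is seen AND a cert is available,
# in which case authenticate with whatever cert/algorithm is on hand.

from typing import Optional

def respond_to_cert_request(request_seen: bool,
                            client_cert: Optional[str]) -> str:
    if not request_seen:
        return "proceed"
    if client_cert is None:
        return "proceed"                 # zero-length cert reply, carry on
    return f"auth-with:{client_cert}"    # use whatever cert we have

print(respond_to_cert_request(False, None))        # proceed
print(respond_to_cert_request(True, None))         # proceed
print(respond_to_cert_request(True, "rsa-cert"))   # auth-with:rsa-cert
```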
Re: [TLS] WGLC for draft-ietf-tls-rfc8446bis and draft-ietf-tls-rfc8447bis
Ben Smyth writes:

>Because pre_shared_key appears in ClientHello and ServerHello, whilst
>psk_key_exchange_modes only appears in the former?

That's a circular argument: pre_shared_key already has two different forms that depend on whether it's in the ClientHello or ServerHello, so this is saying "psk_key_exchange_modes has to be in a different extension because psk_key_exchange_modes is in a different extension". Making it { modes + info if required } in one extension, where the mode is right next to the info it applies to rather than being split across two extensions, would be the obvious way to do it.

Peter.
Re: [TLS] WGLC for draft-ietf-tls-rfc8446bis and draft-ietf-tls-rfc8447bis
On the subject of clarification, the update also needs to explain why PSK is split across two separate extensions, psk_key_exchange_modes and pre_shared_key, with complex and awkward reconciliation rules between them, and why the PSK has to be the last extension in the client hello. I can't see any reason for either of those two, which for the latter in particular raises the question of why an implementation would follow that apparently pointless requirement. Is this codifying someone's implementation bug? Do demons fly out of your nose if it's not the last extension?

Peter.
Re: [TLS] TLS 1.3 servers and psk_key_exchange_modes == [psk_ke]?
Viktor Dukhovni writes:

>I am tacitly assuming that the implementation community might be somewhat
>more pragmatic than the WG, and be willing to improve the behaviour of
>the current extension, or perhaps the "silent majority" of the WG would
>in fact be willing be more pragmatic on resumption, but haven't chosen to
>engage in this thread, and we could ideally even reach some language in
>an update that recommends more liberal settings in general, with
>punishment set aside only for the faithful who believe they're sure to
>never stray, in case they do.

It really depends on what the best way forward is for getting it working. The problem with adding even more conditions to the existing ones for the two PSK extensions (and I'll ask again, can anyone explain why a single function is split across two extensions?) is that "Errata exist" on the RFC's IETF page is really "Errata exist in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard'", while having a new standards-track RFC with "Updated by RFC " added to RFC 8446 means it'll actually get noticed and used.

Peter.
Re: [TLS] TLS 1.3 servers and psk_key_exchange_modes == [psk_ke]?
Viktor Dukhovni writes: >The protocol specification needs to say something along the lines of: I'm not sure if this will work though. The PSK stuff is already the second biggest dog's breakfast in the spec (why are there two extensions used to communicate PSK information, with complex reconciliation rules needed between them?), full of special cases and exceptions and weird requirements (if the client asks for meat in pre_shared_key then the server must abort the handshake if it doesn't have red wine for its pre_shared_key unless the client has indicated it doesn't drink in its psk_key_exchange_modes or has negotiated a vegetarian option in signature_algorithms that will be used during the handshake or indicated they're vegan in signature_algorithms_cert), adding the list you've given just makes it an even more complex mess, assuming you can get implementations to both adopt it and get it right. I think a safer option moving forward is "scrap it and order a new one", just do an RFC with a new, single extension with unambiguous semantics that reintroduces the TLS classic session resumption, but done with TLS 1.3 mechanisms. In other words just a standalone psk_ke in a new extension with a description of "if the client sends this, use it". Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] TLS 1.3 servers and psk_key_exchange_modes == [psk_ke]?
Viktor Dukhovni writes: >I took a look at whether it is practically possible for a client to "opt-in" >to (ostensibly cheaper) non-DHE TLS 1.3 resumption by sending a >"psk_key_exchange_modes" extension consisting of just "psk_ke". > >Turns out that at least when the server is OpenSSL, the client is likely to >be sad. We found that too: TLS classic-style session resumption is essentially impossible on TLS 1.3 because very widely-deployed implementations don't allow non-DHE PSK, the TLS 1.3 replacement for classic session resumption. We were getting complaints about timeouts with RTUs (SCADA devices) and eventually tracked it down to the fact that they had to perform an expensive full crypto handshake on each ping to the controllers they talked to because they couldn't do a resume. This is one of the (several) reasons I referred to in a previous post why TLS 1.3 can be a lot lower-performance than TLS classic. Luckily they were only testing 1.3 use, so they just dropped it and kept going with 1.2, which fixed the problem. Not really sure how to fix this, although at the moment "stay with TLS classic" seems to be the preferred option. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] How are we planning to deprecate TLS 1.2?
Viktor Dukhovni writes: >Yes, once TLS 1.3 is closer to 20 years old, we'll know whether TLS 1.2 can >or should be retired, but until such time, TLS 1.2 is likely to still be with >us (embedded in home routers, printers, refrigerators, ...). Another thing we need a lot more time to find out is whether, like HTTP after 1.1, TLS 1.3 has forked TLS. For HTTP there'll perpetually be two lines going forward, HTTP/2-and-later for web browsers and HTTP 1.1 for everything that isn't a web browser. For embedded/SCADA TLS use you've now got a complete second protocol stack to fit into your limited firmware space, it offers no real security advantages over (non-buggy) TLS 1.2, and its performance is often much worse than TLS 1.2 (yeah, citation needed for that, I'm working on writing up some of this), so that some users who did try TLS 1.3 quickly reverted to 1.2. So I'd say wait 20 years or so to see where things are going, and look across at HTTP/2 vs. HTTP 1.1 for a worked example. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] multi-identity support in RFC 8446
Chuck Lever III writes: >We're implementing TLSv1.3 support for PSK and note there is a capability in >the PSK extension described in S 4.2.11 for sending a list of identities. We >don't find support for a list of alternate identities implemented in user >space TLS libraries such as GnuTLS or OpenSSL. Is there a known reason for >that omission? If it's the same as similar locations in previous versions of TLS where it's possible to specify a list of X instead of just an X then it could be because no-one has any idea why you'd specify a list of X, or what to do with it if one does turn up. There are several fields where, in the past, we've had users ask what to do with them and it turned out, after some testing, that the answer is "whatever you want" because the other side pays no attention whatsoever to what's in there. Two that spring to mind are cert requests from the server, which some servers send out for every connection even though they don't actually want a cert from the client and get very surprised if one turns up (the user's comment on this was that no other software they'd used had an issue with this, which implied that exactly zero implementations actually paid any attention to it), and CA name lists, which some servers fill up with every CA name they've ever heard of with the server not actually caring which CA gets used, assuming they even care whether they get a response to the cert request. This actually extends to all of the other fields in the cert request as well, since the client typically has one single certificate to auth with and sends it, take-it-or-leave-it, without bothering to check whether it's in the server's long list of approved signature, hash, and CA names. Again, from interop testing, if you get a cert request from the server and have a cert you use it, if not you don't, and no server we could find ever complained about it. 
So the entire extension is in effect one of those zero-length signalling ones, telling the client "auth with a cert if you've got one, otherwise just keep going as if nothing had happened". Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
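The interop-driven client behaviour described above is simple enough to state as code. A minimal sketch, with hypothetical names (not any real library's API): treat the CertificateRequest as a boolean signal and ignore its CA-name and algorithm lists entirely.

```python
# Hedged sketch of the "zero-length signalling extension" client behaviour
# described above (names hypothetical): if the server asks and we have a
# cert, send it take-it-or-leave-it; otherwise send nothing and keep going.

def handle_cert_request(cert_request, our_cert):
    if cert_request is None:
        return None   # server didn't ask for client auth
    if our_cert is None:
        return None   # asked, but we have nothing: empty Certificate message
    return our_cert   # the one cert we have, CA/algorithm lists ignored
```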
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
Hubert Kario writes: >Because there are software stacks that allow configuration of arbitrary >parameters for FFDH (see GnuTLS, OpenSSL), and there are software stacks that >generate one public key share and reuse it for a long time, or allow >configuration of this kind of behaviour (see old OpenSSL, NSS for ECDHE). If we had to throw out every security mechanism that it's possible to configure badly then there wouldn't be any crypto left. For example every SSH server in existence can be configured to use username = root, password = password, but that doesn't mean we need to mandate ZKP authentication as the only allowed way to log into an SSH server. It's also quite likely you can configure a lot of TLS servers to use RC4/40 (at least with OpenSSL-based ones it's 'enable-weak-ssl-ciphers') but that doesn't mean we need to deprecate all use of symmetric crypto. If people want to go out of their way to make their server insecure then there's not much we can do to stop them. This is a bit of a philosophical question here, and probably not worth debating, but is it really necessary to put "Do not hold the wrong end of a chainsaw" - that's actually used on chainsaws - in a technical RFC? That's something for the software vendor to take care of if they're going to allow settings for users to shoot themselves in the foot. Even for the proposed BCP it seems a bit redundant, why single out this particular, probably almost nonexistent, misconfiguration when there are hundreds or even thousands of others that are more likely to occur? Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
Hubert Kario writes: >It's also easy and quick to verify that the server *is* behaving correctly >and thus is not exploitable. It's also a somewhat silly issue to raise, if we're worried about a server using deliberately broken FFDHE parameters then why aren't we worried about the server leaking its private key through the server random, or posting it to Pastebin, or sending a copy of the session plaintext to virusbucket.ru? If the server's broken it's broken and there's not much a client can do about it. (As an aside, -LTS fixes this by requiring FIPS-186-style FFDHE values rather than PKCS #3-style ones, although a determined server can still bypass even this level of verification, just as they can spike ECDHE in a dozen ways if they want). Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
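For reference, the kind of verification that FIPS 186-style parameters enable looks like the following. This is a minimal sketch (real code would also bound-check bit sizes and test p and q for primality): with an explicit subgroup order q a client can check both the domain parameters and the server's public value, which PKCS #3-style (p, g) parameters don't allow.

```python
def check_ffdhe(p, q, g, y):
    """Verify FIPS 186-style DH domain parameters (p, q, g) and a peer
    public value y.  Minimal sketch only: real code would also check
    bit lengths and the primality of p and q."""
    if (p - 1) % q != 0:
        return False                    # q must divide p - 1
    if not (1 < g < p - 1) or pow(g, q, p) != 1:
        return False                    # g must generate the order-q subgroup
    if not (1 < y < p - 1) or pow(y, q, p) != 1:
        return False                    # y must lie in that subgroup
    return True
```

The toy values p = 23, q = 11, g = 2 pass the check; a y outside the order-11 subgroup fails it.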
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
Hal Murray writes: >Would a BCP be a better approach? That might provide a good setting to >discuss the issues. There is no reason to limit a BCP to TLSv1.2 or FFDHE. That seems like a much better idea. A deprecate RFC can only say "no" while a BCP can cover alternatives, in this situation do this, in that situation do that. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
John Mattsson writes: >A more reasonable approach would be to deprecate all cipher suites without >_ECDHE_. > >While the WG is in deprecation mode, please deprecate all non-AEAD cipher >suites as well. RFC 7540 did this 7.5 years ago... An even more reasonable approach would be to mandate EMS, EtM, and (I realise I'm biased here) LTS, which solve all of the above problems without having to throw away a bunch of long-standing cipher suites with a massive existing deployed base. That's a simple, backwards-compatible tweak to the deployed base to fix existing problems rather than scrap-it-and-order-a-new-one to replace existing problems with a new set. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
Rob Sayre writes: >For my part, I'm sick of "IoT" or "SCADA" or "embedded" vendors just >endlessly keeping old cipher suites alive. The unwise cost-cutting in those >areas does not constrain the rest of the internet. And for my part I'm... well not really sick of but resigned to accepting the fact that as far as the WG seems to be concerned, nothing exists outside of the web [*] and there's no need to accommodate anything but that. Saying "lalalalala I'm not listening, I'm not listening" won't deal with the fact that there's a staggering amount of gear out there with product lifecycles sometimes measured in decades that needs a sound basis for making decisions about what to deploy, which this deprecation isn't providing. Maybe that's the way to resolve this, make it explicit that the deprecation applies for web use and not for other uses like SCADA, embedded, or anything else that needs to take long-term usage into account. Peter. [*] Once you exclude your list of IoT, SCADA, embedded, and the case I mentioned, transaction processing, you've pretty much ruled out everything but web use... well OK, admittedly there's still email (so opportunistic encryption) and a bunch of barely-visible stuff like any tunnels that for some reason don't already use IPsec/OpenVPN/WireGuard. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
Carrick Bartle writes: >In the situation you've described, they've been told they shouldn't use RSA >either, so clearly it doesn't matter to them what we've deprecated or not. Yup, because if you give people the choice between not A but not B either then they have to ignore one of the two, and without further guidance they've chosen to go with literally the worst possible option instead of the perfectly-OK one. Piggybacking a reply to your other message, anything that's online is DoS- able. If I want to DoS a web server, or anything at all for that matter, I'll hit it with a botnet, an attack that's effective on anything no matter what algorithm it uses. It seems the only real reason for deprecating DHE is that it's not fashionable. And as my earlier message pointed out, this WG fashion statement has real consequences in practice. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
Nimrod Aviram writes: >Let me clarify that the document also has RSA as a MUST NOT. > >So there will be no reason to read this document and switch from FFDHE to >RSA. If you tell people they can't have A but they can't have B either then they're going to have to choose one of the two in order to communicate, and in (at least some) banking it's RSA, the most insecure option there is, because they've been told they shouldn't use DHE. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] consensus call: deprecate all FFDHE cipher suites
Blumenthal, Uri - 0553 - MITLL writes: >I do not support deprecation, because there will be deployed devices (IoT, >SCADA) that aren’t upgradable – and the new stuff will have to access them. It's actually much worse than just SCADA, there are deployments in things like wholesale banking where the semi-deprecation of DH suites has led to them falling back to RSA instead. This pointless removal of FFDHE, while it'll be written as MUST NOT FFDHE, will actually be MUST RSA in some environments. In other words not only will it not make things any more secure, it'll make some things much, much less secure. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] sslkeylogfile
Martin Thomson writes: >The exact same thing, just using different words and style. But it's not the same thing, it only seems to cover some TLS 1.3 extensions. Thus my suggestion to call it "Extensions to the SSLKEYLOGFILE Format for TLS 1.3". Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] sslkeylogfile
Martin Thomson writes: >Maybe the web page is easier to consume, but a spec needs to be a little more >precise in definition. Well at the moment the web page defines what's used in practice and the spec defines... something? A hope for the future? An extension to the current usage? At the moment if I'm using something like Wireshark to debug a connection issue I need to go to the Mozilla page because that's what tells me what's needed by the app consuming the data. Maybe if it was renamed to "Extensions to the SSLKEYLOGFILE Format for TLS 1.3" it'd be a bit clearer. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] sslkeylogfile
Martin Thomson writes: >I just posted https://datatracker.ietf.org/doc/draft-thomson-tls-keylogfile/ This looks like some variant of https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Key_Log_Format but I'm not sure what it is or what form it takes. Is it an extension of that for TLS 1.3? If it is then the form looks quite different from the existing one. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] [lamps] [EXTERNAL] Re: Q: Creating CSR for encryption-only cert?
John Gray writes: >You can replay the CSR and get the certificate request by the original party >signed by whatever CA you want, but would that do you any good if you don't >have the private key? That's exactly the point, which others have also made in the thread. Yes, you can do this, but then what? Or, to be pedantic, "then what that's actually useful in practice to an attacker rather than something that justifies a conference paper?". In other words, what real-world problem are we actually solving by requiring PoP, how much existing practice will it break by doing so, and is it worth the cost? Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] [lamps] [EXTERNAL] Re: Q: Creating CSR for encryption-only cert?
Blumenthal, Uri - 0553 - MITLL writes: >Peter, "Compromised" in the context must necessarily mean "someone stole the >key", because if someone "broke the crypto" - then none of the certs issued >by that CA is worth the weight of electrons that carried it. "Compromised" meant (at the time, I was trying to avoid bringing in specific references) someone factoring the 512-bit RSA key in the cert. Since CAs used 2048-bit keys in HSMs for signing this wasn't an issue for them. Stolen keys weren't any more than a minor theoretical consideration compared to attacking the crypto until the cybercrime industry started doing it en masse, completely ignoring the crypto in the process (see Shamir's Law). Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] [lamps] [EXTERNAL] Re: Q: Creating CSR for encryption-only cert?
Tomas Gustavsson writes: >I'd like to add that adding a challenge-response POP need to be built into >protocols as well, not only in CSR formats/specification. Only adding a >method for this to PKCS#10, without also specifying how it is to be used in >ACME, CMP, EST and SCEP will most likely wreak total havoc. We also need to ask CAs and users what they want. The advantage of a CSR is that it can be pasted into a web form, emailed, POSTed to a server, and many other mechanisms. Challenge-response PoP breaks all of that, which means it breaks most of the common mechanisms for getting a cert outside the web PKI, where CSRs are near-universal. So even adding a mechanism for this to PKCS #10 will wreak total havoc, or in practice just get ignored. This is why the nearly 30-year-old PKCS #10, like the B-52, keeps outliving all of its successors: it gets the job done in a way that suits users. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] [lamps] [EXTERNAL] Re: Q: Creating CSR for encryption-only cert?
von Oheimb, David writes: >Peter, the argument you gave below: > >> I mean what actual attack that's been actively exploited in the real world >> will use of PoP prevent? >> We've been shipping raw PKCS #10's around for decades (with no PoP) without >> causing the collapse of civilisation. > >appears invalid to me because PKCS#10 requires a self-signature (at least, >this is how they are understood/used by most implementations) and thus does >provide a PoP - and maybe civilization has survived just because of that A self-signature on a CSR isn't a PoP though: I can intercept your CSR and get myself a certificate issued for it even though I don't have the private key. >Strictly speaking, it is invalid (also) because the absence of known real- >world attacks does not prove that real attacks do not exist by now or cannot >be found in the future. Sure, but we have lots of real-world attacks being actively exploited at scale that we aren't dealing with (a great quote from a vulnerability researcher on this a few years ago was "If there's a booming criminal marketplace associated with your security mechanism then it's not working"). Once those are addressed we can look at the near-infinite number of theoretical attacks that no-one's ever been able to figure out what to do with. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] [lamps] [EXTERNAL] Re: Q: Creating CSR for encryption-only cert?
Tim Hollebeek writes: >There’s also the problem that there’s no standard for secure proof of >possession for revocation, despite a number of us calling for one for years. This is one of the 8,000 (approximately) great unresolved PKIX disagreements where about half of PKIX thought revocation should be made as easy as possible to be able to deal with things like compromised keys [0] and the other half of PKIX thought it should be made as difficult as possible to be able to deal with DoS via hostile revocations (during one of the interminable debates around this, one of the participants suggested that supplicants should be required to fly to the CA's place of business and beg them on their knees to revoke the cert). The difficult-as-possible side mostly won in the standards (e.g. the CMP requirement to sign a revocation request for a key you've lost before it can be revoked) while the easy-as-possible mostly won in practice because that's what people actually wanted. Peter. [0] "Compromised" meaning someone broke the crypto, not stole the key, since that's not supposed to happen. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] [lamps] [EXTERNAL] Re: Q: Creating CSR for encryption-only cert?
A general question, motivated by "I need a different hammer because the one I'm currently using isn't able to pound screws in properly": Why is PoP actually required? And by this I don't mean "why is it in theory a good thing", I mean what actual attack that's been actively exploited in the real world will use of PoP prevent? We've been shipping raw PKCS #10's around for decades (with no PoP) without causing the collapse of civilisation. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Creating CSR for encryption-only cert?
Brockhaus, Hendrik writes: >During the last LAMPS interim call, I mentioned this topic as well. It was >decided to add support for KEM keys in RFC4210bis. Sean said, that he is >working on a draft on PoP for KEM keys. Uhh... CMP has supported KEM keys since day one. And signing keys, and key agreement keys, and door keys, and numeric keypad keys, C-minor-scale keys, wall plaster keys, and yu-shiang whole fish as well because why not? Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] New Version for draft-segers-tls-cert-validation-ext
Ashley Kopman writes: >But I want to be clear that I do not intend to implement a solution and try >to sell it to the community. Sure, and I wasn't saying that, just pointing out the problems that have arisen in other situations where industry bodies have adopted orphan standards that ended up requiring custom implementations and support, which gives vendors a pretty captive market. Speaking of which, the ASN.1 diagnostic tool dumpasn1 doesn't currently have any real support for SCVP in it because until now I've never been able to find any examples of it. Do you, or anyone else, have samples of a typical request and response, and the accompanying policy request and response, that I could use to test dumpasn1 on? I haven't looked at RFC 5055 for a long time but just skimmed it recently and it looks like a prime example of the problems I described in my previous message, all SEQUENCE OF SEQUENCE { CHOICE { CHOICE { CHOICE { SEQUENCE OF { OPTIONAL, OPTIONAL, OPTIONAL, OPTIONAL, OPTIONAL } } } } }, there's so many variants and optional pieces that I'd have no idea what's actually used in practice. That's also why I'm fairly surprised that anyone was able to achieve interoperability with that as the spec, unless there's a profile of it somewhere that I don't know about. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] New Version for draft-segers-tls-cert-validation-ext
Ashley Kopman writes: >Although our use case is aviation, our goal is to write this draft so that it >can be used by other domains. Someone has to say this, so I may as well: I don't think you'll have to worry about anyone else using it. The PKIX WG left behind it a long trail of RFCs that no-one ever implemented except for maybe one sample attempt by the company that commissioned the RFC. One notorious example is one of the protocols with "path validation" in the name somewhere for which the sole known sort-of implementation at the time was a set of abandoned patches to OpenSSL dating from about 2005 located on an FTP server in a disused lavatory in France with a "Beware of the leopard" sign on the door [0]. The problem is that every now and then some industry body finds one of these RFCs and decides that they're exactly what they need without being aware of the fact that support for them is essentially nonexistent, thus my surprise when mention of SCVP use came up a few months ago: it was one of the many protocols that had no known, or at least visible, support anywhere. From a vendor point of view this is brilliant because you get paid once to implement your guess at some protocol that none of the usual suspects supports so it's necessary to commission a custom implementation, and then again for each other custom implementation that it has to talk to when you try and reverse-engineer what their guess at the protocol was. Then once it's implemented you've got a captive market with very little choice for alternatives or competitive pressure. It's, um, great for business. More difficult is when said industry body specifies stuff that's in the spec but that absolutely nobody implements, typically because nobody can figure out why it's in the spec or what purpose it serves. 
If there's five different ways of doing the same thing (there usually are in PKIX specs), industry bodies have an unerring knack for picking one of the four that no-one implements because when you do you realise they don't make any sense, rather than the one that does. This is less good because as an implementer you're now stuck between a rock and a hard place. Technically what's being asked for is in the spec so you end up in a long argument over whether the contract should be interpreted as "follows the spec" or whether it should be "works in practice", and if it's the former, how the other implementation managed to create something that appears to work ("accept any old rubbish that arrives" seems to be one approach, including in one case a Stalin quote in what was supposed to be a security-relevant field). After all that, two comments: 1. You may as well make it as aviation-specific as you like and thereby get exactly what you want, I doubt anyone will care, or even notice. 2. We really need a register of RFCs that essentially nothing supports or implements to prevent more cases of industry groups thinking they should do something with them. Peter. [0] It may have been Italy or one of the Benelux countries, I can't remember exactly. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Getting started, clock not set yet
Kyle Rose writes: >A large attack surface can't be avoided with the MTI for these protocols. It can be vastly reduced by only implementing the MTI rather than every possible bell and whistle in existence. Also since an RTU (remote terminal unit) doesn't need to talk to every single piece of broken software on the planet but only what the master station it's talking to is running, all you need is whatever the de facto universal standard config is, either DH+RSA+AES or P256 ECDH/ECDSA+AES and nothing else, no suite negotiation, no extensions, nothing. And that goes all the way up and down the protocol stack. TCP options, fragmentation, UDP, ICMP, packet reordering, most flow control and congestion avoidance, none of that's there. Fuzzing these things is mostly a waste of time because there's no alternate code paths or corner cases to discover in the fuzzing. Makes them remarkably resistant to attack because there's very little there to attack. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
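As a rough illustration of the "one fixed config, nothing else" idea using Python's stdlib ssl module (the RTU firmware itself would of course be far more minimal than an OpenSSL-backed stack; the suite name assumes a standard OpenSSL build):

```python
import ssl

def make_fixed_context():
    # Pin the protocol version and a single ECDHE-ECDSA AEAD suite;
    # everything else in the handshake is then effectively non-negotiable.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2
    ctx.set_ciphers("ECDHE-ECDSA-AES128-GCM-SHA256")
    return ctx
```

With an embedded stack the equivalent is done at compile time, which removes the alternate code paths rather than merely disabling them.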
Re: [TLS] Getting started, clock not set yet
Kyle Rose writes: >IMO, the two requirements "Prohibit upgrades" and "Leverage general-purpose >network protocols with large attack surfaces" are in direct conflict. Only if you implement them with large attack surfaces, for which again see my earlier comments. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Getting started, clock not set yet
Kyle Rose writes: >I wish I had some more context for this area of embedded devices. For example: > > * Why is an RTC more expensive (along whatever axis you choose) than a NIC >(wifi or ethernet)? Quoting "IoT / SCADA Crypto, What you Need to Know": The device often won't have any on-board time source because it's not feasible to include an RTC in the design. An RTC adds considerable cost (possibly as much as the rest of the device), may be larger/heavier than the rest of the device, typically requires one or more extra assembly steps to fit because it can't be installed via pick-and-place and reflow soldering, makes the device more vulnerable to issues like high and low temperatures that embedded devices are typically exposed to, and wears out (the battery dies) long before the rest of the device does. > * What classes of devices would reasonably sit on a shelf for ten years and >subsequently prove useful without being updated? Any number of SCADA devices. They're an exact replacement for an existing device, so replacing something that's failed with something exactly identical is a requirement. You don't want to replace it with something that someone's fiddled with in the meantime because you can't guarantee that it'll behave the same as the original device did. > * If it's been sitting on a shelf for ten years, why is reattaching it to >the network easy, while plugging it into an upgrade kiosk first and *then* >reattaching it to the network is hard? See my earlier comments on this. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] TLS ECDSA nonce reuse attack?
Robert Moskowitz writes: >The article is unclear if this is a TLS 1.2 and/or 1.3 problem. It does >claim that 1.3 does not fix all problems with TLS. It's neither a TLS 1.2 nor a TLS 1.3 problem: it's an ECDSA problem. The paper happened to use TLS because it's a convenient way to probe the Internet for problematic implementations, but it's a problem with ECDSA, not with TLS. You could do the same thing with SSH, there's just a lot less of it out there, and since TLS servers are things you want everyone to see and access while SSH servers are ones you don't, I would imagine probing SSH servers in the same manner would run into a lot more problems than probing TLS ones, e.g. running into fail2ban rules and similar, which would mess up your results. As an aside, it also backs up my comments earlier about ECDH being just as problematic as DH in TLS: Our data shows that non-unique server ECDH parameters are very common; in the UCSD data almost 15% of observed connections used a non-unique set of server key exchange parameters. In other words telling everyone to move from DH to ECDH just moves the same problem across to a different algorithm. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
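The reason nonce reuse is fatal for ECDSA is simple algebra mod the group order: two signatures made with the same nonce k (and hence the same r) leak both k and the private key d. A sketch of just the algebra (curve arithmetic omitted, r treated as given, toy numbers in the test rather than a real curve):

```python
def recover_private_key(n, r, s1, h1, s2, h2):
    """Given two ECDSA signatures (r, s1) and (r, s2) over message hashes
    h1, h2 that reused the same nonce k (hence the same r), recover the
    private key d.  From s_i = k^-1 (h_i + r*d) mod n:
        k = (h1 - h2) / (s1 - s2) mod n
        d = (s1*k - h1) / r       mod n
    Sketch only: n is the group order, all curve arithmetic is elided."""
    k = (h1 - h2) * pow(s1 - s2, -1, n) % n
    d = (s1 * k - h1) * pow(r, -1, n) % n
    return d
```

This is why "non-unique server key exchange parameters" style measurements matter: the same reuse habit, applied to ECDSA nonces, hands over the signing key outright.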
Re: [TLS] Getting started, clock not set yet
Kyle Rose writes: >Expired CAs are definitely a problem for PKI participation after such a >delay, but probably one that is dwarfed by the near certain existence of >known vulnerabilities in firmware that hasn't been updated in 10 years. So >it's probably best they remain air-gapped and don't participate in active >networked systems until they've been updated, which would then include new CA >certificates. Getting a bit off-track, but the ones I've encountered won't be updated, typically because there's no way to do so and/or because the software is written to a higher standard than the usual Internet-facing stuff. One common defence, although it's not really intended as such, is that there's nothing there to attack, everything is hardcoded and fixed and does just barely enough to communicate with the other side, so there's very little attack surface. >Consequently, I would not recommend any device interact with the web without >being able to establish that server certificates have not expired. Sure, but that's web use. For SCADA use you don't care (or check) whether the certificates have expired or not because that's the difference between "things work" and "things don't work": "When PLCs’ certificates expire, they just disappear off the network. Plus, 99 percent of the industrial world has no idea what a certificate is, so how do they troubleshoot the problem at 2am?" (from "Control Systems Security from the Front Lines"). I would assume this extends to lots of non-SCADA cases as well: If you want things to work, you don't check for cert expiry. Or revocation. >>Ignoring CA billing-cycle^H^H^Hexpiry dates > >You repeatedly bring up this point, but you do realize that one of the people >you're arguing with was instrumental in the establishment of a mechanism for >provisioning *free* web PKI certificates, right? The cost of procuring signed >certificates is no longer an obstacle to participating in the web PKI. 
Sure, and I pointed out that it was a thing for commercial CAs. In any case though I wasn't commenting on that but on why a 1-year expiration period is used, because the alternative was to point out that tying a supposed security parameter to the earth's (approximate) orbital period seems a bit paleolithic [0].

And in that case I think we should take a less geocentric view of certificate expiry and instead use the orbital period of the largest planet in the solar system, Jupiter, over 300 times the mass of the earth, so it's got to be more significant. Giving certificates an expiry time of 12 years to match Jupiter's orbital period should be enough to cover most use cases (extending this to Pluto, and whether it should actually be Pluto or stop at full planets like Neptune, is left as an open discussion point).

Peter.

[0] Based on nature-worship dating back to the Old Stone Age; I don't know whether they knew the earth's orbital period back then or not, but I believe it was well known by the Bronze Age.

___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Getting started, clock not set yet
Christian Huitema writes:

>For example, the device will get some notion of time from the dates in the
>certificates that are provisioned during enrollment. Maybe that's enough to
>move from the 10 years scenario to the one year scenario, and then call NTP.
>But it would probably be better to spell it out.

That's one of several ways I've seen of getting an approximate time: if you get fed a cert with validFrom = X then you know that it's at least time X. A more common one is to use HTTP as NTP and take the time from the "Date:" line. For store-and-forward, you take the message signing time, e.g. the CMS signingTime attribute. One I haven't seen for a while (thankfully) is to take the time in the TLS server hello, the gmt_unix_time, and use that (I never set that to anything valid so as not to expose the client or server to time-based attacks; the problem was that sometimes it looked valid enough that it messed up the other side).

In any case there's no need to implement yet another protocol on top of the existing ones, you can make do with what you've got - there are timestamps in so many things that you can typically find one in existing messaging.

Peter.
___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
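The HTTP-as-NTP trick needs nothing beyond parsing the "Date:" header, and combining it with a certificate's validFrom gives a usable lower bound on the current time. A minimal sketch of that combination; the header value and the validFrom date are hypothetical captured data, not live lookups:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

# Hypothetical Date: header taken from any HTTP response the device sees
date_header = "Wed, 21 Oct 2015 07:28:00 GMT"
http_time = parsedate_to_datetime(date_header)

# validFrom of a provisioned certificate: it is at least this time now
# (hypothetical value)
cert_valid_from = datetime(2015, 6, 1, tzinfo=timezone.utc)

# The best available estimate is the latest of the lower bounds
approx_now = max(http_time, cert_valid_from)
```

This is only ever a lower bound, but for a device with no RTC it is enough to decide, say, that a certificate whose notBefore is in the future cannot yet be valid.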
Re: [TLS] Getting started, clock not set yet
Kyle Rose writes:

>What Peter said isn't quite right, since (for example) you wouldn't want to
>be obliged to distribute revocations for compromised but long-expired
>certificates under the assumption that a properly-functioning client wouldn't
>accept them anyway

Ah, good point. However, you also need to look at what problem is being solved. The OP mentioned systems without real-time clocks, which most of the time means systems outside of the web PKI - try starting a PC whose clock is too far off and lots and lots of things break badly, at which point you ask the user to set the time correctly regardless of NTP availability.

So we're down to mostly non-web-PKI devices and/or the ten-year problem. I've encountered the latter several times with gear that sits on a shelf for years and then, when it's time to provision it, all the certificates have long since expired, which is another reason why you ignore expiry dates (or at least you ignore them after you get hit by the first major outage caused by this, because until then no-one realised that it was an issue, a ticking time bomb that may take years to detonate).

That leaves revocation, which alongside ignoring expiry dates is another thing that's commonly ignored in SCADA, both for the same reason as expiry dates are ignored, you don't want to DoS yourself, and because in many cases there's neither a logical nor a practical basis for revocation or revocation checks. For example in typical SCADA networks a device is removed by shutting off its access, not by adding an entry to a CRL somewhere and hoping someone notices. In fact it's not even clear what certificate would be revoked in order to achieve some effect, or why.
In addition, since revocation checking assumes online access to a CA, which typically isn't the case (I know of one setup where CRL delivery is by USB key once a year or so, and AFAIK the CRL is always empty because you don't want to knock a system offline with a revocation), there's no practical revocation checking done even if someone can figure out a logical reason for performing one.

So I think before jumping in with a solution, we'd need to look at actual real-world use cases to determine what actually needs to be solved. Ignoring CA billing-cycle^H^H^Hexpiry dates seems to be the simplest solution for many cases, and is more or less necessary for the ten-year problem.

Peter.
___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Getting started, clock not set yet
Hal Murray writes: >Many security schemes get tangled up with time. TLS has time limits on >certificates. That presents a chicken-egg problem for NTP when getting >started. > >I'm looking for ideas, data, references, whatever? For commercial CAs, the expiry time is a billing mechanism, not a security mechanism. A certificate is no more, or less, valid at 23:59:59 than it is at 00:00:01, no matter what the subscription renewal time in it says. It's fairly widespread practice in SCADA to completely ignore expiry times because equipment that takes itself offline at 4am at a site six hours' drive away because of an expired certificate is the last thing you want. So set up the TLS connection, ignore the expiry time, perform your NTP update, and then if necessary do the expiry check (unless it's SCADA gear, in which case don't). Nothing of value will be lost. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Before we PQC... Re: PQC key exchange sizes
Phillip Hallam-Baker writes: >Quantum Annoyance: I thought a Quantum Annoyance was someone who keeps banging on about imaginary attacks that don't exist as a means of avoiding having to deal with actual attacks that have been happening for years without being addressed. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] draft-deprecate-obsolete-kex - Comments from WG Meeting
David Benjamin writes: >Regardless, I don't think it's worth the time to define and deploy a fixed >variant of TLS 1.2 DHE. We've already defined a successor twice over. TLS 1.3 is a non-starter for lots of embedded stuff so that leaves ECDHE which I assume is what you're referring to with "successor twice over". That as a solution is a problem too for implementations that don't do ECC or if a problem is ever found in the ECC algorithms... well, actually lots of problems *have* been found in ECDSA/ECDH needing at least as many band-aids as FFDH [0] so that's a bit of a tautology. So it is worth fixing, and in particular it doesn't cost anything to say "do this if you want to use DH safely". Peter. [0] I haven't totalled the score for both sides so this is an approximation. Also a number of problems on both sides are due to poor implementations rather than actual problems with the algorithms, so they may or may not count. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Authentication weaker in PSK-mode?
Rob Sayre writes: >Couldn't an implementation use data from a preexisting agreement in a >conventional TLS handshake? Yep, that's more or less TOFU then. TLS isn't supposed to do that though because then it would look like it was SSH, or some reason like that. I sketched out TOFU-for-TLS years ago but never did anything with it because I just couldn't face the headache of trying to get it through the TLS WG. If there's any interest in it I could see if I've still got the text lying around somewhere and turn it into a draft. (Note that this is different from session resumption/session tickets in that it's a new session authenticated with previous-session data, not resuming an existing session based on cached data, so there's no need for a server to hang onto everyone's session state in perpetuity as there would be with resumption/ session tickets. It also provides PFS if the (EC)DH suites are used while resumption/session tickets don't). Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
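The TOFU model sketched here is straightforward to express outside the protocol: pin a fingerprint of the first certificate (or public key) seen for a peer, then require subsequent connections to match it. A minimal illustration, with the pin store reduced to a dict and placeholder bytes standing in for real DER certificates:

```python
import hashlib

def tofu_check(pins, host, der_cert):
    """Trust-on-first-use: pin the SHA-256 of the first cert seen per host,
    then flag any later change."""
    fp = hashlib.sha256(der_cert).hexdigest()
    if host not in pins:
        pins[host] = fp          # first contact: record the pin
        return "first-use"
    return "match" if pins[host] == fp else "mismatch"

pins = {}
assert tofu_check(pins, "example.com", b"cert-A") == "first-use"
assert tofu_check(pins, "example.com", b"cert-A") == "match"
assert tofu_check(pins, "example.com", b"cert-B") == "mismatch"
```

A real deployment would persist the pin store and decide policy on "mismatch" (hard fail vs. warn), but the mechanism itself is this small, which is part of the argument for it.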
Re: [TLS] draft-deprecate-obsolete-kex - Comments from WG Meeting
Ilari Liusvaara writes: >Unfortunately, that does not work because it would require protocol >modifications requiring coordinated updates to both clients and servers. I was thinking of it more as a smoke-em-if-you-got-em option, since -LTS is by negotiation it'd be something to the effect that if you're using -LTS then you're covered, otherwise do X. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] draft-deprecate-obsolete-kex - Comments from WG Meeting
An additional comment on this, a pretty straightforward solution is to use the TLS-LTS one: TLS-LTS sends the full set of DH parameters, X9.42/FIPS 186 style, not p and g only, PKCS #3 style. This allows verification of the DH parameters, which the current format doesn't allow:

   o  TLS-LTS implementations MUST send the DH domain parameters as
      { p, q, g } rather than { p, g }.  This makes the ServerDHParams
      field:

         struct {
             opaque dh_p<1..2^16-1>;
             opaque dh_q<1..2^16-1>;
             opaque dh_g<1..2^16-1>;
             opaque dh_Ys<1..2^16-1>;
         } ServerDHParams;      /* Ephemeral DH parameters */

      Note that this uses the standard DLP parameter order { p, q, g },
      not the erroneous { p, g, q } order from the X9.42 DH
      specification.

   o  The domain parameters MUST either be compared for equivalence to a
      set of known-good parameters provided by an appropriate standards
      body or they MUST be verified as specified in FIPS 186 [9].
      Examples of the former may be found in RFC 3526 [32].

That pretty much solves the problem once and for all without needing magic-number groups or similar.

Peter.

From: TLS on behalf of Nimrod Aviram
Sent: Friday, 29 July 2022 02:41
To:
Subject: [TLS] draft-deprecate-obsolete-kex - Comments from WG Meeting

Hi Everyone, Thank you for chiming in with comments and suggestions regarding draft-deprecate-obsolete-kex :-) I've tried to summarize everyone's comments below, hopefully grouped by subject. Apologies in advance if I missed anything (or misspelled names...), please do reply to this thread :-) My intent here is only to make sure we have a good record of the comments made. I hope to follow up soon with a suggested way forward for the draft.

thanks, Nimrod

===

Scott Fluhrer: We can only check for group structure if it's a safe prime, and even for a safe prime it's too expensive. Suggest limiting groups to a safelist.

Mike Ounsworth: Automated scanning tools routinely flag standardized FFDHE groups.
Daniel Kahn Gillmor and Thom Wiggers: This is because of the Logjam paper and precomputation. But they missed that the advice to generate your own DH params was for 1024-bit parameters, for software that didn't support anything else.

Daniel Kahn Gillmor: Would be good to discourage non-standard groups, while acknowledging the original argument for non-standard groups and explaining why it doesn't motivate non-standard groups today.

Viktor Dukhovni: Postfix is far from the only one with non-standardized, built-in default groups. Even for Postfix there are several groups, depending on the version. Would be hard to build a list of widespread groups.

Ben Kaduk: Can we start a registry for safe, widespread groups?

Martin Thomson: We tried using a safelist (that included only 7919 groups? - Nimrod) but people use weird groups, and we couldn't turn that on.

David Benjamin: Agree, better to turn off FFDHE entirely. The deployability issue with 7919 is also documented in https://mailarchive.ietf.org/arch/msg/tls/bAOJD281iGc2HuEVq0uUlpYL2Mo/ https://mailarchive.ietf.org/arch/msg/tls/DzazUXCUZDUpVgBPVHOwatb65dA/

Uri Blumenthal: We should neither recommend nor discourage non-standard groups. Leave it to each operator to decide for themselves, they likely know what they're doing.

Jonathan Hoyland and Martin Thomson: The pen-testing comment provides a counterargument.

Uri Blumenthal: The draft is unnecessarily strict, from both deployment and security points of view. Examples of stuff that should be retained: RSA, FFDHE. PQ implications: all the NIST PQC winners and finalists are KEMs, not KA - aka, similar to RSA rather than DH.
___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
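The FIPS 186-style check that sending { p, q, g } enables is cheap compared with trying to infer group structure from { p, g } alone. A hedged sketch of what a client might verify, using Miller-Rabin for primality and a deliberately tiny toy group (p = 23, q = 11, g = 2) so the example runs instantly; real parameters would be 2048 bits or more:

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def validate_dh_params(p, q, g):
    """FIPS 186-style domain-parameter validation for { p, q, g }."""
    return (is_probable_prime(p) and is_probable_prime(q)
            and (p - 1) % q == 0           # q is the subgroup order
            and 1 < g < p - 1
            and pow(g, q, p) == 1)         # g generates the order-q subgroup

# Toy group: p = 2q + 1 with q = 11; g = 2 generates the order-11 subgroup
assert validate_dh_params(23, 11, 2)
# g = 5 has order 22, not 11, so it fails the subgroup check
assert not validate_dh_params(23, 11, 5)
```

With only { p, g } on the wire, the pow(g, q, p) == 1 check is impossible because the client never learns q, which is the gap TLS-LTS closes.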
Re: [TLS] Authentication weaker in PSK-mode?
Ben Smyth writes: >should we consider PSK-mode authentication weaker than certificate-based >authentication? No, it's much stronger. With cert-based server auth, a client will connect to anything that has a certificate from any CA anywhere, in other words pretty much anything at all, and declare the connection secure. It's slightly better than anon-DH, but it offers almost no protection against phishing, the most common attack on the web today. The best form of this mixes in cert-based client auth as well, so the client connects to a phishing site that authenticates itself with a CA-issued cert, the client then authenticates with a CA-issued cert, and the result as far as the client is aware is a fully CA-certified cryptographically secured connection to a site that's actually run by the MageCart Syndicate. With PSK a client can only connect to a server that proves knowledge of the shared secret, which immediately kills phishing because the attacker would need to prove knowledge of the credentials they're trying to phish before they can phish them. (Note that this is for web use, for non-web use like SCADA the software typically hardcodes in certificates and keys and will only trust those, which in a sense makes it more PSK than cert-based auth). Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
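The property doing the work in the PSK argument is mutual proof of possession: the server must demonstrate knowledge of the secret before the client reveals anything. This is not the actual TLS-PSK handshake, just the principle, reduced to an HMAC challenge-response over a hypothetical out-of-band-provisioned key:

```python
import hashlib
import hmac
import secrets

def prove_knowledge(psk, challenge):
    """Answer a challenge; only computable with knowledge of the PSK."""
    return hmac.new(psk, challenge, hashlib.sha256).digest()

psk = b"provisioned out of band"      # hypothetical shared secret
challenge = secrets.token_bytes(32)   # fresh per connection

# A genuine server can answer; a phishing site without the PSK cannot
server_proof = prove_knowledge(psk, challenge)
assert hmac.compare_digest(server_proof, prove_knowledge(psk, challenge))

# An attacker guessing a key produces a proof that fails verification
fake_proof = prove_knowledge(b"wrong key", challenge)
assert not hmac.compare_digest(fake_proof, server_proof)
```

A phisher would need the very credential they are trying to steal before the victim's client would talk to them, which is the anti-phishing property described above.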
Re: [TLS] Draft TLS Extension for Path Validation
An indirect question on the overall premise here: Given that SCVP is essentially nonexistent (unless there's some niche market somewhere using it that I'm not aware of, which is why I didn't use an unqualified "nonexistent"), does it really matter much? If an RFC falls in the forest and all that... Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Does revocation matter?
Felipe Gasper writes: >It begs the question … how relevant is certificate revocation nowadays? How >big of a problem is it if TLS validity checks ignore it? Given that mbedTLS is unlikely to be used in public web servers, which means in turn it's unlikely to be used with certificates issued by public CAs, the issue of revocation probably won't crop up - you just use whatever trust mechanism was used to set up the initial cert to set up its replacement. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] supported_versions in TLS 1.2
Rob Sayre writes: >The WG is not obligated to support TLS 1.2. Is that an official WG position, that the TLS WG has abandoned TLS 1.2? If it is, can I have change control over it, because if the WG doesn't want to support it, someone will have to. (I'm not fussed either way, but would like to see things resolved rather than just being left in limbo). Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] supported_versions in TLS 1.2
Salz, Rich writes: >Peter has forgotten more about long-term embedded applications than the rest >of us have experience. I’ll leave it to him to say why it’s important. I was making a more general point about not assuming that the only thing that matters is TLS 1.3 vs. TLS 1.2, and that that's all that needs to be accommodated. Because of the TLS family A vs. family B protocol fork, there will be family A around more or less forever. For example just a few days ago I was part of a long conference call with a major global user who was looking at a minimum 15-year (but in practice I expect more like 20-30 year) support plan for an upcoming rollout of family A TLS. So just because TLS 1.3 exists doesn't mean all work on, and accommodation of, earlier versions should stop. In particular that's why I wrote the TLS-LTS doc, because that explains how to apply family A in the safest manner possible for the foreseeable future. In the specific case of supported_versions, there's no reason why the same thing can't be used to deal with e.g. TLS 1.0 -> TLS 1.2 version intolerance, which is still a thing. That's one thing that SSH did right (alongside a lot of stuff that TLS does much better), you can fingerprint a server via its ID string and work around problems when you connect without needing to change the code on the server you're connecting to. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] supported_versions in TLS 1.2
David Benjamin writes:

>The operators themselves are probably not in a position to either implement
>supported_versions or not in TLS 1.2. If an operator, for whatever reason,
>only has a TLS 1.2 implementation on hand, it presumably predates TLS 1.3 and
>thus does not implement supported_versions. If it implements
>supported_versions, it presumably postdates TLS 1.3 and the operator should
>enable the TLS 1.3 bit.

Not necessarily. Since TLS 1.3 has forked TLS into two protocols, 1.0-1.2 and 1.3 (let's call them TLS family A and TLS family B), there are a large number of users who will be sticking with the TLS A rather than the TLS B family for an indefinite amount of time, years if not decades. Like the HTTP 1.x vs. HTTP 2.x fork, the A family could well be around forever for situations where users don't want to switch over to a second protocol, in the same way that HTTP 1.x will be with us forever. So it makes perfect sense to keep defining new mechanisms for TLS family A alongside TLS family B.

Peter.
___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Question regarding RFC 7366
alex.sch...@gmx.de writes:

>I would really appreciate a response to get some clarification on what the
>intended interpretation is, i.e., when the extension should be used.

There's not really any contradiction: encrypt-then-MAC has nothing to do with AEAD, which is an all-in-one mode, so it doesn't apply to any AEAD cipher.

Peter.
___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
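The distinction is structural: RFC 7366 changes the order of the two separate operations in a MAC-based cipher suite, whereas an AEAD mode performs encryption and authentication as a single primitive, leaving nothing to reorder. A sketch of the encrypt-then-MAC composition, with a toy hash-based keystream standing in for the real block cipher (NOT a secure cipher, purely so the composition is runnable):

```python
import hashlib
import hmac

def xor_stream(key, data):
    """Toy keystream cipher (NOT secure), standing in for e.g. AES-CBC."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def encrypt_then_mac(enc_key, mac_key, plaintext):
    """RFC 7366 ordering: encrypt first, then MAC the ciphertext."""
    ct = xor_stream(enc_key, plaintext)
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag

def decrypt_and_verify(enc_key, mac_key, blob):
    ct, tag = blob[:-32], blob[-32:]
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC")  # rejected before any decryption happens
    return xor_stream(enc_key, ct)

blob = encrypt_then_mac(b"enc-key", b"mac-key", b"application data")
assert decrypt_and_verify(b"enc-key", b"mac-key", blob) == b"application data"
```

Verifying before decrypting is the whole point of the extension: forged or tampered records are rejected without ever touching the decryption path, closing the padding-oracle class of attacks on MAC-then-encrypt CBC suites.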
Re: [TLS] A side meeting on OpenSSL's plans about QUIC
Reposted here (with permission) since I think it's important to get this on the record for discussion on this list. It's always interesting to read about protocol implementation details, especially if others can learn from them.

Peter.

-- Snip --

Please change your mind about your announced release plans
Salz, Rich rsalz at akamai.com
Fri Oct 29 15:13:35 UTC 2021

I think the current plan, to do QUIC over a series of releases, is a mistake; it seems seriously likely to make OpenSSL less relevant. Some reasons follow.

According to https://www.openssl.org/blog/blog/2019/11/07/3.0-update/, the initial target for 3.0 was end of 2019, but the announced release date was moved once to early Q2 2020, then early Q4 2020. It was ultimately released early Q4 2021. That's a slip of nearly two years. Sure, I know COVID affected many things, but the impact on OpenSSL, whose fellows all worked from home offices, should have been slight. Regardless, the focus of this release was on *cryptography*, something which is highly familiar to the team.

With this release, as detailed in https://www.mail-archive.com/openssl-project@openssl.org/msg02585.html, it appears that OpenSSL is trying to do two things: have a more rapid release cadence -- the stated objective is six months -- and implement QUIC over a series of releases. To keep a six-month cycle probably means five months of planning and development. During the 3.0 work, the project hired, and then fired, a project manager to keep things on track. Are you going to hire another? The project has a very bad history of meeting time-bounded deadlines. This is not surprising; software engineers tend to chronically under-estimate the amount of time needed. In addition, an LTS release will be either 3.1 if available by next September, or the current 3.0. Neither alternative is good. The section on QUIC discusses the MVP for that release.
Making an LTS release that is based on a radical restructuring of the record layer, not to mention having a half-baked and useless QUIC release, does not meet the community's needs. It is also exceedingly unlikely to be ready within a year, which means 3.0 will be the next LTS release. This means that many issues found by the community while adopting 3.0 will go unaddressed in LTS, as each will have to pass the "is this a bug-fix" barrier or be approved as an exception. Exceptions are *wrong* for an LTS release. The next LTS release should be just cleanup, fixes, and usability enhancements found during 3.0 deployments.

The level of technical detail in the release requirements is curious. They go into great detail and greatly constrain the OTC. This was not done for FIPS, where the design was led by the OTC and had involvement from the project sponsors. I know that some OTC members are not pleased with this level of detail, and I don't blame them. It might be useful for the OMC to release a rationale. Why is it necessary to "implement QUIC" when instead the directive could have been "support QUIC"? If the API from https://github.com/openssl/openssl/pull/8797 is so distasteful (yes, enum's in function parameters are not OpenSSL style), it would have been better if the project convened leading TLS and QUIC implementors to come up with a new one. That would have shown true leadership, and I know some of those affected by these plans would have supported that.

QUIC is a subtle protocol. Its evolution within the IETF took several years, and the development staff at major implementations includes full-time developers, QA engineers, and documentation writers. Currently OpenSSL has five full-time fellows, and none of them have any recognized experience in implementing transport protocols. Look at the complexity in the BIO code around DNS and handling IPv4/IPv6 address families portably.
While the project currently has DTLS, which is a UDP protocol, QUIC is more like the former than the latter. The MVP does not rise to the level of minimally viable: A single-stream client that can be added to the s_client application without significant API changes. There is no mention of server implementation -- will there be one? It's not even mentioned as a stretch goal for the first release. Is that bifurcation of the community intentional? If it's an oversight, it can be seen as yet another indication that the QUIC plan is not well thought out. There are numerous freely available QUIC implementations. For example, look at https://interop.seemann.io/ which shows 14. It is easy to get an implementation added to that, and yet OpenSSL is only promising to interop against one implementation. Why aren't Google, bundled with Chrome and Android, or Microsoft, bundled with Windows, or Apple, bundled into iOS 15, tested? Perhaps the reason is that only client-side is part of the first release. Again, the Internet doesn't need yet another limited client-side-only QUIC implementation, and the plan as currently stated will force anyone deploying a server to go
Re: [TLS] Adoption call for Deprecating FFDH(E) Ciphersuites in TLS
David Benjamin writes: >RFC7919 tried to solve the problem but, by reusing the old cipher suites, it >fails to solve the problem. It didn't just not solve the problem, it made things worse: 7919 doesn't say "I want to do DHE, if possible with these parameters", it says "I will only accept DHE if you use these parameters, otherwise you cannot use DHE but must drop back to RSA". Because of this and other issues, a discussion on this list in 2019 indicated that no-one was planning to implement it. >We don't have a way to tell the server to only consider DHE ciphers if it >would have used a group the client supports. Why would that be an issue? I know 7919 invents a bunch of reasons why this could be a problem, but in practice you just connect and take what the server gives you. If you don't like it you can always choose not to connect, but it's not like someone is going to rekey or rebuild the server if the client says it doesn't like the DH group it's offering. Given that everyone seems to have a different idea of what is and isn't a problem and what does and doesn't need to be addressed, perhaps we first need to define what we're trying to achieve... Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Adoption call for Deprecating Obsolete Key Exchange Methods in TLS
Viktor Dukhovni writes:

>with confirmation from Peter Gutmann below that any custom groups we're
>likely to encounter are almost certainly safe

Well, I haven't examined every crypto library on the planet, it's not to say there isn't something somewhere that implements its keygen as:

   for i = 0 to 256
      dhprime[i] = rand();

but of the ones I'm aware of, when you ask for DLP parameters you get something appropriate like Sophie Germain primes or FIPS 186 or equivalent, e.g. Lim-Lee parameter generation.

>I don't see a realistic scenario in which sufficiently large ad-hoc server DH
>parameters are a problem.

+1. Also if mentioning specific published values it'd be good to go with 3526 rather than 7919 due to the non-use of 7919 in implementations (unless there are implementations using the 7919 primes while not implementing 7919 itself).

Peter.
___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Adoption call for Deprecating Obsolete Key Exchange Methods in TLS
Viktor Dukhovni writes: >OK, who goes around bothering to actually generate custom DH parameters, and >with what tools, but then does not use a "strong" (Sophie Germain) prime? That's better :-). That was my thought too, every DH/DLP keygen I've seen generates either Sophie Germain or FIPS 186 primes/parameters, so on the off chance that your server feels like generating custom primes you'd need to go out of your way to generate unsuitable ones, i.e. you'd probably need to write custom code specifically for bad prime generation, and if you're going to do that then all bets are off anyway. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Adoption call for Deprecating Obsolete Key Exchange Methods in TLS
Viktor Dukhovni writes: >I strongly doubt there's a non-negligible server population with weak locally >generated groups. Would you care to rephrase that so we can make sure you're saying what we think you're saying in order to disagree with it? Peter :-). ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Adoption call for Deprecating Obsolete Key Exchange Methods in TLS
Scott Fluhrer (sfluhrer) writes: >The problem is that it is hard for the client to distinguish between a well >designed server vs a server that isn't as well written, and selects the DH >group in a naïve way. What should the client do if it could detect this? And how would it distinguish between a server that selects bad DH parameters, a server that uses time() to seed its RNG for prime generation, a server that has a buffer overflow allowing RCE, and a server whose ACLs allow read/write access to any file on the filesystem including its private key(s)? >Now, as I mentioned in the WG meeting, it would be possible to detect if the >server proposes a safe prime (it's not especially cheap, being several times >as expensive as the rest of the DH operations, but it's possible), Or you could use TLS-LTS, which sends { p, q, g } allowing the client to verify certain properties about the primes being used at next to no cost. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Adoption call for Deprecating Obsolete Key Exchange Methods in TLS
Viktor Dukhovni writes: >Can you explain what you mean by "don't do things that you should never have >been doing in the first place"? Just what the draft says, don't use static-ephemeral DH, don't reuse DHE secrets, etc. It seems a bit like publishing an RFC telling people not to stick forks in light sockets, but I guess enough people must have been sticking forks in light sockets to make it necessary. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
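The "ephemeral" in DHE means exactly one secret per handshake, which is the discipline the draft is restating. A minimal sketch of it, using a deliberately tiny toy safe prime (1019 = 2·509 + 1) so the example runs instantly; real groups are 2048 bits or more, e.g. the RFC 3526 MODP groups:

```python
import secrets

# Toy safe prime p = 2q + 1 (q = 509), purely for illustration;
# never use a group this small outside a demo.
P, G = 1019, 2

def handshake_keypair():
    # A fresh secret is drawn for every handshake and never cached or
    # reused across connections -- reuse is exactly what the draft deprecates.
    x = secrets.randbelow(P - 3) + 2      # private exponent in [2, P-2]
    return x, pow(G, x, P)

# One handshake: both sides derive the same shared secret
a_priv, a_pub = handshake_keypair()
b_priv, b_pub = handshake_keypair()
assert pow(b_pub, a_priv, P) == pow(a_pub, b_priv, P)
```

Static-ephemeral DH is the same arithmetic with one side's x held fixed across connections, which is what turns a single key compromise into a compromise of every past session.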
Re: [TLS] Adoption call for Deprecating Obsolete Key Exchange Methods in TLS
Viktor Dukhovni writes: >The only other alternative is to define brand new TLS 1.2 FFDHE cipher code >points that use negotiated groups from the group list. But it is far from >clear that this is worth doing given that we now have ECDHE, X25519 and X448. There's still an awful lot of SCADA gear that does FFDHE, and that's never going to change from that. The current draft as it stands is fine, in fact it seems kinda redundant since all it's saying is "don't do things that you should never have been doing in the first place", but I assume someone needs to explicitly say that. No need to go beyond that. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Possible TLS 1.3 erratum
Ryan Sleevi writes:

>For example, the NSS library used by Mozilla has well exceeded a thousand
>lines of code so far.

Is that purely to parse PSS in X.509, or for the overall PSS implementation? I know PSS is a dog's breakfast of arbitrary parameters and values, but I'm a bit suspicious of that line count just to process the 'Parameters' structure, particularly since it's shared with OAEP so already present if you support OAEP. Or are you supporting every single possible corner case and weird option rather than just { SHA-2, MGF1, SHA-2, $SHA2-blocksize }?

>For browsers like Chrome, used by over a billion users, there are indeed
>practical concerns regarding the separation between the TLS layers and the
>certificate layer. The "classic" late-90s view of "one library to do it all"
>(TLS and PKI) is actually not that common in industry, at least not those
>being used "at internet scale".

And that's exactly the point I'm making: the standard currently encodes low-level internal details of the PKI implementation into the TLS implementation. Unless you're using one library to do it all, the TLS layer has no idea what OID the PKI layer is using to identify an RSA key in a certificate, and so it has no idea whether it should be saying rsa_pss_rsae or rsa_pss_pss because the PKI layer just presents a certificate with an RSA key.

All of this is currently hidden by the fact that you can't get PSS-OID certs from any public CA that I know of, so everyone can just hardcode rsa_pss_rsae everywhere and ignore the issue, but at some point some CA may accidentally issue a PSS-OID cert and then who knows what'll happen. For every single TLS implementation out there.

Peter.
___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
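The coupling being objected to can be stated in a few lines: to emit the right TLS 1.3 SignatureScheme, the TLS layer must see the AlgorithmIdentifier OID inside the certificate's SubjectPublicKeyInfo, a detail many PKI APIs deliberately hide. A sketch of that mapping; the function name and hash parameter are illustrative, the OIDs are the standard PKCS #1 / RFC 4055 values:

```python
# OIDs from PKCS #1 / RFC 4055
RSA_ENCRYPTION = "1.2.840.113549.1.1.1"    # rsaEncryption
ID_RSASSA_PSS  = "1.2.840.113549.1.1.10"   # id-RSASSA-PSS

def pss_scheme_for(spki_oid, hash_name="sha256"):
    """Pick the TLS 1.3 SignatureScheme name from the cert's SPKI OID."""
    if spki_oid == RSA_ENCRYPTION:
        return "rsa_pss_rsae_" + hash_name   # RSA key under the classic OID
    if spki_oid == ID_RSASSA_PSS:
        return "rsa_pss_pss_" + hash_name    # RSA key under the PSS OID
    raise ValueError("not an RSA subject public key")

assert pss_scheme_for(RSA_ENCRYPTION) == "rsa_pss_rsae_sha256"
assert pss_scheme_for(ID_RSASSA_PSS, "sha384") == "rsa_pss_pss_sha384"
```

The signature bytes produced are identical either way; only this lookup table differs, which is the basis of the argument that the distinction doesn't belong at the TLS layer.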
Re: [TLS] Possible TLS 1.3 erratum
Hubert Kario writes: >I suggest you go back to the RFCs and check exactly what is needed for proper >handling of RSA-PSS Subject Public Key type in X.509. Specifically when the >"parameters" field is present. Looking at the code I'm using, it's four lines of extra code for PSS when reading sigs and four lines extra when writing (OK, technically seven if you include the "if" statement and curly braces lines). >You definitely won't be able to implement it in just "few lines". See above. >2. "What certificates can peer accept" is totally within the purview of TLS. Two things, firstly those values are used in two different extensions, only one of which covers signatures in certificates, and secondly, what happens if the client says rsa_pss_pss and the server only has an rsa_pss_rsae certificate? Does the server admin rush out and buy an rsa_pss_pss certificate (or, at the moment, found a new public CA that will issue them an rsa_pss_pss certificate) just to keep the client happy? Or are they expected to go out and buy two lots of every certificate that differ only in the RSA OID used just to play along with the client's request? This is exactly the reason why CMS rejected special-case handling for this stuff, because it doesn't make any sense. And now TLS is doing the exact thing that CMS rejected for not making any sense. Since the next step in the exchange will be to send the client the certificate, the only thing it could potentially do is save a single round trip when the handshake is rejected by the server for lack of an appropriately-OIDed cert rather than the client when it gets said cert. Peter. ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
Re: [TLS] Possible TLS 1.3 erratum
Hubert Kario writes:

>It only doesn't matter if you don't want to verify the certificate...
>
>It's one thing to be able to be able to verify an RSA-PSS signature on TLS
>level, it's entirely another to be able to properly handle all the different
>RSA-PSS limitations when using it in SPKI in X.509.

Is there anything that's jumped through all the hoops to implement the complex
mess that is PSS but then not added the few lines of code you need to verify
it in certificates? And if so, why?

In any case it's still encoding a minor implementation artefact of the
certificate library being used into the TLS protocol, where it has absolutely
no place. You either do PSS or you don't, and the TLS layer doesn't need to
know what magic number you use to identify it in certificates. More to the
point, for a number of certificate libraries there's no way for the TLS layer
to know what magic number is used, because it's a minor implementation detail
that isn't exposed in the API.

Peter.
Re: [TLS] Possible TLS 1.3 erratum
Ilari Liusvaara writes:

>Actually, I think this is quite messy issue:

It certainly is.

>Signature schemes 0x0403, 0x0503 and 0x0603 alias signature algoritm 3 hash
>4, 5 and 6. However, those two things are not the same, because the former
>have curve restriction, but the latter do not.

That and the 25519/448 values are definitely the weirdest of the lot. In
particular the value 0x03 means P256 when used with SHA256, P384 when used
with SHA384, and P521 when used with SHA512.

>So one algorithm one could use is:
>
>- Handle anything with signature 0-3/224-255 and hash 0-6/224-255 as
>  signature/hash pair.
>- Display schemes 0x0840 and 0x0841 specially.
>- Handle anything else as signature scheme.

I think an easier way to handle it, meaning one with fewer special cases, is
for a TLS 1.2 implementation to treat the values defined in 5246 as { hash,
signature } pairs and for TLS 1.3 and newer implementations to treat all
values as 16-bit cipher suites, combined with a reworking of the definitions,
e.g. to define the "ed25519" suite in terms of the curve and hash algorithm,
not just "Ed25519 and you're supposed to know the rest".

>The reason is that some TLS implementations have very hard time supporting
>RSA-PSS certificates.

But why should the TLS layer care about what OID is used to represent an RSA
key in a certificate? The signature at the TLS level is either a PSS signature
or it isn't, it doesn't matter which OID is used in the certificate that
carries the key. More to the point, the TLS layer may have no way to determine
which OID is used in the certificate, it's either an RSA key or not, not "it's
an RSA key with OID A" or "it's an RSA key with OID B".

So I think for bis the text should rename rsa_pss_rsae_xxx to just rsa_pss_xxx
and drop rsa_pss_pss_xxx, which I assume has never been used anyway because I
don't know of any public CA that'll issue a certificate with a PSS OID.

Peter.
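The "treat everything as a 16-bit suite" approach can be sketched roughly like this; the code-point-to-algorithm bindings are from RFC 8446 and RFC 5246, but the table is abbreviated and the tuple layout is purely illustrative, not any real library's representation.

```python
# TLS 1.3: each value is one opaque identifier that fixes signature algorithm,
# curve (where applicable) and hash together (RFC 8446, section 4.2.3).
SIG_SCHEMES = {
    0x0403: ("ecdsa", "secp256r1", "sha256"),    # curve bound to the value
    0x0503: ("ecdsa", "secp384r1", "sha384"),
    0x0603: ("ecdsa", "secp521r1", "sha512"),
    0x0804: ("rsa_pss_rsae", None, "sha256"),
    0x0807: ("ed25519", "curve25519", "sha512"), # hash implied by the scheme
}

def decode_scheme(value, tls13=True):
    if tls13:
        return SIG_SCHEMES.get(value)   # opaque 16-bit suite, no splitting
    # TLS 1.2 (RFC 5246): first octet is the hash ID, second the signature ID
    return (value >> 8, value & 0xFF)
```

Under this reading the 0x0403-aliases-ecdsa/sha256 problem disappears: the same bytes are either a 1.2 pair or a 1.3 suite depending on the negotiated version, never both.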
Re: [TLS] Possible TLS 1.3 erratum
Eric Rescorla writes:

>As we are currently working on a 8446-bis, the best thing to do would be to
>file a PR at: https://github.com/tlswg/tls13-spec

Before I do that I thought I'd get some input on what to say. There are
actually two issues, the first being the one I mentioned. I was thinking
something like:

  TLS 1.2 defined the entries in the "extension_data" as two eight-bit values
  constituting a { hash, signature } pair. TLS 1.3 changes the definition to
  be a single 16-bit value constituting a cipher suite that encodes both the
  signature and hash algorithm into a single value. Although some of the TLS
  1.3 values, in particular the rsa_pss_rsae_xxx ones, appear to follow the
  TLS 1.2 format, implementations SHOULD NOT treat them as { hash, signature }
  pairs but as a single cipher suite identifier.

The second one is the fact that there are two different cipher suites for
RSA-PSS, rsa_pss_rsae_xxx and rsa_pss_pss_xxx, with conditions for use that
are stated in a somewhat backwards form, "If the public key is carried in an
X.509 certificate, it MUST use the RSASSA-PSS OID". Since the only reason
these exist AFAICT is to deal with rsaEncryption vs. RSA-PSS certs, it should
really be stated as something like "the RSA-PSS code point used depends on how
the key is carried in an X.509 certificate. If the certificate OID is
rsaEncryption then the rsa_pss_rsae_xxx form MUST be used. If the certificate
OID is RSASSA-PSS then the rsa_pss_pss_xxx form MUST be used".

And then add some explanation for why this is so and what'll go wrong if you
use the other one, since I can't see any reason why you can't just use
rsa_pss_rsae_xxx or rsa_pss_pss_xxx for everything. What vulnerability is
this mitigating?

Peter.
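The selection rule in the proposed rewording can be sketched in a few lines; the OIDs are from RFC 8017 and the code points from RFC 8446, while the function itself is hypothetical, and of course it only works if the TLS layer can see the SPKI OID at all, which is the underlying complaint.

```python
# Pick the rsa_pss_* SignatureScheme code point from the OID in the
# certificate's SubjectPublicKeyInfo.  OIDs per RFC 8017, code points per
# RFC 8446; the function is illustrative only.
RSA_ENCRYPTION = "1.2.840.113549.1.1.1"    # rsaEncryption
RSASSA_PSS     = "1.2.840.113549.1.1.10"   # id-RSASSA-PSS

RSAE_CODE_POINTS = {"sha256": 0x0804, "sha384": 0x0805, "sha512": 0x0806}
PSS_CODE_POINTS  = {"sha256": 0x0809, "sha384": 0x080A, "sha512": 0x080B}

def pss_code_point(spki_oid, hash_name="sha256"):
    if spki_oid == RSA_ENCRYPTION:
        return RSAE_CODE_POINTS[hash_name]   # rsa_pss_rsae_*
    if spki_oid == RSASSA_PSS:
        return PSS_CODE_POINTS[hash_name]    # rsa_pss_pss_*
    raise ValueError("not an RSA certificate")
```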
[TLS] Possible TLS 1.3 erratum
I've got some code that dumps TLS diagnostic info and realised it was
displaying garbage values for some signature_algorithms entries. Section 4.2.3
of the RFC says:

  In TLS 1.2, the extension contained hash/signature pairs. The pairs are
  encoded in two octets, so SignatureScheme values have been allocated to
  align with TLS 1.2's encoding.

However, they don't align with TLS 1.2's encoding (apart from being 16-bit
values); the values are encoded backwards compared to TLS 1.2, so where 1.2
uses { hash, sig } 1.3 uses values equivalent to { sig, hash }. In particular,
to decode them you need to know whether you're looking at a 1.2 value or a 1.3
value, and a 1.2-compliant decoder that's looking at what it thinks are
{ hash, sig } pairs will get very confused.

Should I submit an erratum changing the above text to point out that the
encoding is incompatible and signature_algorithms needs to be decoded
differently depending on whether it's coming from a 1.2 or 1.3 client? At the
moment the text is misleading since it implies that it's possible to process
the extension with a 1.2-compliant decoder when in fact all the 1.3 ones
can't be decoded like that.

Peter.
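The confusion is easy to demonstrate: feed TLS 1.3 SignatureScheme values to a decoder that follows the RFC 5246 { hash, signature } reading. The lookup tables below are the RFC 5246 registries; the decoder function is a hypothetical sketch of what a 1.2-era diagnostic dumper does.

```python
# RFC 5246 HashAlgorithm and SignatureAlgorithm registries.
TLS12_HASH = {1: "md5", 2: "sha1", 3: "sha224",
              4: "sha256", 5: "sha384", 6: "sha512"}
TLS12_SIG = {1: "rsa", 2: "dsa", 3: "ecdsa"}

def decode_as_tls12(value):
    """Interpret a 16-bit signature_algorithms entry as { hash, sig }."""
    hash_id, sig_id = value >> 8, value & 0xFF
    return TLS12_HASH.get(hash_id), TLS12_SIG.get(sig_id)

# A genuine TLS 1.2 value decodes cleanly, a TLS 1.3-only one doesn't:
# 0x0601 (sha512 + rsa) gives ('sha512', 'rsa'), but 0x0807 (ed25519 in
# TLS 1.3) gives (None, None) because hash 8 and sig 7 don't exist in 5246.
```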
Re: [TLS] draft-les-white-tls-preferred-pronouns
Lester White writes:

> Title : Preferred Pronouns Extension for TLS

I think the Security Considerations section needs to mention two security
considerations: firstly, the preferred identity is an unauthenticated
parameter whose validity isn't definitely determined until the Finished
message, so it shouldn't be used until after the other side's Finished message
is received, and secondly, the odd-numbered (presumably "prime" is meant)
number of experts needs to be at least 1024 bits worth, and a strong prime,
i.e. p-1 and p+1 should have large prime factors. The suggested value of 11
doesn't meet these criteria, so should be changed for a value that does.

Peter.
Re: [TLS] Regarding draft-bartle-tls-deprecate-ffdhe
Brian Smith writes:

>It would be useful for the browser vendors that recently dropped FFDHE
>support to share their measurements for how much RSA key exchange usage
>increased after their changes. That would help us quantify the real-world
>impact of this change.

... in mice.

Peter.
Re: [TLS] Regarding draft-bartle-tls-deprecate-ffdhe
David Benjamin writes:

>[*] From the conclusion of the paper: "The most straightforward mitigation
>against the attack is to remove support for TLS-DH(E) entirely, as most major
>client implementations have already stopped supporting them"

Just as you need to automatically add "in mice" to the end of any announcement
of a new medical result, so you also need to add "on the web" to the end of
any pronouncement about TLS. This only applies to the special bubble of the
web. For other situations, the effect of banning DHE will be to force everyone
to move to RSA. I've already seen this in payments-processing applications:
instead of using the secure-unless-you-implement-it-really-badly DHE they use
the almost-impossible-to-do-securely RSA, because someone has told them not to
use DHE. So a blanket ban on DHE will lead to a net loss in security.

Peter.
Re: [TLS] Question to TLS 1.3 and certificate revocation checks in long lasting connections
Benjamin Kaduk writes:

>Just to confirm: the scenario you're using to contrast to the one described
>by Viktor (and Nico) is a scenarios in which the certificates expire at
>"never" (1231235959Z)?

Not that "never", since it would break a lot of things, but some time far
enough in the future that you don't have to worry about it. 14 January 2038
was one I've seen used, but that was at a point when 2038 was still 20+ years
away and the equipment might have been expected to be EOL'd by then. Not sure
what's being used now that the time to Y2038 is a lot less than the lifetime
of the equipment.

Peter.
Re: [TLS] Question to TLS 1.3 and certificate revocation checks in long lasting connections
Viktor Dukhovni writes:

>But if the signal is not ignored, and proper automation is applied,
>reliability actually improves.

No, it drops. You're going from a situation where you've eliminated any chance
of outages due to expired certs to one where you get to play Russian roulette
every single day: will the cert renewal work, or will it fail for some reason?
Let's spin the cylinder and see if this is the day our grid goes down.

Even if you somehow create a 100% successful magical process for replacing
certs that never ever fails, you've now introduced another possible failure
situation: with certs changing constantly there's a chance that something that
relies on them can't handle a new cert, for example because an issuing CA has
renewed as well or something similar. It's just building in more and more
opportunities for failure from a mechanism that's supposed to be making your
infrastructure more robust, not less.

Peter.
Re: [TLS] Question to TLS 1.3 and certificate revocation checks in long lasting connections
Nico Williams writes:

>When expirations are short, you will not forget to renew. That's part of the
>point of short-lived certificates.

So instead of getting one chance a year for your control system to break
itself if the renewal fails, you get hundreds of them?

Peter.
Re: [TLS] Question to TLS 1.3 and certificate revocation checks in long lasting connections
Nico Williams writes:

>I've seen 5 day server certificates in use.

For IEC-62351 work you're far more likely to see certificates issued with an
expiry date of never, because the last thing you want is your power grid to be
taken offline due to a cert someone forgot to renew. In terms of CRL updates
the situation is similar: the spec may say you need to check once every X time
interval, but in practice you "forget" to perform the check in case it takes
your grid offline. Or set a flag saying "cert revoked" and continue anyway,
I've seen both. The 24-hour thing sounds like someone's checkbox requirement
rather than anything practically useful, or usable.

Peter.
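For reference, "an expiry date of never" has a concrete on-the-wire form: RFC 5280 (section 4.1.2.5) says a certificate with no well-defined expiration date should use the GeneralizedTime notAfter value 99991231235959Z. A trivial sketch of that value using only the Python standard library:

```python
from datetime import datetime, timezone

# RFC 5280's "no well-defined expiration date" sentinel for notAfter,
# rendered in GeneralizedTime format (YYYYMMDDHHMMSSZ).
NO_WELL_DEFINED_EXPIRY = datetime(9999, 12, 31, 23, 59, 59,
                                  tzinfo=timezone.utc)
print(NO_WELL_DEFINED_EXPIRY.strftime("%Y%m%d%H%M%SZ"))  # 99991231235959Z
```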
Re: [TLS] EXTERNAL: TLS 1.3 Authentication and Integrity only Cipher Suites
Ben Schwartz writes:

>If you are updating the text, I would recommend removing the claim about
>performance. In general, the ciphersuites specified in the text are likely
>to be slower than popular AEAD ciphersuites like AES-GCM.

Uhh... when is AES-GCM faster than SHA2, except on systems with hardware
support for AES-GCM and no hardware support for SHA2?

Peter.
Re: [TLS] [Emu] Fwd: Benjamin Kaduk's Discuss on draft-ietf-emu-eap-tls13-13: (with DISCUSS and COMMENT)
Alan DeKok writes:

>OpenSSL has a feature SSL_MODE_AUTO_RETRY which makes it process TLS messages
>*after* the Finished message. i.e. the Session Ticket, etc. When an
>application calls SSL_Read(), all of the TLS data is processed, instead of
>just the "TLS finished" message. They've made this the default, because most
>applications get it wrong.

Asking as the author of a TLS library that has always done this: why would you
stop immediately after the Finished and leave metadata messages sitting unread
in the input stream? Was it just some arbitrary implementation decision, or is
there a technical reason for it?

Peter.