Re: [TLS] ESNIKeys over complex
On Wed, 21 Nov 2018, Stephen Farrell wrote: We currently permit >1 RR, but actually I suspect that it would be better to try to restrict this. Not sure we can and I suspect that'd raise DNS-folks' hackles, but maybe I'm wrong. I think the SOA record is the only exception allowed (and there is an exception to that when doing AXFR I believe) Usually these things are defined as "pick the first DNS RRTYPE that satisfies you". - get rid of not_before/not_after - I don't believe those are useful given TTLs and they'll just lead to failures I'm mostly ambivalent on this, but on balance, I think these are useful, as they are not tied to potentially fragile DNS TTLs. If there were a justification offered for 'em I'd be ok with it, but TBH, I'm not seeing it. And my main experience of the similar dates on RRSIGs are that they just break things and don't help. This has a totally different expiry behavior from RRSIGs, so I'm not sure that's that useful an analogy. Disagree. They're both specifying a time window for DNS data. Same problems will arise is my bet. You mean the problem of not being able to replay old data? :) My main ask though for these time values is that their presence be explicitly justified. That's missing now, and I suspect won't be convincing, but maybe I'll turn out to be wrong. Note that TTLs are about the caching of data and nothing else. If the content of your record requires some specific time of death, you cannot rely on TTL. Note that a TTL on a received RRTYPE can have any value under the published TTL on the authoritative server if it was flowing through caching recursive servers. So you cannot use a TTL for some other kind of expiry value for another protocol. Also, DNS software sometimes enforces maximum and minimum TTL values, so again, do not use DNS TTL for other protocol timing parameters. 
Although, if I am correct, the expectation is that all of this data will be used without mandating DNSSEC validation, so all these security parameters could be modified by any DNS party in transit to try and break the protocol or privacy of the user. The "not DNSSEC camel" is growing fast. Paul ___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls
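Paul's point about TTLs can be illustrated with a small sketch (hypothetical helper, not from any DNS library): a caching recursive resolver hands out the *remaining* TTL, possibly clamped by local policy, so the value a client observes says nothing reliable about when the record's content should stop being used.

```python
def observed_ttl(authoritative_ttl, seconds_in_cache, min_ttl=0, max_ttl=86400):
    """TTL as seen by a client behind a caching recursive resolver.

    The resolver decrements the TTL while the record sits in its cache,
    and local policy may clamp it -- so the client-observed TTL is not
    the authoritative one and cannot serve as a protocol expiry time.
    """
    remaining = max(authoritative_ttl - seconds_in_cache, 0)
    return min(max(remaining, min_ttl), max_ttl)

# Published TTL of 3600s, record cached for 3500s: client sees 100s.
print(observed_ttl(3600, 3500))               # 100
# A resolver enforcing a 300s minimum reports 300s instead.
print(observed_ttl(3600, 3500, min_ttl=300))  # 300
```

Two clients behind different resolvers can thus see arbitrarily different TTLs for the same record, which is exactly why TTL cannot double as a not_after field.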
Re: [TLS] ESNIKeys over complex
On Tue, Nov 20, 2018 at 7:40 PM Salz, Rich wrote: > >- No, I don't think so. The server might choose to not support one of >the TLS 1.3 ciphers, for instance. And even if that weren't true, how would >we add new ciphers? > > > > Standard TLS negotiation. I don’t see that we need to specify ciphers at > the DNS layer. A client with new ciphers will add it in the hello message > and the server will pick one it supports. It seems complex and fragile > (keeping the server cipher config, not just the fronted hosts, in sync with > DNS). > I'm sorry, I'm not quite following. In this draft, ESNI ciphers are orthogonal to the ciphers used to encrypt the TLS records. This is perhaps easier to see in a split configuration, where (for instance) the client-facing server might support only AES-128-GCM and the back-end server might support only ChaCha/Poly1305. As you say, the negotiation works well for the TLS records, but that doesn't influence the ESNI encryption cipher suite selection (because that happens before the Hello exchange). So, if we don't provide a list of the ESNI ciphers in the ESNIKeys record, then we are effectively creating a fixed list. Am I missing something here? -Ekr
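The orthogonality ekr describes can be sketched as follows (a minimal illustration with made-up function and suite names, not the draft's actual wire logic): the ESNI suite is chosen from the list published in the ESNIKeys record, before any ClientHello/ServerHello negotiation, so the record-layer negotiation never gets a chance to influence it.

```python
def pick_esni_suite(esnikeys_suites, client_suites):
    """Pick the ESNI encryption suite before any TLS negotiation happens.

    The choice comes from the ESNIKeys record published by the
    client-facing server, not from the cipher suites later negotiated
    for the TLS records -- the two lists are independent.
    """
    for suite in esnikeys_suites:        # record order = publisher preference
        if suite in client_suites:
            return suite
    return None                          # no overlap: cannot encrypt the SNI

# Client-facing server publishes only AES-128-GCM for ESNI; whatever the
# back-end negotiates for the records is irrelevant to this selection.
print(pick_esni_suite(["TLS_AES_128_GCM_SHA256"],
                      ["TLS_CHACHA20_POLY1305_SHA256", "TLS_AES_128_GCM_SHA256"]))
```

Without a suite list in the record, the loop above would have nothing to iterate over, which is the "fixed list" problem ekr raises.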
Re: [TLS] ESNIKeys over complex
* No, I don't think so. The server might choose to not support one of the TLS 1.3 ciphers, for instance. And even if that weren't true, how would we add new ciphers? Standard TLS negotiation. I don’t see that we need to specify ciphers at the DNS layer. A client with new ciphers will add it in the hello message and the server will pick one it supports. It seems complex and fragile (keeping the server cipher config, not just the fronted hosts, in sync with DNS).
Re: [TLS] ESNIKeys over complex
On Tue, Nov 20, 2018 at 6:04 PM Salz, Rich wrote: > >Sure a list of ciphersuites isn't bad. But the current > design has a set of keys and a set of ciphersuites and a > set of extensions and a set of Rdata values in the RRset. > > Since this is defined for TLS 1.3 with all known-good ciphers, can't that > field be eliminated? > No, I don't think so. The server might choose to not support one of the TLS 1.3 ciphers, for instance. And even if that weren't true, how would we add new ciphers? -Ekr > >I'd bet a beer on such complexity being a source of bugs > every time. > > All sorts of aphorisms come to mind. :) > > > This has a totally different expiry behavior from RRSIGs, so I'm > > not sure that's that useful an analogy. > > Disagree. They're both specifying a time window for DNS data. > Same problems will arise is my bet. > > I am inclined to agree.
Re: [TLS] ESNIKeys over complex
>Sure a list of ciphersuites isn't bad. But the current design has a set of keys and a set of ciphersuites and a set of extensions and a set of Rdata values in the RRset. Since this is defined for TLS 1.3 with all known-good ciphers, can't that field be eliminated? >I'd bet a beer on such complexity being a source of bugs every time. All sorts of aphorisms come to mind. :) > This has a totally different expiry behavior from RRSIGs, so I'm > not sure that's that useful an analogy. Disagree. They're both specifying a time window for DNS data. Same problems will arise is my bet. I am inclined to agree.
Re: [TLS] ESNIKeys over complex
(Trimming bits down...) On 21/11/2018 00:59, Eric Rescorla wrote: > On Tue, Nov 20, 2018 at 4:36 PM Stephen Farrell >> Aren't DNS answers RRsets? I may be wrong but I thought DNS >> clients have to handle that anyway, > > > Not really, because any of them is co-valid. Sure, in DNS terms. > We currently permit >1 RR, but > actually > I suspect that it would be better to try to restrict this. Not sure we can and I suspect that'd raise DNS-folks' hackles, but maybe I'm wrong. >> That said, >1 ciphersuite wouldn't be so bad if that were >> the only list per RData instance. Or maybe one could get >> rid of it entirely via some conditional link to the set >> of suites in the CH, not sure. (Or just go fully experimental >> and say everyone doing esni for now has to use the same >> suite all the time.) >> > > I've implemented this and did not find it to be a major obstacle. > I do not think unnecessary duplication is a good tradeoff for > such a trivial implementation complexity reduction. Sure a list of ciphersuites isn't bad. But the current design has a set of keys and a set of ciphersuites and a set of extensions and a set of Rdata values in the RRset. Surely we can collapse at least most of those down to one list without too much of a problem. And as to trivial, I'd bet a beer on such complexity being a source of bugs every time. > I don't see any advantage to choosing a suboptimal design, just > based on it being Experimental. All designs are suboptimal for someone:-) >>> - get rid of not_before/not_after - I don't believe those are useful given TTLs and they'll just lead to failures >>> >>> I'm mostly ambivalent on this, but on balance, I think these are useful, >>> as they are not tied to potentially fragile DNS TTLs. >> >> If there were a justification offered for 'em I'd be >> ok with it, but TBH, I'm not seeing it. And my main >> experience of the similar dates on RRSIGs are that they >> just break things and don't help.
> > > This has a totally different expiry behavior from RRSIGs, so I'm > not sure that's that useful an analogy. Disagree. They're both specifying a time window for DNS data. Same problems will arise is my bet. My main ask though for these time values is that their presence be explicitly justified. That's missing now, and I suspect won't be convincing, but maybe I'll turn out to be wrong. >> Put another way, I >> don't know what sensible code to write to decide between >> not connecting or sending SNI in clear if one of these >> dates is out of whack. (I'd be tempted to just ignore the >> time constraints and try to send the SNI encrypted instead.) >> > > You should connect with SNI in the clear. As a generic browser, I guess so. As some specialised privacy sensitive application, not sure. As a library that could be used for either, I'm not clear there's a good answer other than ignoring the artificial time window and encrypting anyway. > And having to deploy a cron job equivalent for the DNS >> data is an order of magnitude harder than not. >> > > Nothing stops you having an infinite expiry. Nothing stops us deleting the useless dates:-) > They will also use different keys for x and y, so they will > have different records and can have different pad lengths. All going well, yes. All not going well, the pad lengths may get out of whack, exposing names. > How about rounding up to the nearest power of 2 that's >> bigger than 5? (Or some such.) > > I don't know what this means. Ah sorry. I meant just take the length of the server name and pad to the shortest of 32, 64, 128 or 256 octets. (Or some other breakpoints.) I'm sure we could do some measurement so that an acceptable fraction of names fit in the shortest bucket. (Didn't DKG do work on that for DNS padding?)
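Stephen's bucket scheme is simple enough to sketch directly (the breakpoints 32/64/128/256 are his illustrative values, not anything specified by the draft):

```python
def bucket_pad_length(name_len, buckets=(32, 64, 128, 256)):
    """Round a server-name length up to the next fixed bucket size.

    Hiding the exact name length behind a handful of buckets keeps the
    ClientHello small for short names while still grouping names into
    anonymity sets. Bucket breakpoints here are illustrative.
    """
    for b in buckets:
        if name_len <= b:
            return b
    raise ValueError("name longer than largest padding bucket")

print(bucket_pad_length(len("example.com")))  # 32
print(bucket_pad_length(100))                 # 128
```

The measurement question then reduces to picking breakpoints so that most real-world names land in the smallest bucket.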
>> (As a >> nasty hack, you could even derive the padded_length >> from the value of the key_share and fronters could just >> keep generating shares until they get one that works:-) > > I thought you were complaining about complexity That's not complex, it's just waay hacky:-) It'd actually be simpler for the client to just take some (e.g. low order) bits of the key share as the padded_length. More work for the people generating the key share yes, (they need to keep iterating until they find a key share that works for their preferred padding_length) but that's easy, offline, done by fewer folks and removes a way of screwing up the ops. All-in-all, while it's too hacky it's not complex at all. Cheers, S. > > -Ekr > > >>> - I'm not convinced the checksum is useful, but it's not hard to handle - (Possibly) drop the base64 encoding, make it DNS operator friendly text (or else binary with a zonefile text format defined in addition) >>> >>> We are likely to drop the base64 encoding. >> >> Ack. >> >> And just to note again - I suspect a bunch of the above would >> be better sorted out as ancillary changes once a multi-CDN >> proposal is figured out.
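The "grind the key share" hack above can be sketched too. Everything here is hypothetical: `os.urandom` stands in for real X25519 key-share generation, and the low-byte encoding of padded_length is invented purely to show the shape of the idea.

```python
import os

def padded_length_from_share(key_share: bytes) -> int:
    # Hypothetical encoding: the low byte of the share, rounded up to a
    # multiple of 32, selects the padded_length (capped at 256).
    return min(((key_share[-1] // 32) + 1) * 32, 256)

def grind_key_share(target_pad: int, gen=lambda: os.urandom(32)) -> bytes:
    """Keep generating (stand-in) key shares until the low-order bits
    encode the publisher's preferred padded_length. Offline, done once
    by the fronter -- hacky but cheap (expected ~8 tries here)."""
    while True:
        share = gen()
        if padded_length_from_share(share) == target_pad:
            return share

share = grind_key_share(128)
assert padded_length_from_share(share) == 128
```

With 8 buckets the grinding costs about 8 key generations on average, which supports Stephen's "easy, offline, done by fewer folks" point, while the client side stays a one-line decode.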
Re: [TLS] ESNIKeys over complex
On Tue, Nov 20, 2018 at 4:36 PM Stephen Farrell wrote: > > Hiya, > > On 20/11/2018 23:30, Eric Rescorla wrote: > > On Tue, Nov 20, 2018 at 1:46 PM Stephen Farrell < > stephen.farr...@cs.tcd.ie> > > wrote: > > > >> > >> Hiya, > >> > >> I've started to try code up an openssl version of this. [1] > >> (Don't be scared though, it'll likely be taken over by a > >> student in the new year:-) > >> > > > > Thanks for your comments. Responses below. > > Ditto. > > > >> From doing that I think the ESNIKeys structure is too > >> complicated and could do with a bunch of changes. The ones > >> I'd argue for would be: > >> > >> - use a new RR, not TXT > >> > > > > This is likely to happen. > > > > - have values of ESNIKey each only encode a single option > >> (so no lists at all) since >1 value needs to be supported > >> at the DNS level anyway > >>- that'd mean exactly one ciphersuite > >>- exactly one key share > >> > > > > I don't agree with this. It is going to lead to a lot of redundancy > because > > many > > servers will support >1 cipher suite with the same key share. Moreover, > from > > an implementation perspective, supporting >1 RR would be quite a bit more > > work. > > Aren't DNS answers RRsets? I may be wrong but I thought DNS > clients have to handle that anyway, Not really, because any of them is co-valid. We currently permit >1 RR, but actually I suspect that it would be better to try to restrict this. This would be especially true if we get rid of expiry, because then there would be no good reason to have >1 ESNIKeys record with a given version. > and I'd expect use of RRsets to be a part of figuring out a multi-CDN story. That's not at all obvious. > That said, >1 ciphersuite wouldn't be so bad if that were > the only list per RData instance. Or maybe one could get > rid of it entirely via some conditional link to the set > of suites in the CH, not sure.
(Or just go fully experimental > and say everyone doing esni for now has to use the same > suite all the time.) > I've implemented this and did not find it to be a major obstacle. I do not think unnecessary duplication is a good tradeoff for such a trivial implementation complexity reduction. >- no extensions (make an even newer RR or version-bump:-) > >> > > > > Again, not a fan of this. It leads to redundancy. > > That's reasonable. OTOH, it's equally reasonable to say that > we're dealing with an experimental draft and a future PS > version could use another RRtype and add extensions if they > end up needed. > I don't see any advantage to choosing a suboptimal design, just based on it being Experimental. Incidentally, there seems to be some uncertainty about the status. I'm not quite sure why this is marked Experimental (I think it may have just been a thinko on my part), and the chairs didn't ask at call for acceptance time, so I'd encourage them to sort that out. > > > > > > - get rid of not_before/not_after - I don't believe those > >> are useful given TTLs and they'll just lead to failures > >> > > > > I'm mostly ambivalent on this, but on balance, I think these are useful, > > as they are not tied to potentially fragile DNS TTLs. > > If there were a justification offered for 'em I'd be > ok with it, but TBH, I'm not seeing it. And my main > experience of the similar dates on RRSIGs are that they > just break things and don't help. This has a totally different expiry behavior from RRSIGs, so I'm not sure that's that useful an analogy. > Put another way, I > don't know what sensible code to write to decide between > not connecting or sending SNI in clear if one of these > dates is out of whack. (I'd be tempted to just ignore the > time constraints and try to send the SNI encrypted instead.) > You should connect with SNI in the clear. And having to deploy a cron job equivalent for the DNS > data is an order of magnitude harder than not.
> Nothing stops you having an infinite expiry. > > > > > - get rid of padded_length - just say everyone must >> always use the max (260?) - > > > > > I'm not in favor of this. The CH is big enough as it is, and this has a > pretty big impact on that, especially for QUIC. There are plenty of > scenarios where the upper limit is known and << 160. > > True, big CH's are a bit naff, but my (perhaps wrong) > assumption was that nobody cared since the F5 bug. This has nothing to do with the F5 bug. It's about not exceeding one packet in the QUIC CH. It > seems a bit wrong though to have every domain that's > behind the same front have to publish this. They also have to publish the key, so I don't really see a problem. I'm also > not sure it'll work well if we ever end up with cases > where domains A and B both use fronts/CDNs x and y and > can't figure out a good value as x prefers 132 and y > prefers 260. > They will also use different keys for x and y, so they will have different records and can have different pad lengths. How about rounding up to
Re: [TLS] ESNIKeys over complex
Hiya, On 20/11/2018 23:30, Eric Rescorla wrote: > On Tue, Nov 20, 2018 at 1:46 PM Stephen Farrell > wrote: > >> >> Hiya, >> >> I've started to try code up an openssl version of this. [1] >> (Don't be scared though, it'll likely be taken over by a >> student in the new year:-) >> > > Thanks for your comments. Responses below. Ditto. > >> From doing that I think the ESNIKeys structure is too >> complicated and could do with a bunch of changes. The ones >> I'd argue for would be: >> >> - use a new RR, not TXT >> > > This is likely to happen. > > - have values of ESNIKey each only encode a single option >> (so no lists at all) since >1 value needs to be supported >> at the DNS level anyway >>- that'd mean exactly one ciphersuite >>- exactly one key share >> > > I don't agree with this. It is going to lead to a lot of redundancy because > many > servers will support >1 cipher suite with the same key share. Moreover, from > an implementation perspective, supporting >1 RR would be quite a bit more > work. Aren't DNS answers RRsets? I may be wrong but I thought DNS clients have to handle that anyway, and I'd expect use of RRsets to be a part of figuring out a multi-CDN story. That said, >1 ciphersuite wouldn't be so bad if that were the only list per RData instance. Or maybe one could get rid of it entirely via some conditional link to the set of suites in the CH, not sure. (Or just go fully experimental and say everyone doing esni for now has to use the same suite all the time.) > >- no extensions (make an even newer RR or version-bump:-) >> > > Again, not a fan of this. It leads to redundancy. That's reasonable. OTOH, it's equally reasonable to say that we're dealing with an experimental draft and a future PS version could use another RRtype and add extensions if they end up needed.
> > > - get rid of not_before/not_after - I don't believe those >> are useful given TTLs and they'll just lead to failures >> > > I'm mostly ambivalent on this, but on balance, I think these are useful, > as they are not tied to potentially fragile DNS TTLs. If there were a justification offered for 'em I'd be ok with it, but TBH, I'm not seeing it. And my main experience of the similar dates on RRSIGs are that they just break things and don't help. Put another way, I don't know what sensible code to write to decide between not connecting or sending SNI in clear if one of these dates is out of whack. (I'd be tempted to just ignore the time constraints and try to send the SNI encrypted instead.) And having to deploy a cron job equivalent for the DNS data is an order of magnitude harder than not. > > - get rid of padded_length - just say everyone must >> always use the max (260?) - > > > I'm not in favor of this. The CH is big enough as it is, and this has a > pretty big impact on that, especially for QUIC. There are plenty of > scenarios where the upper limit is known and << 160. True, big CH's are a bit naff, but my (perhaps wrong) assumption was that nobody cared since the F5 bug. It seems a bit wrong though to have every domain that's behind the same front have to publish this. I'm also not sure it'll work well if we ever end up with cases where domains A and B both use fronts/CDNs x and y and can't figure out a good value as x prefers 132 and y prefers 260. How about rounding up to the nearest power of 2 that's bigger than 5? (Or some such.) Very long names might lose some protection, but I'm not sure that's a big deal and one can likely just register a shorter name for applications using ESNI. > > > that needs to be the same >> for all encrypted sni values anyway so depending on >> 'em all to co-ordinate the same value in DNS seems >> fragile >> > > It only has to be the same for all the ones in the anonymity set, and they > already need to coordinate on the key.
Saying that every key share in DNS needs to be published with the same padded_length would be ok actually. (As a nasty hack, you could even derive the padded_length from the value of the key_share and fronters could just keep generating shares until they get one that works:-) > - I'm not convinced the checksum is useful, but it's not >> hard to handle >> - (Possibly) drop the base64 encoding, make it DNS operator >> friendly text (or else binary with a zonefile text format >> defined in addition) >> > > We are likely to drop the base64 encoding. Ack. And just to note again - I suspect a bunch of the above would be better sorted out as ancillary changes once a multi-CDN proposal is figured out. Cheers, S. > > -Ekr > > >> [1] https://github.com/sftcd/openssl/tree/master/esnistuff
Re: [TLS] ESNIKeys over complex
On Tue, Nov 20, 2018 at 1:46 PM Stephen Farrell wrote: > > Hiya, > > I've started to try code up an openssl version of this. [1] > (Don't be scared though, it'll likely be taken over by a > student in the new year:-) > Thanks for your comments. Responses below. From doing that I think the ESNIKeys structure is too > complicated and could do with a bunch of changes. The ones > I'd argue for would be: > > - use a new RR, not TXT > This is likely to happen. - have values of ESNIKey each only encode a single option > (so no lists at all) since >1 value needs to be supported > at the DNS level anyway >- that'd mean exactly one ciphersuite >- exactly one key share > I don't agree with this. It is going to lead to a lot of redundancy because many servers will support >1 cipher suite with the same key share. Moreover, from an implementation perspective, supporting >1 RR would be quite a bit more work. - no extensions (make an even newer RR or version-bump:-) > Again, not a fan of this. It leads to redundancy. - get rid of not_before/not_after - I don't believe those > are useful given TTLs and they'll just lead to failures > I'm mostly ambivalent on this, but on balance, I think these are useful, as they are not tied to potentially fragile DNS TTLs. - get rid of padded_length - just say everyone must > always use the max (260?) - I'm not in favor of this. The CH is big enough as it is, and this has a pretty big impact on that, especially for QUIC. There are plenty of scenarios where the upper limit is known and << 160. that needs to be the same > for all encrypted sni values anyway so depending on > 'em all to co-ordinate the same value in DNS seems > fragile > It only has to be the same for all the ones in the anonymity set, and they already need to coordinate on the key.
- I'm not convinced the checksum is useful, but it's not > hard to handle > - (Possibly) drop the base64 encoding, make it DNS operator > friendly text (or else binary with a zonefile text format > defined in addition) > We are likely to drop the base64 encoding. -Ekr > [1] https://github.com/sftcd/openssl/tree/master/esnistuff
[TLS] ESNIKeys over complex
Hiya, I've started to try code up an openssl version of this. [1] (Don't be scared though, it'll likely be taken over by a student in the new year:-) From doing that I think the ESNIKeys structure is too complicated and could do with a bunch of changes. The ones I'd argue for would be: - use a new RR, not TXT - have values of ESNIKey each only encode a single option (so no lists at all) since >1 value needs to be supported at the DNS level anyway - that'd mean exactly one ciphersuite - exactly one key share - no extensions (make an even newer RR or version-bump:-) - get rid of not_before/not_after - I don't believe those are useful given TTLs and they'll just lead to failures - get rid of padded_length - just say everyone must always use the max (260?) - that needs to be the same for all encrypted sni values anyway so depending on 'em all to co-ordinate the same value in DNS seems fragile - I'm not convinced the checksum is useful, but it's not hard to handle - (Possibly) drop the base64 encoding, make it DNS operator friendly text (or else binary with a zonefile text format defined in addition) I'm fine that such changes don't get done for a while (so I or my student get time to try make stuff work:-) and it might in any case take a while to figure out how to handle the multi-CDN use-case discussed in Bangkok which would I guess also affect this structure some, but I wanted to send this to the list while it's fresh for me. Cheers, S. [1] https://github.com/sftcd/openssl/tree/master/esnistuff
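Stephen's stripped-down record (one suite, one key share, no lists, no extensions, no not_before/not_after) is concrete enough to sketch an encoding for. The field layout below is purely illustrative, not anything from the draft:

```python
import struct

def encode_simple_esnikeys(version, suite, group, key_share, pad_len):
    """Binary encoding of a hypothetical single-option ESNIKeys value:
    version, one cipher suite, one named group, one key share, and a
    padded_length -- no lists, no extensions, no validity window.
    All fields big-endian, per usual TLS presentation conventions."""
    return (struct.pack("!HHH", version, suite, group)
            + struct.pack("!H", len(key_share)) + key_share
            + struct.pack("!H", pad_len))

# Illustrative values: a draft-style version, TLS_AES_128_GCM_SHA256
# (0x1301), x25519 (0x001d), a 32-byte share, padded_length 260.
blob = encode_simple_esnikeys(0xff01, 0x1301, 0x001d, b"\x00" * 32, 260)
print(len(blob))  # 6 + 2 + 32 + 2 = 42
```

Multiple such values would then live as separate RRs in the RRset, which is where the ">1 RR handling" debate in this thread comes in.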
Re: [TLS] regd. signature algorithm 0x0804 (rsa_pss_rsae_sha256) use in TLSv1.2 CertificateVerify
Thanks David. with regards, Saravanan. On Wed, 21 Nov 2018 at 02:07, David Benjamin wrote: > > Yes, this is correct. > > On Tue, Nov 20, 2018 at 10:35 AM M K Saravanan wrote: >> >> Hi, >> >> RFC8446: >> = >> 4.2.3. Signature Algorithms >> >> [...] >> - Implementations that advertise support for RSASSA-PSS (which is >> mandatory in TLS 1.3) MUST be prepared to accept a signature using >> that scheme even when TLS 1.2 is negotiated. In TLS 1.2, >> RSASSA-PSS is used with RSA cipher suites. >> >> = >> >> The above paragraph gives me an impression that, in TLSv1.2, if >> CertificateRequest message advertise 0x0804, then the client can sign >> the CertificateVerify message with 0x0804 if client cert is RSA. >> >> 0x0804 = rsa_pss_rsae_sha256 >> >> Can some one please confirm whether my understanding is correct? >> >> with regards, >> Saravanan >> >> On Wed, 21 Nov 2018 at 00:27, M K Saravanan wrote: >> > >> > Hi, >> > >> > If a TLSv1.2 Certificate Request message contains 0x0804 >> > (rsa_pss_rsae_sha256) as one of the supported signature algorithms, >> > can a client sign the CertificateVerify message using that algorithm? >> > (client cert is RSA). Is it allowed in TLSv1.2? >> > >> > with regards, >> > Saravanan
Re: [TLS] regd. signature algorithm 0x0804 (rsa_pss_rsae_sha256) use in TLSv1.2 CertificateVerify
Yes, this is correct. On Tue, Nov 20, 2018 at 10:35 AM M K Saravanan wrote: > Hi, > > RFC8446: > = > 4.2.3. Signature Algorithms > > [...] > - Implementations that advertise support for RSASSA-PSS (which is > mandatory in TLS 1.3) MUST be prepared to accept a signature using > that scheme even when TLS 1.2 is negotiated. In TLS 1.2, > RSASSA-PSS is used with RSA cipher suites. > > = > > The above paragraph gives me an impression that, in TLSv1.2, if > CertificateRequest message advertise 0x0804, then the client can sign > the CertificateVerify message with 0x0804 if client cert is RSA. > > 0x0804 = rsa_pss_rsae_sha256 > > Can some one please confirm whether my understanding is correct? > > with regards, > Saravanan > > On Wed, 21 Nov 2018 at 00:27, M K Saravanan wrote: > > > > Hi, > > > > If a TLSv1.2 Certificate Request message contains 0x0804 > > (rsa_pss_rsae_sha256) as one of the supported signature algorithms, > > can a client sign the CertificateVerify message using that algorithm? > > (client cert is RSA). Is it allowed in TLSv1.2? > > > > with regards, > > Saravanan
Re: [TLS] regd. signature algorithm 0x0804 (rsa_pss_rsae_sha256) use in TLSv1.2 CertificateVerify
Hi, RFC8446: = 4.2.3. Signature Algorithms [...] - Implementations that advertise support for RSASSA-PSS (which is mandatory in TLS 1.3) MUST be prepared to accept a signature using that scheme even when TLS 1.2 is negotiated. In TLS 1.2, RSASSA-PSS is used with RSA cipher suites. = The above paragraph gives me an impression that, in TLSv1.2, if CertificateRequest message advertise 0x0804, then the client can sign the CertificateVerify message with 0x0804 if client cert is RSA. 0x0804 = rsa_pss_rsae_sha256 Can some one please confirm whether my understanding is correct? with regards, Saravanan On Wed, 21 Nov 2018 at 00:27, M K Saravanan wrote: > > Hi, > > If a TLSv1.2 Certificate Request message contains 0x0804 > (rsa_pss_rsae_sha256) as one of the supported signature algorithms, > can a client sign the CertificateVerify message using that algorithm? > (client cert is RSA). Is it allowed in TLSv1.2? > > with regards, > Saravanan
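The check Saravanan describes (and David confirms) amounts to a simple membership test on the advertised algorithm list. This is a minimal sketch with a hypothetical helper name, not any TLS library's API:

```python
RSA_PSS_RSAE_SHA256 = 0x0804  # SignatureScheme codepoint from RFC 8446

def may_sign_with_pss(cert_request_sig_algs, client_cert_type="RSA"):
    """Per RFC 8446 Section 4.2.3, rsa_pss_rsae_sha256 may be used for
    the CertificateVerify even when TLS 1.2 is negotiated, provided the
    server advertised it in the CertificateRequest and the client
    certificate key is RSA."""
    return client_cert_type == "RSA" and RSA_PSS_RSAE_SHA256 in cert_request_sig_algs

# Server advertised rsa_pkcs1_sha256 (0x0401) and rsa_pss_rsae_sha256.
print(may_sign_with_pss([0x0401, 0x0804]))  # True
```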
[TLS] regd. signature algorithm 0x0804 (rsa_pss_rsae_sha256) use in TLSv1.2 CertificateVerify
Hi, If a TLSv1.2 Certificate Request message contains 0x0804 (rsa_pss_rsae_sha256) as one of the supported signature algorithms, can a client sign the CertificateVerify message using that algorithm? (client cert is RSA). Is it allowed in TLSv1.2? with regards, Saravanan
Re: [TLS] WGLC for draft-ietf-tls-dtls-connection-id
On Wed, 2018-11-07 at 14:39 +0700, Joseph Salowey wrote: > This is the working group last call for the "Connection Identifiers > for DTLS 1.2" draft available at > https://datatracker.ietf.org/doc/draft-ietf-tls-dtls-connection-id/. > Please review the document and send your comments to the list by 2359 > UTC on 30 November 2018. > Hi, It is a very good document, I support its publication. Some editorial comments follow. I think the paragraph of section 3 that starts: "This is effectively the simplest possible design that will work." looks unnecessary; why would previous designs be mentioned unless there is a challenge for this protocol, and in that case an appendix may be more suitable. What about replacing it with: "The design is kept simple to ease implementation and deployment"? In the security considerations, the following two paragraphs seem to be part of a single one, separated by a However (i.e., replace Importantly with However), or do I misread it? With multi-homing, an adversary is able to correlate the communication interaction over the two paths, which adds further privacy concerns. Importantly, the sequence number makes it possible for a passive attacker to correlate packets across CID changes. Thus, even if a client/server pair do a rehandshake to change CID, that does not provide much privacy benefit. regards, Nikos