On Thu, 17 Nov 2016 15:43:57 -1000 Brian Smith <[email protected]> wrote:
> Ryan Sleevi <[email protected]> wrote:
> >
> > On Thu, Nov 17, 2016 at 3:12 PM, Nick Lamb <[email protected]>
> > wrote:
> > > There's a recurring pattern in most of the examples. A technical
> > > counter-measure would be possible, therefore you suppose it's OK to
> > > screw up and the counter-measure saves us. I believe this is the
> > > wrong attitude. These counter-measures are defence in depth. We
> > > need this defence because people will screw up, but that doesn't
> > > make screwing up OK.
> >
> > I think there's an even more telling pattern in Brian's examples -
> > they're all looking in the past. That is, the technical mitigations
> > only exist because of the ability of UAs to change to implement
> > those mitigations, and the only reason those mitigations exist is
> > because UAs could leverage the CA/B Forum to prevent issues.
> >
> > That is, imagine if this was 4 years ago, and TCSCs were in vogue,
> > and as a result, most major sites had 5 year 1024-bit certificates.
> > The browser wants the lock to signify something - that there's some
> > reasonable assurance of confidentiality, integrity, and
> > authenticity. Yet neither 5 year certs nor 1024-bit certificates
> > met that bar.
>
> The fundamental problem is that web browsers accept certificates with
> validity periods that are years long. If you want to have the agility
> to fix things with an N month turnaround, reject certificates that
> are valid for more than N months.

The N month turnaround is only a reality if operators of TCSCs start
issuing certificates that comply with the new rules as soon as the new
rules are announced. How do you ensure that this happens?

Regards,
Andrew

_______________________________________________
dev-security-policy mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-security-policy
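As an aside, Brian's "reject certificates that are valid for more than N months" rule is easy to express as a client-side check on the certificate's notBefore/notAfter span. The sketch below is illustrative only: the 13-month cap and the 31-days-per-month approximation are assumptions of mine, not anything proposed in this thread or mandated by any browser policy.

```python
from datetime import datetime, timedelta

# Assumed policy cap (illustrative, not from the thread): 13 months,
# approximated generously as 31 days per month.
MAX_MONTHS = 13

def validity_acceptable(not_before: datetime, not_after: datetime,
                        max_months: int = MAX_MONTHS) -> bool:
    """Return True if the certificate's validity span fits the policy."""
    return (not_after - not_before) <= timedelta(days=max_months * 31)

# A 5-year certificate, as in the TCSC example above, fails the check;
# a 1-year certificate passes it.
five_year_ok = validity_acceptable(datetime(2016, 1, 1), datetime(2021, 1, 1))
one_year_ok = validity_acceptable(datetime(2016, 1, 1), datetime(2017, 1, 1))
```

The point of Andrew's reply still stands, of course: such a client-side check only improves agility once issuers actually start issuing shorter-lived certificates.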

