On Thu, Jan 18, 2018 at 9:33 AM, Ryan Sleevi <r...@sleevi.com> wrote:

>
>
> On Thu, Jan 18, 2018 at 9:11 AM, Gervase Markham via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> On 18/01/18 13:55, Ryan Sleevi wrote:
>> > Was it your intent to redefine the problem like that? If not, do you
>> have
>> > those concerns about the objective measures, or is your goal to find
>> > objective measures which you subjectively believe to be 'fair'? For
>> > example, an objective measure would be "Paid for a 2-week vacation for
>> > Gervase Markham and family every year to a location of their choosing",
>> but
>> > I suspect that you might argue it's subjectively 'not fair'?
>>
>> If every CA had to do it, it would both be an objective measure and
>> subjectively fair. (Although one could argue it was unfair if I asked
>> one CA to send us to Bognor Regis and another to send us to the Maldives.
>> It would also mean a maximum of 26 CAs in the program at one time, and
>> in the past we have decided against a hard numerical limit.)
>>
>> I would like to find a set of objective measures which the group
>> considers as a whole to be "fair", in that the objective measures do not
>> use criteria which are irrelevant to operating as a CA (such as buying
>> my family a holiday). This may not be possible, of course - we will see
>> - but that doesn't stop me wanting it.
>
>
> To be honest, I think this highlights one of the challenges - namely, the
> lack of a numerical limit on the number of CAs.
>
> Absent a numerical limit, there will always be the risk that CAs that
> "shouldn't" be in are accepted. One way to mitigate that is to try to raise
> the floor - but with the risk of rejecting CAs that "should" be in, for
> some value of "should" and "shouldn't" (both in terms of past failures and
> prediction of future failures).
>
> Another way to attempt to address that is to work to mitigate the damage
> of those that 'shouldn't' be in - both to the ecosystem and users - and
> inevitably, a key part of that would include some notion of agility. That
> agility extends to both software updates and to ecosystem updates.
>
> For example, an objective criterion might be that every new CA supports the
> ACME protocol with some minimum prescribed set of interoperable validation
> methods. In that model, all new CAs are afforded the same 'advantages' (or,
> from the CA perspective, 'disadvantages') of having an interoperable
> system, while addressing the ecosystem need by allowing ACME-supporting
> clients to rapidly transition to other ACME-supporting CAs in the event
> that a CA is determined to be one that "shouldn't" be in (because it
> MITMed, for example).
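>
> (A rough sketch of that client-side agility, assuming Go and the
> golang.org/x/crypto/acme/autocert package; the hostname and directory
> URL below are placeholders, not any particular CA. The point is that
> moving between ACME-supporting CAs is a one-line configuration change:)
>
>     package main
>
>     import (
>         "net/http"
>
>         "golang.org/x/crypto/acme"
>         "golang.org/x/crypto/acme/autocert"
>     )
>
>     func main() {
>         m := &autocert.Manager{
>             Prompt:     autocert.AcceptTOS,
>             Cache:      autocert.DirCache("certs"),
>             HostPolicy: autocert.HostWhitelist("example.com"),
>             // Switching to another ACME-supporting CA is just a
>             // different directory URL; the validation methods are
>             // the interoperable part.
>             Client: &acme.Client{DirectoryURL: "https://acme.example-ca.test/directory"},
>         }
>         srv := &http.Server{
>             Addr:      ":443",
>             TLSConfig: m.TLSConfig(),
>         }
>         srv.ListenAndServeTLS("", "")
>     }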
>
> Similarly, requiring that all CAs disclose their trusted certificates via
> Certificate Transparency offers an objective measure (modulo the nuance of
> defining what constitutes 'disclosure'), thus reducing the risk involved
> with scrying the various business purposes - and potentially providing
> incentives for those that /could/ use private PKIs to do so if they have
> concerns with such disclosure.
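>
> (By way of illustration - not a mandate for any particular tooling -
> crt.sh already aggregates CT logs and exposes a JSON search endpoint,
> so auditing what a CA has disclosed for a given name is a few lines of
> Go; the domain here is a placeholder:)
>
>     package main
>
>     import (
>         "encoding/json"
>         "fmt"
>         "net/http"
>         "net/url"
>     )
>
>     func main() {
>         domain := "example.com" // placeholder name to audit
>         resp, err := http.Get("https://crt.sh/?output=json&q=" + url.QueryEscape(domain))
>         if err != nil {
>             panic(err)
>         }
>         defer resp.Body.Close()
>
>         // A subset of the fields crt.sh returns per logged certificate.
>         var entries []struct {
>             IssuerName string `json:"issuer_name"`
>             CommonName string `json:"common_name"`
>             NotBefore  string `json:"not_before"`
>         }
>         if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
>             panic(err)
>         }
>         for _, e := range entries {
>             fmt.Printf("%s  issued by  %s  (%s)\n", e.CommonName, e.IssuerName, e.NotBefore)
>         }
>     }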
>
> Alternatively, all new CAs could be required to issue certificates valid
> for, at most, 90 days. This further helps reduce the risk of 'getting it
> wrong'.
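>
> (That check is entirely mechanical - a sketch in Go, assuming a
> PEM-encoded certificate path is passed as the first argument:)
>
>     package main
>
>     import (
>         "crypto/x509"
>         "encoding/pem"
>         "fmt"
>         "os"
>         "time"
>     )
>
>     func main() {
>         pemBytes, err := os.ReadFile(os.Args[1])
>         if err != nil {
>             panic(err)
>         }
>         block, _ := pem.Decode(pemBytes)
>         cert, err := x509.ParseCertificate(block.Bytes)
>         if err != nil {
>             panic(err)
>         }
>         lifetime := cert.NotAfter.Sub(cert.NotBefore)
>         if lifetime > 90*24*time.Hour {
>             fmt.Printf("violation: validity of %v exceeds 90 days\n", lifetime)
>         } else {
>             fmt.Println("ok: within the 90-day maximum")
>         }
>     }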
>
> These are all objective measures that avoid the attempt to divine purpose
> and intent, and instead provide objective criteria based on the underlying
> set of concerns that necessitate the worry about intent. However, as you
> can also see, this inherently favors incumbents - so unless we treat an
> inclusion policy as equivalent to an acceptance policy, it doesn't
> meaningfully improve the overall set of trust.
>
> This is why I think that some of the current proposals - such as scope or
> size - are problematic, as they are attempts to try to mitigate through
> policy the set of concerns that we could alternatively (and equitably)
> mitigate through technology, and with a greater benefit to the overall
> ecosystem :)
>

Apologies - accidentally tabbed to the send button.

I was going to highlight the contrast of those proposals with an
alternative proposal. Limit the number of trusted root CAs to some
arbitrary number (say, 26). At that point, your criterion for who to include
becomes "Whoever is better than one of the existing 26, on whatever
dimensions the community values". At that point, you're no longer trying to
subjectively determine if the new CA rises past some abstract notion of the
minimum - you've instead got 26 concrete instances of some measure that you
can compare against, and see whether the new CA topples any of the
incumbents.

In the absence of having a strict limit, however, this isn't possible - and
so I think the answer needs to be "more agility" rather than "more policy".
Alternatively, if we (the community) want to limit to some subset, then we
can and should talk about the dimensions we value, and how to pit CAs
against each other in a "Security and Policy Thunderdome" - where the most
secure CAs, the ones that provide the most value to the ecosystem, and the
ones that are most responsive emerge the victors and recipients of 'trust' -
and those that don't compete on these valuable (to users) dimensions fade
away or dramatically improve.
