On Tue, Oct 2, 2018 at 10:02 AM Dimitris Zacharopoulos <[email protected]>
wrote:

> >> But this inaccurate data is not used in the validation process nor
> >> included in the certificates. Perhaps I didn't describe my thoughts
> >> accurately. Let me have another try using my previous example. Consider
> >> an Information Source that documents, in its practices, that they
> >> provide:
> >>
> >>
> >>     1. the Jurisdiction of Incorporation (they check official government
> >>     records),
> >>     2. registry number (they check official government records),
> >>     3. the name of legal representative (they check official government
> >>     records),
> >>     4. the official name of the legal entity (they check official
> >>     government records),
> >>     5. street address (they check the address of a utility bill issued
> >>     under the name of the legal entity),
> >>     6. telephone numbers (self-reported),
> >>     7. color of the building (self-reported).
> >>
> >> The CA evaluates this practice document and accepts information 1-5 as
> >> reliable, dismisses information 6 as non-reliable, and dismisses
> >> information 7 as irrelevant.
> >>
> >> Your argument suggests that the CA should dismiss this information
> >> source altogether, even though it clearly has acceptable and verified
> >> information for 1-5. Is that an accurate representation of your
> >> statement?
> >>
> > Yes, I'm stating that the existence of and inclusion of 5-7 calls into
> > question whether or not this is a reliable data source.
>
> Right, but in my example, the data source has already described -via
> their practices- that this is how they collect each piece of data. The
> CA, as a recipient of this data, can choose how much trust to lay upon
> each piece of information. Therefore, IMHO the CA should evaluate and
> use the reasonably verified information from that data source and
> dismiss the rest. That seems more logical to me than dismissing a data
> source entirely because they include "the color of the building", which
> is self-reported.
>
> > Your parenthetical
> > about how they check that is what the CA has the burden to demonstrate,
> > particularly given that they have evidence that there is
> > less-than-reliable data included. How does the competent CA ensure that
> > the registry number is not self-reported -
>
> The information in the parentheses would be documented in the trusted
> source's practices, and the CA would conduct an inquiry to check that
> these practices are actually implemented and followed.
>
> > or that the QIIS allows it to be self-reported in the
> > future?
>
> No one can predict the future, which is why there is a process for
> periodic re-evaluation.
>

So let me understand: your view is that QIISs publish detailed policies
about the information they obtain (they don't), and that the CA must
periodically re-evaluate those policies (which isn't in the BRs) to
determine which information is reliable. Presumably, that RDS/QIIS is also
audited against such statements (they aren't) in order to establish their
reliability. That's a great world to imagine, but it is not the world of
RDSes or QIISes as they exist, and so it's an entirely fictitious one.

That world either means the RDS/QIIS is a Delegated Third Party - with all
the attendant audit issues - or we're treating them like a DTP for all
intents and purposes and have to deal with all of the attendant DTP
issues, such as the competency of the auditor, the scoping of the audits,
etc. I see no gain from an overly convoluted system that, notably, does not
exist today, as compared to an approach of whitelisting, such that the CA
no longer has to independently assess each source and can instead work with
the community both to report omissions of qualified sources AND to report
issues with existing qualified sources. That seems like a net win, without
an unnecessary veneer of assurance that does not actually deliver any (as
shown by the issues with DTP audits at a number of CAs).
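
To make the whitelisting idea concrete - and this is purely an illustrative
sketch, not anything that exists in the BRs or in any current tooling; every
name and field below is hypothetical - a community-maintained list of
qualified sources could record, per source, which fields the community
agrees it is reliable for and any reported issues, so a CA's check becomes a
simple lookup rather than an independent assessment:

from dataclasses import dataclass, field

@dataclass
class QualifiedSource:
    # Hypothetical structure for one entry on a shared, community-maintained
    # list of qualified information sources.
    name: str
    qualified_fields: set = field(default_factory=set)  # fields the source may be relied on for
    known_issues: list = field(default_factory=list)    # community-reported problems, if any

# Hypothetical whitelist content, maintained by the community rather than
# evaluated independently by each CA.
WHITELIST = {
    "example-business-registry": QualifiedSource(
        name="example-business-registry",
        qualified_fields={"jurisdiction", "registry_number", "legal_name"},
    ),
}

def may_rely_on(source_id: str, field_name: str) -> bool:
    """A CA may rely on a field only if the source is on the shared list,
    is qualified for that field, and has no open reported issues."""
    src = WHITELIST.get(source_id)
    return src is not None and not src.known_issues and field_name in src.qualified_fields

print(may_rely_on("example-business-registry", "registry_number"))  # True
print(may_rely_on("example-business-registry", "street_address"))   # False

The only point of the sketch is that the per-source, per-field decision and
the issue reporting live in one shared place, rather than in each CA's
private evaluation.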


> > This is where the 'stopped-clock' metaphor is incredibly appropriate.
> > Just because 1-5 happen to be right, and happen to be following the
> > right process, is by no means a predictor of future guarantees of
> > correctness or accuracy.
>
> Of course, this is why you need re-evaluation. You can't guarantee
> correctness for anything, otherwise we wouldn't have cases of
> mis-issuance or mis-behavior. We add controls in processes to minimize
> the risk of getting bad data.
>
> > More importantly, the inclusion of 5-7 in the reporting suggests that
> > there is *unreliable* data actively being seen as acceptable, and
> > because of that, the CA needs to take a view against including it.
>
> I am not sure if you have misunderstood my description, but let me
> repeat that, despite getting the full data set, the CA would use only the
> information pre-evaluated as reliable, and that doesn't include
> self-reported data which they know beforehand (because it is
> documented in the data source's practices) to be self-reported.
>

You're resting a lot of assumptions on a world that doesn't exist, so
perhaps that's where the disconnect is. I'm discussing the world that
we have, and the sources CAs are using today as RDSes and QIISes. Perhaps
it's not as applicable to ETSI-audited CAs, because they share such a tight
government regulatory framework that they primarily concern themselves
with EU business registries. However, for EV - not QWACs - and particularly
when looking at an international representation rather than just a
trans-national representation within the EU, such systems are not what is
actually practiced.


> > This is the stopped-clock argument - that the process of getting that
> > information doesn't matter, as long as the information eventually turns
> > out consistent. However, the act of certification is about ensuring
> > that the process is well-formed.
>
> It was considered well-formed when the certificate was issued.
>

Thanks. This confirms the view that I find deeply incompatible with the Web
PKI and with improving the CA ecosystem - that it doesn't matter how you got
the answer, as long as the answer looks right in the end. Cheating is not an
acceptable means of getting the right answer, and using unreliable methods
to assert confidence that the data was accurate at time of issue is both
cheating and lying. The CA's assertion cannot reasonably be made on the
basis of an unreliable data source, full stop.


>
> > If the only backstop against misissuance is
> > eventual consistency, then there's no incentive to get the process
> > correct. And if the process isn't correct, there's nothing to mitigate
> > adversarial impact.
>
> This is not what is described in the current requirements. Re-evaluation
> of data sources and quarterly internal reviews are some of the existing
> controls to check for possible inconsistencies and flaws in existing
> processes, with the goal of improving and correcting those processes. The
> CA audit schemes themselves aim for continuous improvement in all areas
> (organizational and technical controls).
>

This is not what is described in the current requirements, nor is it what
is practiced. Yes, there is a regular re-evaluation of the CA's controls.
No, it does not require what you describe. And no, it does not excuse using
unreliable data sources in order to wait and see "what's the worst that can
happen," as if that were some perverse form of risk balancing, in which
"the cost of doing it right is so expensive, and it's not likely that
someone would lie to this unreliable source, so we'll use this unreliable
source" *is* risk management.

The presumption that audits somehow balance or catch this, however, is
laughable at best. The audits are not only not designed to address this,
they're fundamentally incapable of it, as I mentioned earlier in this
thread. The entire balancing act rests on the auditor knowing that, say,
D&B allows self-reporting of these fields - which doesn't resolve the
problem; it just moves it one step away, to an even less-qualified party,
because the auditors are not going to be effective at monitoring everything
going on, as it's not aligned with their business duties.


> I have already agreed that creating a global list of reliable
> information sources is great, because transparency will bring a common
> understanding of the evaluation processes. Until we get there, though,
> there is room for improving existing requirements and, as Tim said, one
> does not prevent the other.
>

I disagree with both you and Tim here - I think this approach to
'incremental' improvement is merely a means to delay meaningful
improvement, even if that improvement is 'tough'. I can understand why, for
CAs, there's value in appearing to do more than nothing, but in an
adversarial model, until the problem is fixed, it's not substantially
better than doing nothing. These approaches to 'incremental' improvement -
such as relying on auditors, expecting QIISes/RDSes to have comprehensive
audits and policies around data handling, or relying on quarterly CA
reviews - don't actually address the core problems in any substantive way.
However, they take energy - from CAs and the community - and in that
regard, they prevent discussions about how to 'solve' the problem by
creating ratholes about how to 'bootstrap' solutions like transparency
ledgers or normative audit criteria.


>
> >
> > By comparison, we don't leave the root password to our servers as
> > "secret", with the assumption that 99/100, the person logging in to
> > that server will be authorized. Just because it's eventually consistent
> > doesn't mean it's not fundamentally flawed.
> >
> Your example is analogous only if the CA "knowingly" allowed and used
> self-reported information from a data source, just as the administrators
> intentionally leave the root password as "secret" when they know this is
> a very insecure practice. My examples described that the CA WOULD NOT
> accept and use unreliable information from a data source, but only
> reliable information that had been previously evaluated.
>

No, it's not analogous. Your hypothetical CA, which does not exist, would
rely on reports, which do not exist, to make decisions, which are not
audited, about this source. My scenario was describing an entity that was
'doing their best' and 'making an informed decision'.

Your hypothetical is demonstrably false, given the heavy reliance on D&B by
modern CAs. If your hypothetical world existed, this would have been a
known issue and long since resolved. It's not, because that world doesn't
exist as you imagine.