On 2/10/2018 5:21 PM, Ryan Sleevi via dev-security-policy wrote:
On Tue, Oct 2, 2018 at 10:02 AM Dimitris Zacharopoulos <[email protected]>
wrote:

But this inaccurate data is not used in the validation process nor
included in the certificates. Perhaps I didn't describe my thoughts
accurately. Let me have another try using my previous example. Consider
an
Information Source that documents, in its practices, that they provide:


     1. the Jurisdiction of Incorporation (they check official government
     records),
     2. registry number (they check official government records),
     3. the name of legal representative (they check official government
     records),
     4. the official name of the legal entity (they check official
     government records),
     5. street address (they check the address of a utility bill issued
     under the name of the legal entity),
     6. telephone numbers (self-reported),
     7. color of the building (self-reported).

The CA evaluates this practice document and accepts information 1-5 as
reliable, dismisses information 6 as non-reliable, and dismisses
information 7 as irrelevant.

Your argument suggests that the CA should dismiss this information
source
altogether, even though it clearly has acceptable and verified
information
for 1-5. Is that an accurate representation of your statement?

Yes, I'm stating that the existence of and inclusion of 5-7 calls into
question whether or not this is a reliable data source.
Right, but in my example, the data source has already described -via
their practices- that this is how they collect each piece of data. The
CA, as a recipient of this data, can choose how much trust to lay upon
each piece of information. Therefore, IMHO the CA should evaluate and
use the reasonably verified information from that data source and
dismiss the rest. That seems more logical to me than dismissing a data
source entirely because they include "the color of the building", which
is self-reported.
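The selective-acceptance approach described above could be sketched roughly as follows. This is an illustrative model only; the field names, provenance labels, and acceptance rule are hypothetical assumptions, not anything mandated by the EV Guidelines:

```python
# Hypothetical sketch of per-field evaluation of an information source.
# All names and categories here are illustrative assumptions.

# How the source documents it collects each field (items 1-7 above)
SOURCE_PRACTICES = {
    "jurisdiction_of_incorporation": "official_government_records",
    "registry_number": "official_government_records",
    "legal_representative": "official_government_records",
    "legal_entity_name": "official_government_records",
    "street_address": "utility_bill",
    "telephone_numbers": "self_reported",
    "building_color": "self_reported",
}

# Provenances the CA's evaluation deems reliable
RELIABLE_PROVENANCES = {"official_government_records", "utility_bill"}

# Fields the CA considers relevant to validation at all
RELEVANT_FIELDS = set(SOURCE_PRACTICES) - {"building_color"}

def usable_fields(practices, reliable, relevant):
    """Return only the fields the CA may rely on from this source."""
    return {
        field for field, provenance in practices.items()
        if field in relevant and provenance in reliable
    }

print(sorted(usable_fields(SOURCE_PRACTICES, RELIABLE_PROVENANCES, RELEVANT_FIELDS)))
```

Under this sketch, items 1-5 survive the filter while the self-reported telephone numbers and the irrelevant building color are dismissed, without discarding the source as a whole.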

Your parenthetical
about how they check that is what the CA has the burden to demonstrate,
particularly given that they have evidence that there is
less-than-reliable
data included. How does the competent CA ensure that the registry number
is
not self-reported -
The information in the parentheses would be documented in the trusted
source's practices, and the CA would make an inquiry to check that these
practices are actually implemented and followed.

or that the QIIS allows it to be self-reported in the
future?
No one can predict the future, which is why there is a process for
periodic re-evaluation.

So let me understand: Your view is that QIISs publish detailed policies
about the information they obtain (they don't), and the CA must
periodically re-evaluate them (which isn't in the BRs) to determine which
information is reliable or not.

EVG 11.11.5 says that

"The CA SHALL use a documented process to check the accuracy of the database and ensure its data is acceptable, *including reviewing the database provider's terms of use*. The CA SHALL NOT use any data in a QIIS that the CA knows is (i) self-reported and (ii) not verified by the QIIS as accurate. Databases in which the CA or its owners or affiliated companies maintain a controlling interest, or in which any Registration Authorities or subcontractors to whom the CA has outsourced any portion of the vetting process (or their owners or affiliated companies) maintain any ownership or beneficial interest, do not qualify as a QIIS."

I would assume that the "database provider's terms of use" describe the practices, so it is not fiction. Perhaps this doesn't apply to many information sources, but it's not unheard of.
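The prohibition quoted from EVG 11.11.5 reduces to a simple predicate: a datum is off-limits only when it is *both* self-reported *and* not verified by the QIIS as accurate. A minimal, illustrative-only encoding (the function name and parameters are assumptions, not anything from the Guidelines):

```python
# Illustrative encoding of the EVG 11.11.5 rule quoted above:
# the CA SHALL NOT use data it knows is (i) self-reported AND
# (ii) not verified by the QIIS as accurate.

def may_use_datum(self_reported: bool, qiis_verified: bool) -> bool:
    """True if the datum is not prohibited by the 11.11.5 rule."""
    return not (self_reported and not qiis_verified)

assert may_use_datum(self_reported=False, qiis_verified=False)  # independently sourced
assert may_use_datum(self_reported=True, qiis_verified=True)    # self-reported but verified
assert not may_use_datum(self_reported=True, qiis_verified=False)  # prohibited
```

Note that under this reading, self-reported data is usable if the QIIS has verified it as accurate; only unverified self-reported data is categorically excluded.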

As for the re-evaluation, we (HARICA) consider this part of ETSI EN 319 401 section 7.7 (Operational Security) with guidance provided by ISO/IEC 27002:2013 clause 15. I assume that WebTrust has something similar. Perhaps the connection is not so "direct" but when you depend on some external entity to provide any kind of information related to CA operations (in our case, the Subject information validation), then you must follow best practice and periodically re-evaluate.


Presumably, that RDS/QIIS is also audited
against such statements (they aren't) in order to establish their
reliability. That's a great world to imagine, but that's not the world of
RDS or QIIS, and so it's an entirely fictitious world to imagine.

That world is either saying the RDS/QIIS is a Delegated Third Party - and
all the audit issues attendant - or we're treating them like a DTP for all
intents and purposes, and have to deal with all of the attendant DTP
issues, such as the competency of the auditor, the scoping of the audits,
etc. I see no gain from an overly convoluted system that, notably, does not
exist today, as compared to an approach of whitelisting such that the CA no
longer has to independently assess each source, and can instead work with
the community to both report omissions of qualified sources AND report
issues with existing qualified sources. That seems like a net win, without
an unnecessary veneer of assurance that does not actually provide it (as
shown by the issues with DTP audits for a number of CAs).

I have already stated that I fully agree with this goal :)



This is where the 'stopped-clock' metaphor is incredibly appropriate. Just
because 1-5 happen to be right, and happen to come from the right process,
by no means predicts future guarantees of correctness or accuracy.

Of course, this is why you need re-evaluation. You can't guarantee
correctness for anything, otherwise we wouldn't have cases of
mis-issuance or mis-behavior. We add controls in processes to minimize
the risk of getting bad data.

More importantly, the inclusion of 5-7 in the reporting suggests that
*unreliable* data is actively being seen as acceptable, and because of
that, the CA needs to take a view against including it.
I am not sure if you have misunderstood my description, but let me
repeat that despite receiving the full data set, the CA would use only the
information pre-evaluated as reliable, and that doesn't include
self-reported data, which they know beforehand (because it is
documented in the data source's practices) is self-reported.

You're resting a lot of assumptions on a world that doesn't exist, so
perhaps that's where we're at a disconnect. I'm discussing the world that
we have, and the sources CAs are using today as RDS and QIIS. Perhaps it's
not as applicable to ETSI audited CAs, because they share such a tight
government regulatory framework that they primarily only concern themselves
with EU business registries. However, for EV - not QWACs - and particularly
when looking at an international representation and not just a
trans-national representation in the EU, such systems are not the ones
in practice.

The EU regulatory framework does not mandate which registries to use. I assume that all CAs use the same rules for evaluating information sources. CAs can use any number of business registries. If a business in Greece wants to get an EV Certificate from a CA in the US, that CA might use the Greek Business Registry after properly evaluating it according to their evaluation procedures.



This is the stopped-clock argument - that the process to getting that
information doesn't matter, as long as the information turns out
eventually
consistent. However, the act of certification is about ensuring that the
process is well-formed.
It was considered well-formed when the certificate was issued.

Thanks. This confirms the view that I find deeply incompatible with the Web
PKI and improving the CA ecosystem - that it doesn't matter how you got the
answer, as long as the answer looks right in the end. Cheating is not an
acceptable means to getting the right answer, and using unreliable methods
to assert confidence that the data was accurate at time of issue is both
cheating and lying. The assertion by the CA cannot be reasonably stated
through the use of an unreliable data source, full stop.

These are very strong words for something that is clearly not deliberate. According to the example, the process was well-formed, evaluations were conducted with the facts known at the time, and nothing was done in bad faith or with intent (until proven otherwise, of course).


If the only backstop against misissuance is
eventual consistency, then there's no incentive to get the process
correct.
And if the process isn't correct, there's nothing to mitigate adversarial
impact.
This is not what is described in the current requirements. Re-evaluation
of data sources and quarterly internal reviews are some of the existing
controls for catching possible inconsistencies and flaws in existing
processes, with the goal of improving and correcting those processes. The
CA audit schemes themselves aim for continuous improvement in all areas
(organizational and technical controls).

This is not what is described in the current requirements, nor as
practiced. Yes, there is a regular re-evaluation of the CA's controls. No,
it does not require what you describe. And no, it does not excuse using
unreliable data sources in order to wait and see "what's the worst that can
happen," as somehow some perverse risk balancing ("the cost of doing it
right is so expensive, and it's not likely that someone would lie to this
unreliable source, so we'll use this unreliable source" *is* risk
management).

This presumption that audits somehow balance or catch this, however, is
laughable at best. The audits are not only not designed to address this,
they're fundamentally incapable of it, as I mentioned earlier in this
thread. The entire balancing act rests on the auditor knowing that, say,
D&B allows self-reporting of these fields - it doesn't resolve the problem,
it just moves it one step away, to an even less-qualified party because the
auditors are not going to be effective at monitoring everything going on,
because it's not aligned with their business duties.

I have come across some very competent auditors who capture these risks and gaps in CA operations and insist on seeing them properly addressed/mitigated, even though they may not be strictly mandated in the BRs. Most competent auditors I've met care more about Relying Parties than about the actual CA, because they understand what's at stake. I am sure both WebTrust and ETSI have excellent guidance on how to establish a secure organizational, operational and technical environment, and some auditors use these standards to ensure that CAs adhere to the highest security expectations. I think your last overly general statement is unfair to these very competent auditors who have the skills, competence and deep knowledge to evaluate these complicated standards, propose very meaningful improvements to CAs, and take their obligations of impartiality and objectivity very seriously.



I have already agreed that creating a global list of reliable
information sources is great because transparency will bring common
understanding in the evaluation processes. Until we get there though,
there is room for improving existing requirements and, as Tim said, one
does not prevent the other.

I disagree with both you and Tim here - I think this approach to
'incremental' improvement is merely a means to delay meaningful
improvement, even if that improvement is 'tough'. I can understand why, as
CAs, there's value in appearing to do more than nothing, but in an
adversarial model, until the problem is fixed, it's not substantially
better than doing nothing. These approaches to 'incremental' improvements -
such as relying on auditors, expecting QIIS/RDS to have comprehensive
audits and policies around data handling, around quarterly CA reviews -
don't actually address the core problems in any substantive way. However,
they take energy - from CAs and the community - and in that regard, they
prevent discussions about how to 'solve' the problem due to ratholes on how
to 'bootstrap solutions' like transparency ledgers or normative audit
criteria.

There may be people with enough energy for both but I understand your argument. I'm happy to contribute in any direction.

Dimitris.

_______________________________________________
dev-security-policy mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-security-policy
