On Wed, Jul 10, 2019 at 11:43 AM Scott Rea via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Mozilla’s new process, based on its own admission, is to ignore technical
> compliance and instead base its decisions on some yet to be disclosed
> subjective criterion which is applied selectively.  We think everybody in
> the Trust community should be alarmed by the fact that the new criterion
> for inclusion of a commercial CA now ignores any qualification of the CA or
> its ability to demonstrate compliant operations. We fear that in doing so
> Mozilla is abandoning its foundational principles of supporting safe and
> secure digital interactions for everyone on the internet.  This new process
> change seems conveniently timed to derail DigitalTrust’s application.
> By Mozilla’s own admission, DigitalTrust is being held to a new standard
> which seems to be associated with circular logic – a media bias based on a
> single claimed event that aims to falsely implicate DarkMatter is then used
> to inform Mozilla’s opinion, and the media seizes on this outcome to
> substantiate the very same bias it aimed to introduce in the first place.
> Additionally, in targeting DigitalTrust and in particularly DarkMatter’s
> founder Faisal Al Bannai, on the pretense that two companies can’t operate
> independently if they have the same owner, we fear another dangerous
> precedent has been set.

I broadly concur with these points.

In other significant risk management disciplines and domains in which a
plurality of diverse applicants seek trust, objectivity and strong
data-backed alignment of specific risk factors associated to specific bad
outcomes are prized above practically all else.  An obvious example is
consumer credit lending and particularly large loans like mortgages.

As an analogy, consider that, at least in a broad directional sense, the
change in Mozilla's decisioning and underlying reasoning is akin to moving
away from a mechanism in which one particular FICO score means one
particular outcome regardless of the color of your skin or your sexuality,
and toward a mechanism in which two applicants with matching FICO scores
nevertheless meet dissimilar fates: one of them is declined not for
falling outside objective risk management criteria, but because they
"seem shady" or "fit the description of someone who did something bad" or
"just aren't a good match for our offering".  In finance, such decisioning
wouldn't survive even the most cursory and forgiving review.  That fact
pattern wouldn't overcome a claim of racism even if the lender and the
declined applicant were of the same race.

Please let me be quite specific: I am not suggesting that racial or
national animus is expressed in this decision by Mozilla.  I drew the
parallel to racism in finance because it is exceedingly well documented
that strong, objective systems of risk management and decisioning led to
better overall financial outcomes AND significantly opened the door to
credit (aka trust) for otherwise improperly maligned and underserved
populations.
To my mind, this decision is a regression from a more formal standard and
better compliance monitoring than has ever before been available (CT,
etc.) to a subjective morass of handwringing, feelings, and bias.

I cannot see how one reconciles taking pride in one's risk management and
compliance acumen while making such a regression.  That kind of dissonance
would eat at my soul.