Re: DarkMatter Concerns

2019-07-10 Thread Scott Rea via dev-security-policy
G’day Folks,

DigitalTrust first learned of the Mozilla decision via Reuters. We believe this 
is emblematic of Mozilla’s approach to our application which appears to have 
been predetermined from the outset. 

We believe yesterday’s decision is unfair and demonstrates an anti-UAE bias 
where a 2016 media report referring to a single claimed event that aims to 
falsely implicate DarkMatter (and repeatedly echoed over a span of 4 years) has 
now outranked Mozilla’s established process of demonstrated technical 
compliance. This very same compliance has been met by DigitalTrust for three 
consecutive years with full transparency. 

The emerging principle here seems to be that 508 WebTrust audit controls are 
not sufficient to outweigh a single media allegation referring to work we as 
well as DarkMatter simply don’t do. In fact DarkMatter’s work is focused on the 
exact opposite of the false claim as evidenced by the continuous work to 
protect all internet users, for example through on-going disclosure of zero day 
vulnerabilities to the likes of Cisco, Sony, ABB and others.

Mozilla’s new process, based on its own admission, is to ignore technical 
compliance and instead base its decisions on some yet to be disclosed 
subjective criterion which is applied selectively.  We think everybody in the 
Trust community should be alarmed by the fact that the new criterion for 
inclusion of a commercial CA now ignores any qualification of the CA or its 
ability to demonstrate compliant operations. We fear that in doing so Mozilla 
is abandoning its foundational principles of supporting safe and secure digital 
interactions for everyone on the internet.  This new process change seems 
conveniently timed to derail DigitalTrust’s application.  

By Mozilla’s own admission, DigitalTrust is being held to a new standard which 
seems to be associated with circular logic – a media bias based on a single 
claimed event that aims to falsely implicate DarkMatter is then used to inform 
Mozilla’s opinion, and the media seizes on this outcome to substantiate the 
very same bias it aimed to introduce in the first place. Additionally, in 
targeting DigitalTrust and in particular DarkMatter’s founder Faisal Al 
Bannai, on the pretense that two companies can’t operate independently if they 
have the same owner, we fear another dangerous precedent has been set. 

What’s at stake here is not only denial of the UAE’s Roots but also Mozilla’s 
denial of the UAE’s existing issuing CAs. This means the nation’s entire Public 
Trust customer base is now denied the same digital protections that everyone 
else enjoys.

We fear that Mozilla’s action to apply this subjective process selectively to 
DigitalTrust effectively amounts to incremental tariffs on the internet with 
Mozilla de-facto promoting anti-competitive behavior in what was once a vaunted 
open Trust community.  Mozilla is now effectively forcing the UAE to protect 
its citizens by relying on another nation or commercial CA – despite 
DigitalTrust meeting all of Mozilla’s previously published criteria – thus 
protecting a select number of operators and excluding or forcing newcomers to 
pay a premium without the added benefit of control.

In conclusion we see only two possible paths going forward.

Under the first path, we demand that Mozilla’s new standard be explicitly 
disclosed and symmetrically applied to every other existing member of the 
Mozilla Trust Program, with immediate effect. This would cover, based on the 
precedent of the DigitalTrust case, any CA deemed to be a risk to the Trust 
community, despite lacking substantive evidence. This would suggest that any CA 
that serves a national function, is working closely with governments to secure 
the internet for its citizens, or is associated with other practices covering 
cyber security capabilities (which would include a large group of countries and 
companies) would have to be removed.

Under the second path, we call on Mozilla to honor its founding principles 
outlined in its Manifesto that ‘individuals’ security and privacy on the 
internet are fundamental and must not be treated as optional’.  We firmly 
believe this applies to citizens and residents of the UAE and we demand that 
Mozilla reverses its decision.

In following the second path, Mozilla can right yesterday’s wrong – a decision 
that inspires little confidence in the due process applied in the case of 
DigitalTrust, as it seems to favor a subjective criterion based on a falsely 
established bias at the expense of rigorous technical controls and policy 
compliance. In reversing 
its decision, Mozilla can fulfil its core purpose to protect individual 
security and privacy on the Internet – in this case for UAE citizens - by 
enabling the UAE Roots as trusted in their products. And finally, by reversing 
its decision, Mozilla can find a path back to a balanced and objective approach 
that will demonstrate integrity to the world and the Trust community.

Regards,
-Scott

Re: DarkMatter Concerns

2019-07-10 Thread fabio.pietrosanti--- via dev-security-policy
I understand Nadim's points; there's a lot of subjective, biased "popular 
judgement".

While from a security standpoint "better safe than sorry" is a good 
principle, from a rights and fairness perspective it is a very bad one.

So further conversation is needed.

Following the DarkMatter removal, I would like to bring to Mozilla's attention 
the removal of a list of companies whose main business is something else, but 
which also do offensive security and surveillance, with public "credible 
evidence".

I've analysed the intermediate CA list in which DarkMatter appears: 
https://ccadb-public.secure.force.com/mozilla/PublicAllIntermediateCerts .
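
(For reference, a minimal sketch of the kind of keyword scan behind this 
analysis, assuming a locally saved CSV export of that report; the file name, 
the existence of a CSV export, and the column layout are assumptions:)

    # Scan a locally saved CSV export of the CCADB intermediate certificate
    # report for organisation names of interest; print matching rows so the
    # owner / issuer / subject fields can be inspected by hand.
    import csv

    KEYWORDS = ["Saudi Telecom", "Rohde", "Computer Sciences",
                "Attorney-General", "Geospatial-Intelligence"]

    with open("intermediates.csv", newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            text = " ".join(v or "" for v in row.values()).lower()
            if any(k.lower() in text for k in KEYWORDS):
                print(row)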

In this list it is possible to find the following companies operating against 
"people's safety", and there is "credible evidence" that they are doing so:


* Saudi Telecom Company

This company is publicly known to have sought to surveil and intercept people, 
as per the "credible evidence" at:
https://moxie.org/blog/saudi-surveillance/
https://citizenlab.ca/2014/06/backdoor-hacking-teams-tradecraft-android-implant/


* German Rohde & Schwarz

This company produces, installs, and supports surveillance systems for 
intelligence agencies in regimes such as Turkmenistan:
https://www.rferl.org/a/german-tech-firm-s-turkmen-ties-trigger-surveillance-concerns/29759911.html

They sell solutions such as IMSI catchers and mass internet surveillance 
tools to intelligence agencies:
https://www.rohde-schwarz.com/en/solutions/aerospace-defense-security/overview/aerospace-defense-overview_229832.html


* US "Computer Sciences Corporation"

CSC is a US intelligence and defense contractor that does CNE (Computer 
Network Exploitation), as the WikiLeaks ICWatch archive shows.

Read the profile of a former CSC employee doing CNE, as Snowden was doing:
https://icwatch.wikileaks.org/docs/rLynnette-Jackson932c7871cb1e83f3%3Fsp=0ComputerSciencesCorporationCyberSecurityAnalystSystemsEngineerRemoteSystemAdministrator2008-09-01icwatch_indeed

Additionally, their Wikipedia page acknowledges work for US intelligence:
https://en.wikipedia.org/wiki/Computer_Sciences_Corporation

CSC provided services to the United States Department of Defense,[23] law 
enforcement and intelligence agencies (FBI,[24] CIA, Homeland Security[23]), 
aeronautics and aerospace agencies (NASA). In 2012, U.S. federal contracts 
accounted for 36% of CSC total revenue.[25]


* Australia's Attorney-General's Department

Australia's Attorney-General's Department is a government agency that wants to 
permit the Australian Security Intelligence Organisation (ASIO) to hack IT 
systems belonging to non-involved, non-targeted parties.

It operates against people's safety, and there is credible evidence of its 
behaviour in supporting ASIO to hack people, so it is very likely to abuse its 
intermediate CA:
http://www.h-online.com/security/news/item/Australian-secret-services-to-get-licence-to-hack-1784139.html


* US "National Geospatial-Intelligence Agency" https://www.nga.mil

The NGA is a US military intelligence agency, equivalent to the NSA but 
operating on space-based GEOINT and SIGINT, serving US intelligence and defense 
agencies.

NGA is the Space partner of NSA:
https://www.nsa.gov/news-features/press-room/Article/1635467/joint-document-highlights-nga-and-nsa-collaboration/

I think that no one would object to shutting down an NSA-operated intermediate 
CA; I am wondering if Mozilla would consider this removal.


That said, given the approach that has been followed with DarkMatter regarding 
the "credible evidence" and "people's safety" principles, I would strongly argue 
that Mozilla should take action against the subjects documented above.

I will open a thread on this newsgroup for each of those companies to understand 
what the due process is and how it will compare to this one.

Fabio Pietrosanti (naif)

On Tuesday, July 9, 2019 at 6:19:36 PM UTC+2, Nadim Kobeissi wrote:
> Dear Wayne,
> 
> I fully respect Mozilla's mission and I fully believe that everyone here is
> acting in good faith.
> 
> That said, I must, in my capacity as a private individual, decry what I
> perceive as a dangerous shortsightedness and lack of intellectual rigor
> underlying your decision. I do this as someone with a keen interest in
> Internet freedom issues and not as someone who is in any way partisan in
> this debate: I don't care for DarkMatter as a company in any way whatsoever
> and have no relationship with anyone there.
> 
> I sense enough urgency in my concerns to pause my work schedule today and
> respond to this email. I will do my best to illustrate why I sense danger
> in your decision. Essentially there are three specific points I take issue
> with:
> 
> -
> 1: Waving aside demands for objective criteria.
> -
> You say that "if we rigidly applied our existing criteria, we would deny
> most inclusion requests." Far from being an excuse to put more weight (or
> in this case, perhaps almost all weight) on 

Re: DarkMatter Concerns

2019-07-10 Thread Fabio Pietrosanti via dev-security-policy
I understand Nadim's points; there's a lot of subjective, biased "popular 
judgement".

While from a security standpoint "better safe than sorry" is a good 
principle, from a rights and fairness perspective it is a very bad one.

So further conversation is needed.

Following the DarkMatter removal, I would like to bring to Mozilla's attention 
the removal of a list of companies whose main business is something else, but 
for which there is "public credible evidence" that they also do some kind of 
offensive security that goes "against people's safety" (as defined by Mozilla's 
principles).

I've analysed the intermediate CA list in which DarkMatter appears: 
https://ccadb-public.secure.force.com/mozilla/PublicAllIntermediateCerts .

In this list it is possible to find the following companies operating against 
"people's safety", and there is "credible evidence" that they are doing so:


* Saudi Telecom Company

This company is publicly known to have sought to surveil and intercept people, 
as per the "credible evidence" at:
https://moxie.org/blog/saudi-surveillance/
https://citizenlab.ca/2014/06/backdoor-hacking-teams-tradecraft-android-implant/


* German Rohde & Schwarz

This company produces, installs, and supports surveillance systems for 
intelligence agencies in regimes such as Turkmenistan:
https://www.rferl.org/a/german-tech-firm-s-turkmen-ties-trigger-surveillance-concerns/29759911.html

They sell solutions such as IMSI catchers and mass internet surveillance 
tools to intelligence agencies:
https://www.rohde-schwarz.com/en/solutions/aerospace-defense-security/overview/aerospace-defense-overview_229832.html


* US "Computer Sciences Corporation"

CSC is a US intelligence and defense contractor that does CNE (Computer 
Network Exploitation), as the WikiLeaks ICWatch archive shows.
Read the profile of a former CSC employee doing CNE, as Snowden was doing:

https://icwatch.wikileaks.org/docs/rLynnette-Jackson932c7871cb1e83f3%3Fsp=0ComputerSciencesCorporationCyberSecurityAnalystSystemsEngineerRemoteSystemAdministrator2008-09-01icwatch_indeed

Additionally, their Wikipedia page acknowledges work for US intelligence:

https://en.wikipedia.org/wiki/Computer_Sciences_Corporation

CSC provided services to the United States Department of Defense,[23] law 
enforcement and intelligence agencies (FBI,[24] CIA, Homeland Security[23]), 
aeronautics and aerospace agencies (NASA). In 2012, U.S. federal contracts 
accounted for 36% of CSC total revenue.[25]


* Australia's Attorney-General's Department

Australia's Attorney-General's Department is a government agency that wants to 
permit the Australian Security Intelligence Organisation (ASIO) to hack IT 
systems belonging to non-involved, non-targeted parties.

It operates against people's safety, and there is credible evidence of its 
behaviour in supporting ASIO to hack people, so it is very likely to abuse its 
intermediate CA:
http://www.h-online.com/security/news/item/Australian-secret-services-to-get-licence-to-hack-1784139.html


* US "National Geospatial-Intelligence Agency" https://www.nga.mil

The NGA is a US military intelligence agency, equivalent to the NSA but 
operating on space-based GEOINT and SIGINT, serving US intelligence and defense 
agencies.

NGA is the Space partner of NSA:

https://www.nsa.gov/news-features/press-room/Article/1635467/joint-document-highlights-nga-and-nsa-collaboration/

I think that no one would object to shutting down an NSA-operated intermediate 
CA; I am wondering if Mozilla would consider this removal.


That said, given the approach that has been followed with DarkMatter regarding 
the "credible evidence" and "people's safety" principles, I would strongly argue 
that Mozilla should take action against the subjects documented above.

I will open a thread on this newsgroup for each of those companies to understand 
what the due process is and how it will compare to this one.

Fabio Pietrosanti (naif)

On Friday, March 22, 2019 at 5:49:17 PM UTC+1, Nadim Kobeissi wrote:
> What a strange situation.
> 
> On the one hand, denying DarkMatter's CA bid because of these press
> articles would set the precedent of refusing to accept the engagement and
> apparent good faith of a member of the industry, based only on hearsay and
> with no evidence.
> 
> On the other hand, deciding to move forward with a good-faith, transparent
> and evidence-based approach actually risks creating a long-term undermining
> of public confidence in the CA inclusion process.
> 
> It really seems to me that both decisions would cause damage to the CA
> inclusion process. The former would make it seem discriminatory (and to
> some even somewhat xenophobic, although I don't necessarily agree with
> that) while the latter would cast a serious cloud of uncertainty above the
> safety of the CA root store in general that I have no idea how anyone could
> or will eventually dispel.
> 
> As a third party observer I genuinely don't know what could be considered a
> good move by Mozilla 

Re: DarkMatter Concerns

2019-07-10 Thread Nadim Kobeissi via dev-security-policy
Dear Nex,

I doubt that anyone seriously believes that "reporters are lying out of their 
teeth." It is far more likely that the reporters are working within the realm 
of reason and covering things as they see them. So far all the actors in this 
appear to be behaving in ways that make sense given their perspectives on the 
issue, which are wildly different.

I am pointing to the fact that the journalistic reporting on this matter has so 
far operated under a fundamentally different dimension of rigor than the one I 
would assume is necessary for making this sort of decision with regards to the 
Mozilla CA process. For example, Reuters can allow itself to publish an article 
that kicks off with the claim that Mozilla blocked the United Arab Emirates 
government (not DarkMatter!), from becoming an "Internet security guardian" or 
"Internet security gatekeeper", and that Mozilla claimed to have "credible 
evidence" for doing this. In the Reuters world, this isn't egregious because it 
still covers the gist of what's going on and communicates it abstractly to a 
mainstream global audience. It's not "lying" as much as it is lacking in rigor.

Similarly, the Intercept bills itself as an "adversarial journalism" outfit and 
has had a serious anti-surveillance and activist bent from day one. That's not 
at all a bad thing, and their work is important. But it's still the case that 
it doesn't meet the standard of objectivity and evidence that I would 
personally prefer to see mandated in such decisions.  

My contention is that, similarly as I wouldn't base my decision on which 
dentist to go to for a root canal on an article in People magazine, Mozilla 
shouldn't base these decisions on reporting from the New York Times or the 
Intercept. People magazine's profile of a brilliant dentist is likely a fair 
one all things considered, but it's still not how informed decisions should be 
made. Another example: I wouldn't expect the mayor of a village to decide to 
ban video games from being sold based on him reading in the town newspaper that 
they cause violence and addiction. Maybe they do, maybe they don't -- it's just 
that such decisions shouldn't be made based on that kind of source material.

I agree that not all of the sources on the DarkMatter story were anonymous and 
I was incorrect in implying that this was the case. But I still believe that it 
is in everyone's interest to, moving forward, improve our objective procedures 
such that they are applicable, relevant and sufficient, and to place more value 
on evidence. I personally hope that evidence shows up that proves every single 
one of the claims against DarkMatter true, just so that we can actually finally 
know for sure and leave this behind us once and for all!

I want to reiterate that I am not trying to defend DarkMatter here. My interest 
lies in trying to warn about a potential for decay in objective and correct 
procedure, especially when it comes to something this important. My contentions 
are likely to be unpopular with all sides: they don't excuse DarkMatter, they 
criticize a legitimately brilliant vanguard of Internet freedom (Mozilla), etc. 
etc. -- I'm sorry for having to make you all put up with this; I just genuinely 
think it's important to not dismiss these concerns and to keep them in mind for 
next time.

On Wednesday, July 10, 2019 at 9:45:07 AM UTC+2, Nex wrote:
> I think that dismissing as baseless investigations from 9 different
> reporters, on 3 different newspapers (add one more, FP, if you consider
> this[1]) is misleading. Additionally, it is just false to say all the
> articles only relied on anonymous sources (of which they have many, by
> the way), but there are clearly sources on record as well, such as
> Simone Margaritelli and Jonathan Cole for The Intercept, and Lori Stroud
> for Reuters.
> 
> While obviously there is no scientific metric for this, I do think the
> number of sources (anonymous and not) and the variety of reporters and
> of newspapers (with their respective editors and verification processes)
> do qualify the reporting as "credible" and "extensively sourced".
> 
> Additionally, details provided by sources on record directly matched
> attacks documented by technical researchers. For example, Lori Stroud
> gave details on the targeting of Donaghy, which was also proven in
> Citizen Lab's "Stealth Falcon" report. Lastly, Reuters reporters make
> repeated mentions of documents they had been able to review supporting
> the claims of their sources. Unless you have good reasons to believe
> reporters are just lying out of their teeth, I don't see how all of this
> can't be considered credible.
> 
> [1]
> https://foreignpolicy.com/2017/12/21/deep-pockets-deep-cover-the-uae-is-paying-ex-cia-officers-to-build-a-spy-empire-in-the-gulf/
> 
> On 7/9/19 6:09 PM, Nadim Kobeissi via dev-security-policy wrote:
> > Dear Wayne,
> > 
> > I fully respect Mozilla's mission and I fully believe that everyone here is
> > 

Re: DarkMatter Concerns

2019-07-10 Thread Ryan Sleevi via dev-security-policy
On Wed, Jul 10, 2019 at 12:29 PM fabio.pietrosanti--- via
dev-security-policy  wrote:

> Said that, given the approach that has been following with DarkMatter
> about "credible evidence" and "people safety" principles, i would strongly
> argue that Mozilla should take action against the subject previously
> documented.
>
> I will open a thread on those newsgroup for each of those company to
> understand what's the due process and how it will compare to this.
>

It sounds like you've not done the research to actually analyze which of
the listed organizations are similar in substance. For example: which of
these organizations is in control of the private key and/or the CP/CPS and
issuance?

This is a very basic and essential understanding to have, if proposing such
a discussion. For each of the organizations listed, my queries show that
the corresponding intermediate CAs are not controlled or operated by those
organizations, merely branded as such.

It is noteworthy, because this was similarly the case for DarkMatter;
QuoVadis controlled the private key, issuance, and core activities.
Transfer of control happened late 2017, which became publicly known
February 2018, although not formally disclosed as such for a non-trivial
amount of time after. The policies are in the process of being updated,
which will incidentally ensure such actions do not happen again.

However, without understanding the relevant audits or CP/CPS, this is not a
productive line of argument. If I've overlooked something with respect to
the specific audits mentioned, and you weren't just pulling names out of
certificates, please highlight the relevant audits.


Re: New intermediate certs and Audit Statements

2019-07-10 Thread Kathleen Wilson via dev-security-policy

On 7/9/19 3:17 PM, Ryan Sleevi wrote:
> On Tue, Jul 9, 2019 at 5:50 PM Kathleen Wilson via dev-security-policy wrote:
>> I propose that to handle this situation, the CA may enter the
>> subordinate CA's current audit statements and use the Public Comment
>> field to indicate that the new certificate will be included in the next
>> audit statements.



To support this, we have added the "Comments" column to these two reports:
https://ccadb-public.secure.force.com/mozilla/IntermediateCertsSeparateAudits
https://ccadb-public.secure.force.com/mozilla/IntermediateCertsSeparateAuditsCSV
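
(For illustration, a minimal sketch of how the new column could be checked 
programmatically against the CSV report; apart from the "Comments" header, the 
column layout and the use of Python's standard library are assumptions:)

    # Fetch the CCADB report and list intermediates whose "Comments" field is
    # non-empty, e.g. noting that the new certificate will be covered by the
    # next audit statements.
    import csv, io, urllib.request

    URL = ("https://ccadb-public.secure.force.com/mozilla/"
           "IntermediateCertsSeparateAuditsCSV")

    with urllib.request.urlopen(URL) as resp:
        text = resp.read().decode("utf-8", errors="replace")

    for row in csv.DictReader(io.StringIO(text)):
        comment = (row.get("Comments") or "").strip()
        if comment:
            print(comment)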



> Note that if the same policies do not apply to the new sub-CA, it has
> seemed uncontroversial that some form of new audit is required. Is that
> consistent with your understanding as well?



That is consistent with my understanding as well. I'll make a note to 
look into this in regards to enforcement in the CCADB, and the above 
listed reports should probably also be updated to show the CP/CPS data.


Thanks,
Kathleen


Re: DarkMatter Concerns

2019-07-10 Thread Nadim Kobeissi via dev-security-policy
I would like to support the statements made by both Fabio and Scott to the
extent that if Mozilla is to go forward with this decision, then I fully
expect them to review their existing CAs and to revoke onto OneCRL every
one of them that has some news report of blog post linking them to
nefarious activities without evidence. The examples given by Fabio (Saudi
Telecom, Australia's Attorney General Department, etc.) seem to have as
much "evidence" (if not more) than DarkMatter out there. Will they also be
revoked? And if not, why not? In fact, why didn't Mozilla itself bring this
up before Fabio and Scott chimed in?

As I predicted, we are now in a situation where DarkMatter can correctly,
and at length, chide Mozilla for a short-sighted and illegitimate
implementation of a critical process. It doesn't please me to be unable to
find any holes in Scott's email; on the contrary, it worries me. Because we
are now in a position where Mozilla can't defend its decision making
against an entity that may in the end still turn out to be involved in
aggressive surveillance and hacking behavior, despite the current lack of
evidence.

Nadim Kobeissi
Symbolic Software • https://symbolic.software
Sent from office


On Wed, Jul 10, 2019 at 6:43 PM Scott Rea  wrote:

> G’day Folks,
>
> DigitalTrust first learned of the Mozilla decision via Reuters. We believe
> this is emblematic of Mozilla’s approach to our application which appears
> to have been predetermined from the outset.
>
> We believe yesterday’s decision is unfair and demonstrates an anti-UAE
> bias where a 2016 media report referring to a single claimed event that
> aims to falsely implicate DarkMatter (and repeatedly echoed over a span of
> 4 years) has now outranked Mozilla’s established process of demonstrated
> technical compliance. This very same compliance has been met by
> DigitalTrust for three consecutive years with full transparency.
>
> The emerging principle here seems to be that 508 WebTrust audit controls
> are not sufficient to outweigh a single media allegation referring to work
> we as well as DarkMatter simply don’t do. In fact DarkMatter’s work is
> focused on the exact opposite of the false claim as evidenced by the
> continuous work to protect all internet users, for example through on-going
> disclosure of zero day vulnerabilities to the likes of Cisco, Sony, ABB and
> others.
>
> Mozilla’s new process, based on its own admission, is to ignore technical
> compliance and instead base its decisions on some yet to be disclosed
> subjective criterion which is applied selectively.  We think everybody in
> the Trust community should be alarmed by the fact that the new criterion
> for inclusion of a commercial CA now ignores any qualification of the CA or
> its ability to demonstrate compliant operations. We fear that in doing so
> Mozilla is abandoning its foundational principles of supporting safe and
> secure digital interactions for everyone on the internet.  This new process
> change seems conveniently timed to derail DigitalTrust’s application.
>
> By Mozilla’s own admission, DigitalTrust is being held to a new standard
> which seems to be associated with circular logic – a media bias based on a
> single claimed event that aims to falsely implicate DarkMatter is then used
> to inform Mozilla’s opinion, and the media seizes on this outcome to
> substantiate the very same bias it aimed to introduce in the first place.
> Additionally, in targeting DigitalTrust and in particular DarkMatter’s
> founder Faisal Al Bannai, on the pretense that two companies can’t operate
> independently if they have the same owner, we fear another dangerous
> precedent has been set.
>
> What’s at stake here is not only denial of the UAE’s Roots but also
> Mozilla’s denial of the UAE’s existing issuing CAs. This means the nation’s
> entire Public Trust customer base is now denied the same digital
> protections that everyone else enjoys.
>
> We fear that Mozilla’s action to apply this subjective process selectively
> to DigitalTrust effectively amounts to incremental tariffs on the internet
> with Mozilla de-facto promoting anti-competitive behavior in what was once
> a vaunted open Trust community.  Mozilla is now effectively forcing the UAE
> to protect its citizens by relying on another nation or commercial CA –
> despite DigitalTrust meeting all of Mozilla’s previously published criteria
> – thus protecting a select number of operators and excluding or forcing
> newcomers to pay a premium without the added benefit of control.
>
> In conclusion we see only two possible paths going forward.
>
> Under the first path, we demand that Mozilla’s new standard be explicitly
> disclosed and symmetrically applied to every other existing member of the
> Mozilla Trust Program, with immediate effect. This would cover, based on
> the precedent of the DigitalTrust case, any CA deemed to be a risk to the
> Trust community, despite lacking substantive evidence. This would suggest

Re: DarkMatter Concerns

2019-07-10 Thread Michael Casadevall via dev-security-policy
I appreciate the ground work Fabio put into this thus far, and want to
see further discussion on it.

I think the safest way to quantify and frame the discussion is to ask whether
a CA (or subCA) has a vested interest in surveillance, other business
interests, or government ties that would make it more likely to
abuse the trust, or has a history of business practices related to
surveillance or practices against the public interest with regard to the WebPKI.

I recognize the points Scott brought up, but trust is always a
subjective thing. As previously pointed out, Mozilla has always retained
the ability to choose what to include or disallow based on community
input, and this entire thread shows there is a lot of community input here.

The problem with auditing in general is that it's only going to catch
information that is logged and archived within a corporation. It's an
assurance step, but in and of itself it is not enough to establish trust; it
is not uncommon for misissuance and other issues to be noted by the community
from information in the wild.

Michael

On 7/10/19 3:59 AM, fabio.pietrosanti--- via dev-security-policy wrote:
> I understand the Nadim points, there's a lot of subjective biased "popular 
> judgement".
> 
> While from a security standpoint perspective "better safe than sorry" is a 
> good statement, from a rights and fairness perspective that's a very bad.
> 
> So further conversation is needed.
> 
> Following DarkMatter removal i would love to bring to the attention of 
> Mozilla the removal of a list of Companies that does as a main business other 
> stuff, but also does offensive security and surveillance with public 
> "credible evidences" .
> 
> I've analysed Intermediate CA list where DarkMatter is here 
> https://ccadb-public.secure.force.com/mozilla/PublicAllIntermediateCerts .
> 
> In this list is possible to find the following company operating against 
> "people's safety" and there's "credible evidences" they are doing so:
> 
> 
> * Saudi Telecom Company
> 
> This company is publicly known to ask to surveil and intercept people as per 
> "credible evidences" on:
> https://moxie.org/blog/saudi-surveillance/
> https://citizenlab.ca/2014/06/backdoor-hacking-teams-tradecraft-android-implant/
> 
> 
> * German Rohde & Schwarz
> 
> This company do produce, install and support surveillance systems for 
> intelligence agencies in Regimes such as Turkmenistan:
> https://www.rferl.org/a/german-tech-firm-s-turkmen-ties-trigger-surveillance-concerns/29759911.html
> 
> They sell solutions to intelligence agencies such as IMSI Catchers and 
> massive internet surveillance tools:
> https://www.rohde-schwarz.com/en/solutions/aerospace-defense-security/overview/aerospace-defense-overview_229832.html
> 
> 
> * US "Computer Sciences Corporation"
> 
> The CSC is a US Intelligence and Defense Contractors that does CNE (Computer 
> Network Exploitation) like the WikiLeaks ICWatch show out
> 
> Read the profile of a former employee of CSC, doing CNE like Snowden was 
> doing:
> https://icwatch.wikileaks.org/docs/rLynnette-Jackson932c7871cb1e83f3%3Fsp=0ComputerSciencesCorporationCyberSecurityAnalystSystemsEngineerRemoteSystemAdministrator2008-09-01icwatch_indeed
> 
> Additionally from their wikipedia they acknowledge working for US Intel:
> https://en.wikipedia.org/wiki/Computer_Sciences_Corporation
> 
> CSC provided services to the United States Department of Defense,[23] law 
> enforcement and intelligence agencies (FBI,[24] CIA, Homeland Security[23]), 
> aeronautics and aerospace agencies (NASA). In 2012, U.S. federal contracts 
> accounted for 36% of CSC total revenue.[25]
> 
> 
> * Australia's Attorney-General's Department
> 
> The Australia's Attorney-General's Department is a government agencies that 
> wants to permit the Australian Security Intelligence Organisation (ASIO) to 
> hack IT systems belonging to non-involved, non-targeted parties.
> 
> It operate against people safety and there's credible evidence of their 
> behaviour in supporting ASIO to hack people, so they are very likely to abuse 
> their intermediate CA:
> http://www.h-online.com/security/news/item/Australian-secret-services-to-get-licence-to-hack-1784139.html
> 
> 
> * US "National Geospatial-Intelligence Agency" https://www.nga.mil
> 
> The NGA is a US Military Intelligence Agency, equivalent to NSA, but 
> operating on space GEOINT and SIGINT in serving intelligence and defense US 
> agencies.
> 
> NGA is the Space partner of NSA:
> https://www.nsa.gov/news-features/press-room/Article/1635467/joint-document-highlights-nga-and-nsa-collaboration/
> 
> I think that no-one would object to shutdown an NSA operated Intermediate CA, 
> i am wondering if Mozilla would consider this removal.
> 
> 
> Said that, given the approach that has been following with DarkMatter about 
> "credible evidence" and "people safety" principles, i would strongly argue 
> that Mozilla should take action against the subject previously documented.
> 
> I will open a 

Re: DarkMatter Concerns

2019-07-10 Thread Nadim Kobeissi via dev-security-policy
Dear Ryan,

Thank you very much for pointing out that in the examples listed by Fabio,
none of them actually control the private key. I did not know this and
assumed that the opposite would be the case for at least some of the
entities listed.

I am indeed a new participant, and I have an infinitesimal amount of
experience in this specific topic compared to you, who does this for a
living as a guardian of one of the most important entities on the
Internet. But I did make an effort a few months ago, at the outset of this
discussion, to understand how the CA process works, and I do not believe
that citing the clause "Mozilla MAY, at its sole discretion, decide to
disable (partially or fully) or remove a certificate at any time and for
any reason" is a particularly insightful way in which to veer the
discussion back towards policy, except if the intent is to outline that the
policy does technically allow Mozilla to do whatever it wants, policy be
damned.

Indeed I would much rather focus on the rest of the elements in the Mozilla
Root Store Policy (
https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/)
which are less vapidly authoritarian than the single clause you quote, and
which focus more on a set of audits, confirmations and procedures that give
everyone a fair chance at proving the honesty of their role as a
certificate authority. For example, I find policy points 2.2 (Validation
Practices), 3.1.1 (Audit Criteria) and 3.1.4 (Public Audit Information) to
be much more of a fertile ground for future discussion.

Finally, I don't think anyone here has expressed interest in those "pay to
play" schemes, as you call them. Rather, my argument is that the continued
dismissal of auditing practices and transparent procedures, especially by
substituting them with newspaper reports that offer no evidence, is not a
good path to take for Mozilla. This is especially true when this dismissal
is largely cushioned with such elements as "you didn't see that Mozilla has
as clause that lets it do anything for any reason", "after 30 years of
experience, we decided that trust is subjective" and that it's
"unfortunate" to ask for due process as the main gatekeeper for what is
perhaps the most critical deliberative process for the safety of the world
wide web.

With my sincere appreciation for your continued engagement,

Nadim Kobeissi
Symbolic Software • https://symbolic.software
Sent from office


On Wed, Jul 10, 2019 at 7:33 PM Ryan Sleevi  wrote:

>
>
> On Wed, Jul 10, 2019 at 1:07 PM Nadim Kobeissi via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> I would like to support the statements made by both Fabio and Scott to the
>> extent that if Mozilla is to go forward with this decision, then I fully
>> expect them to review their existing CAs and to revoke onto OneCRL every
>> one of them that has some news report of blog post linking them to
>> nefarious activities without evidence. The examples given by Fabio (Saudi
>> Telecom, Australia's Attorney General Department, etc.) seem to have as
>> much "evidence" (if not more) than DarkMatter out there. Will they also be
>> revoked? And if not, why not? In fact, why didn't Mozilla itself bring
>> this
>> up before Fabio and Scott chimed in?
>>
>
> Hi Nadim,
>
> I realize you're a new participant in this Forum, and thus are not very
> familiar with PKI or how it works. As I responded, Fabio's remarks
> misunderstand both Mozilla Policy and how CAs work and operate, as well as
> audits and controls. I realize this may be confusing for new participants,
> and I hope my drawing attention to your confusion can help you learn more.
>
> Similarly, as a new participant, you probably aren't familiar with how
> root programs work, based on your replies. For example, Mozilla's policy
> has always contained a very explicit provision:
> Mozilla MAY, at its sole discretion, decide to disable (partially or
> fully) or remove a certificate at any time and for any reason.
>
> I realize you may be unhappy with that language, based on your replies,
> but it's important to recognize that Mozilla is tasked with, among other
> things, the safety and security of its users. However, as noted, it may
> remove them for any reason, even those without security requirements.
> Mozilla understandably strives to balance this in its mission, but I think
> it's important to recognize that it's a very clear policy which every CA
> trusted or applying to be trusted must acknowledge and agree with.
>
> It's also unfortunate that you seem to be looking for objective controls
> here. In the 30 years of PKI discussions, one of the key themes in both the
> legal and technical analysis is that trust is, functionally, a subjective
> thing. Audits are one mechanism to try to improve certainty, but they are
> not a substitute. The choice of audit schemes currently used - which rely
> on third-party audits with criteria developed by other organizations - is
> 

Re: Logotype extensions

2019-07-10 Thread Ryan Sleevi via dev-security-policy
On Wed, Jul 10, 2019 at 2:41 PM housley--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> People find logos very helpful.  That is why many browsers display a tiny
> logo in the toolbar.
>

Are you talking about the favicon? An attacker-controlled resource which should
not be used for trust, and which, in many browsers, is no longer displayed in
the toolbar, except for sites the user has visited?


> I would suggest that a better way forward is to start the hard work on the
> validation process.  It will not be difficult for that to become more
> robust and accessible than the logos in the toolbar.
>

You're right, it would be fairly easy, because there is no such validation,
nor is there need for it. Thus, it seems any suggestion at validation is
significantly less robust, accessible, or useful.


Re: DarkMatter Concerns

2019-07-10 Thread Ryan Sleevi via dev-security-policy
On Wed, Jul 10, 2019 at 3:17 PM Nadim Kobeissi 
wrote:

> Many times in this discussion, we have all been offered a choice between
> two paths. The first path would be to examine difficult problems and
> shortcomings together and attempting to present incremental--often
> onerous--improvements. The second path would be to just say that someone
> should trust us based on years of subjective experience. In many, many
> cases, the latter really is a wise thing to say and a correct thing to say
> (and I truly mean this); it offers a path through which judicious decisions
> are often made. Furthermore, it is often a necessary path to take when time
> is of the essence. But it is seldom the rigorous path to take, seldom the
> path that serves future engineers and practitioners in the field, and
> seldom the path that gives institutions the foundation and the standing
> that they will need in the decades to come.
>

Hi Nadim,

There's a phrase to capture the essence of what you propose doing. It is
that the perfect is the enemy of the good. Wikipedia even helpfully
contains a useful quote in the context of Robert Watson-Watt.

It is important that, while these flaws are recognized and being worked on,
there is still a duty of care and community responsibility. There's clearly
a school of thinking, which you appear to be advocating, that the best
solution when something is less than perfect is to not do it at all, since
doing nothing is the only 'fair' choice. Perhaps that's not your intent,
but I want to highlight, you've repeatedly admonished the folks who have
spent years into understanding and improving the ecosystem that they're not
doing enough, or that it isn't rigorous enough.

By way of analogy, which is admittedly a poor way to argue, it would be
akin to someone arguing that out-of-bounds writes should not be fixed,
because fixing OOB writes is not rigorous, and instead the code should be
rewritten in Rust. While it's certainly true that rewriting in Rust is
likely to improve things, that's a bit of what we in the industry term a
"long term" effort. In the mean time, as pragmatic professionals who care
about security, long-term participants on this list are approaching both
pragmatic and long-term solutions.

There's not much I can say about the claimed lack of rigor. It appears that
you were not familiar with long-standing policies or discussions, the means
of approaching both the short-term risks and the long-term, the efforts to
ensure consistency and reliability, and the acknowledged near-term gaps
that necessitate a pragmatic approach. It's a bit like arguing that, since
you have an OOB Write, the best path to take is to either do nothing to fix
it, and in fact continue writing more code in unsafe languages, or do
nothing until you replace it all. Neither, of course, are paths of rigor,
and neither are paths that serve future engineers and practitioners in the
field, nor do they give foundation and standing to the trust and safety of
users.

A different parallel to take would be that ignoring these well-known,
well-documented limits to understanding would be a bit like ignoring the
well-known, well-documented limits of JavaScript cryptography, and
attempting to write a chat application in Javascript and promoting it for
human rights or dissident safety. While it's certainly a path one could
take, it's hardly a responsible one, just like it would be irresponsible to
ignore both the limits of audits and the fundamental role in subjective,
but thoughtful, evaluation of the risks comparative to the benefits.

[1] https://en.wikipedia.org/wiki/Perfect_is_the_enemy_of_good


Re: DarkMatter Concerns

2019-07-10 Thread Cynthia Revström via dev-security-policy
Hi Scott,
Below is my personal view on it, I acknowledge that it is highly subjective.

For one, people and companies in the UAE could get certs from non-UAE CAs.
I live in Sweden, yet I have certs from Norwegian, British, and American
CAs.

Another issue I have is that I think there is a difference between a
government such as the US, UK, etc. and the UAE, due to the UAE not being an
electoral democracy and not really having much transparency, from what I
know.

As I have said before, if Mozilla and the community consider it a risk, it
may not be worth it.
It would be different in a more isolated industry, but in the "CA
Industry", one CA's mistake will be felt by the entire world.

Now, my personal view is that we shouldn't really have CAs that are
connected with non-democratic governments (such as China Financial
Certification Authority) at all.

- Cynthia

On Wed, Jul 10, 2019 at 6:43 PM Scott Rea via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> G’day Folks,
>
> DigitalTrust first learned of the Mozilla decision via Reuters. We believe
> this is emblematic of Mozilla’s approach to our application which appears
> to have been predetermined from the outset.
>
> We believe yesterday’s decision is unfair and demonstrates an anti-UAE
> bias where a 2016 media report referring to a single claimed event that
> aims to falsely implicate DarkMatter (and repeatedly echoed over a span of
> 4 years) has now outranked Mozilla’s established process of demonstrated
> technical compliance. This very same compliance has been met by
> DigitalTrust for three consecutive years with full transparency.
>
> The emerging principle here seems to be that 508 WebTrust audit controls
> are not sufficient to outweigh a single media allegation referring to work
> we as well as DarkMatter simply don’t do. In fact DarkMatter’s work is
> focused on the exact opposite of the false claim as evidenced by the
> continuous work to protect all internet users, for example through on-going
> disclosure of zero day vulnerabilities to the likes of Cisco, Sony, ABB and
> others.
>
> Mozilla’s new process, based on its own admission, is to ignore technical
> compliance and instead base its decisions on some yet to be disclosed
> subjective criterion which is applied selectively.  We think everybody in
> the Trust community should be alarmed by the fact that the new criterion
> for inclusion of a commercial CA now ignores any qualification of the CA or
> its ability to demonstrate compliant operations. We fear that in doing so
> Mozilla is abandoning its foundational principles of supporting safe and
> secure digital interactions for everyone on the internet.  This new process
> change seems conveniently timed to derail DigitalTrust’s application.
>
> By Mozilla’s own admission, DigitalTrust is being held to a new standard
> which seems to be associated with circular logic – a media bias based on a
> single claimed event that aims to falsely implicate DarkMatter is then used
> to inform Mozilla’s opinion, and the media seizes on this outcome to
> substantiate the very same bias it aimed to introduce in the first place.
> Additionally, in targeting DigitalTrust and in particular DarkMatter’s
> founder Faisal Al Bannai, on the pretense that two companies can’t operate
> independently if they have the same owner, we fear another dangerous
> precedent has been set.
>
> What’s at stake here is not only denial of the UAE’s Roots but also
> Mozilla’s denial of the UAE’s existing issuing CAs. This means the nation’s
> entire Public Trust customer base is now denied the same digital
> protections that everyone else enjoys.
>
> We fear that Mozilla’s action to apply this subjective process selectively
> to DigitalTrust effectively amounts to incremental tariffs on the internet
> with Mozilla de-facto promoting anti-competitive behavior in what was once
> a vaunted open Trust community.  Mozilla is now effectively forcing the UAE
> to protect its citizens by relying on another nation or commercial CA –
> despite DigitalTrust meeting all of Mozilla’s previously published criteria
> – thus protecting a select number of operators and excluding or forcing
> newcomers to pay a premium without the added benefit of control.
>
> In conclusion we see only two possible paths going forward.
>
> Under the first path, we demand that Mozilla’s new standard be explicitly
> disclosed and symmetrically applied to every other existing member of the
> Mozilla Trust Program, with immediate effect. This would cover, based on
> the precedent of the DigitalTrust case, any CA deemed to be a risk to the
> Trust community, despite lacking substantive evidence. This would suggest
> that any CA that serves a national function, is working closely with
> governments to secure the internet for its citizens, or is associated with
> other practices covering cyber security capabilities (which would include a
> large group of countries and companies) would have to 

Re: DarkMatter Concerns

2019-07-10 Thread Ryan Sleevi via dev-security-policy
On Wed, Jul 10, 2019 at 1:07 PM Nadim Kobeissi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I would like to support the statements made by both Fabio and Scott to the
> extent that if Mozilla is to go forward with this decision, then I fully
> expect them to review their existing CAs and to revoke onto OneCRL every
> one of them that has some news report of blog post linking them to
> nefarious activities without evidence. The examples given by Fabio (Saudi
> Telecom, Australia's Attorney General Department, etc.) seem to have as
> much "evidence" (if not more) than DarkMatter out there. Will they also be
> revoked? And if not, why not? In fact, why didn't Mozilla itself bring this
> up before Fabio and Scott chimed in?
>

Hi Nadim,

I realize you're a new participant in this Forum, and thus are not very
familiar with PKI or how it works. As I responded, Fabio's remarks
misunderstand both Mozilla Policy and how CAs work and operate, as well as
audits and controls. I realize this may be confusing for new participants,
and I hope my drawing attention to your confusion can help you learn more.

Similarly, as a new participant, you probably aren't familiar with how root
programs work, based on your replies. For example, Mozilla's policy has
always contained a very explicit provision:
Mozilla MAY, at its sole discretion, decide to disable (partially or fully)
or remove a certificate at any time and for any reason.

I realize you may be unhappy with that language, based on your replies, but
it's important to recognize that Mozilla is tasked with, among other
things, the safety and security of its users. However, as noted, it may
remove them for any reason, even those without security requirements.
Mozilla understandably strives to balance this in its mission, but I think
it's important to recognize that it's a very clear policy which every CA
trusted or applying to be trusted must acknowledge and agree with.

It's also unfortunate that you seem to be looking for objective controls
here. In the 30 years of PKI discussions, one of the key themes in both the
legal and technical analysis is that trust is, functionally, a subjective
thing. Audits are one mechanism to try to improve certainty, but they are
not a substitute. The choice of audit schemes currently used - which rely
on third-party audits with criteria developed by other organizations - is
similarly lacking in suitability, if that's the position to take.
Alternative schemes, which have been or are practiced by other root
programs, includes charging CAs that wish to apply, and using that to fund
efforts for the development and analysis of organizations. However, that
sort of "pay for play" scheme, as some perceive it, runs the risk of
further encouraging those with deep pockets to pursue bad behaviour.

If you're looking to understand a bit more about the basics of PKI, which
seems a good opportunity given the challenges you're struggling with on the
discussion, perhaps you'd like to examine how the Mozilla Policy developed
[1]. You can note the issues [2] at the time with audits. Indeed, some of
the earlier messages on this thread include good primers that potential
participants should be familiar with, in order to ensure their
contributions are most useful and informed.

[1] http://hecker.org/mozilla/ca-certificate-metapolicy
[2] http://hecker.org/mozilla/cert-policy-submitted


Re: DarkMatter Concerns

2019-07-10 Thread Ryan Sleevi via dev-security-policy
On Wed, Jul 10, 2019 at 2:15 PM Nadim Kobeissi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Indeed I would much rather focus on the rest of the elements in the Mozilla
> Root Store Policy (
>
> https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/
> )
> which are less vapidly authoritarian than the single clause you quote, and
> which focus more on a set of audits, confirmations and procedures that give
> everyone a fair chance at proving the honesty of their role as a
> certificate authority. For example, I find policy points 2.2 (Validation
> Practices), 3.1.1 (Audit Criteria) and 3.1.4 (Public Audit Information) to
> be much more of a fertile ground for future discussion.
>

I appreciate that attempt to focus. However, it does again fundamentally
misunderstand things in ways that are critical in demonstrating why this
discussion is not productive or fruitful, and your suggestions are quite
misguided.

For example, judging by your replies, it seems you may not understand
audits, what they are, or how they work.

During an audit, someone who agrees to a voluntary set of professional
standards, such as a Chartered Public Accountant, agrees to perform an
audit using a specific set of Principles and Criteria. The Principles are
very broad - for example, the three principles are "CA Business Practices
Disclosure", "Service Integrity", and "CA Environmental Controls". These
don't tell you very much at all, so then there are individual Criteria.

However, the Criteria are very broad: for example: "The CA maintains
controls to provide reasonable assurance that its Certification Practice
Statement (CPS) management processes are effective."

Now, you may not realize, but "reasonable assurance" and "effective" are
not layman's terms, but refer to specific procedures that vary by country
and professional standards (e.g. AICPA standards like the AT series or CPA
Canada standards like CSAE).

During the process of an audit, the auditor's role is primarily to look at
things and say "Yeah, that sounds right". It is not, for example,
adversarial, looking for counterfactuals. It does not, for example, include
specific steps the auditor must perform; those steps are merely
illustrative. A CA may actually significantly fail in its management
processes, but the auditor might determine that, even despite those
failures, the assurance provided was still "reasonable" so as to be
effective.

The process starts with the auditor assuming they're doing nothing, and the
CA showing positive evidence that supports each thing. Negative evidence
can and is overlooked, if there are other positive controls to be used.
Mozilla, in collaboration with Google and others, has been working for
years to address this gap, but I believe it's reasonable to say that the
existence of an audit is by no means a positive sign for a CA; it merely
serves as a filtering function for those too bad to operate, and even then,
only barely.

You might expect that the auditor have skill and familiarity with PKI.
However, that's a very subjective measurement itself. The WebTrust
licensing body may or may not perform an examination as to the skills of
the auditor. Like some participants here, the auditor might feel they're
skilled in PKI and have a well-formed opinion, based solely on reading
m.d.s.p. and thinking they understand stuff. It's a very wild-west.

It's important to note that, at the end of this process, in which the
auditor has been shown all this evidence, they make a subjective
determination about whether or not they think it was "reasonable". Auditors
are not required to reach the same conclusion, and indeed, professional
standards discourage auditors from "checking each other's work". Unskilled
auditors, of which there are many, are indistinguishable from skilled
auditors. In all cases, their fiduciary relationship is with the CA, and
thus they are bound by confidentiality and professionally prohibited from
disclosing knowledge of adverse events, such as the CA "misleading" the
public, provided that the CA made sure to exclude such things from their
scope of the engagement.

I mention all of this, because it seems you have a mistaken belief that PKI
rests on objective criteria. It does not, nor has it ever. It has simply
been "someone" (in this case, chosen by the CA) offering their opinion on
whether it's likely that the CA will end up doing what they said they would
do. It does not measure whether what the CA says they'll do is what people
trusting the CA may expect. It does not permit the auditor to disclose
deception. And, at the end of the day, it's merely the auditor's
"professional judgement", with a whole host of disclaimers so that
they're not personally responsible, should someone rely on that judgement.

Perhaps, if you've read this far, you've come to realize that the thing
you're taking unnecessary and unfounded umbrage over, which is the
'subjectivity' based on 'reasonable 

Re: Logotype extensions

2019-07-10 Thread housley--- via dev-security-policy
On Friday, July 5, 2019 at 7:53:45 PM UTC-4, Wayne Thayer wrote:
> Based on this discussion, I propose adding the following statement to the
> Mozilla Forbidden Practices wiki page [1]:
> 
> ** Logotype Extension **
> Due to the risk of misleading Relying Parties and the lack of defined
> validation standards for information contained in this field, as discussed
> here [2], CAs MUST NOT include the RFC 3709 Logotype extension in CA or
> Subscriber certificates.
> 
> Please respond if you have concerns with this change. As suggested in this
> thread, we can discuss removing this restriction if/when a robust
> validation process emerges.
> 
> - Wayne
> 
> [1] https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices
> [2]
> https://groups.google.com/d/msg/mozilla.dev.security.policy/nZoK5akw2c8/ZtF0WZY8AgAJ

People find logos very helpful.  That is why many browsers display a tiny logo 
in the toolbar.

I would suggest that a better way forward is to start the hard work on the 
validation process.  It will not be difficult for that to become more robust 
and accessible than the logos in the toolbar.

Russ
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Logotype extensions

2019-07-10 Thread Phillip Hallam-Baker via dev-security-policy
On Wed, Jul 10, 2019 at 4:54 PM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Russ,
>
> >
> Perhaps one of us is confused because I think we're saying the same thing -
> that  rules around inclusion of Logotype extensions in publicly-trusted
> certs should be in place before CAs begin to use this extension.
>

I don't see how your proposed ban on logotypes is consistent. What that
would do is set up a situation in which it was impossible for CABForum to
develop rules for logotypes because one of the browsers had already banned
their use.

A better way to state the requirement is that CAs should only issue
logotypes after CABForum has agreed validation criteria. But I think that
would be a mistake at this point because we probably want to have
experience of running the issue process before we actually try to
standardize it.

I can't see Web browsing being the first place people are going to use
logotypes. I think they are going to be most useful in other applications.
And we actually have rather a lot of those appearing right now. But they
are Applets consisting of a thin layer on top of a browser and the logotype
stuff is relevant to the thin layer rather than the substrate.


For example, I have lots of gadgets in my house. Right now, every different
vendor who does an IoT device has to write their own app and run their own
service. And the managers are really happy with that at the moment because
they see it as all upside.

I think they will soon discover that most devices that are being connected
to the Internet aren't actually very useful if the only thing they connect to is a
manufacturer site and those start to cost money to run. So I think we will
end up with an open interconnect approach to IoT in the end regardless of
what a bunch of marketing VPs think should happen. Razor and blades models
are really profitable but they are also vanishingly rare because the number
2 and 3 companies have an easy way to enter the market by opening up.

Authenticating those devices to the users who bought them, authenticating
the code updates. Those are areas where the logotypes can be really useful.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Logotype extensions

2019-07-10 Thread Phillip Hallam-Baker via dev-security-policy
On Wed, Jul 10, 2019 at 6:11 PM Wayne Thayer  wrote:

> On Wed, Jul 10, 2019 at 2:31 PM Phillip Hallam-Baker <
> ph...@hallambaker.com> wrote:
>
>> On Wed, Jul 10, 2019 at 4:54 PM Wayne Thayer via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>> Russ,
>>>
>>> >
>>> Perhaps one of us is confused because I think we're saying the same
>>> thing -
>>> that  rules around inclusion of Logotype extensions in publicly-trusted
>>> certs should be in place before CAs begin to use this extension.
>>>
>>
>> I don't see how your proposed ban on logotypes is consistent. What that
>> would do is set up a situation in which it was impossible for CABForum to
>> develop rules for logotypes because one of the browsers had already banned
>> their use.
>>
>>
> How exactly does a Browser banning the use of an extension prevent the CAB
> Forum from developing rules to govern the use of said extension? If
> anything, it would seem to encourage the CAB Forum to take on that work.
> Also, as has been discussed, it is quite reasonable to argue that the
> inclusion of this extension is already forbidden in a BR-compliant
> certificate.
>

Because then the Mozilla ban will be used to prevent any work on logotypes
in CABForum and the lack of CABForum rules will be used as pretext for not
removing the ban.

I have been doing standards for 30 years. You know this is exactly how that
game always plays out.

If you don't want to use the extension, that is fine. But if you attempt to
prohibit anything, run it by your lawyers first and ask them how it is not
a restriction on trade.

It is one thing for CABForum to make that requirement, quite another for
Mozilla to use its considerable market power to prevent other browser
providers from making use of LogoTypes.




> A better way to state the requirement is that CAs should only issue
>> logotypes after CABForum has agreed validation criteria. But I think that
>> would be a mistake at this point because we probably want to have
>> experience of running the issue process before we actually try to
>> standardize it.
>>
>>
> I would be amenable to adding language that permits the use of the
> Logotype extension after the CAB Forum has adopted rules governing its use.
> I don't see that as a material change to my proposal because, either way,
> we have the option to change Mozilla's position based on our assessment of
> the rules established by the CAB Forum, as documented in policy section 2.3
> "Baseline Requirements Conformance".
>
> I do not believe that changing the "MUST NOT" to "SHOULD NOT" reflects the
> consensus reached in this thread.
>
> I also do not believe that publicly-trusted certificates are the safe and
> prudent vehicle for "running the issue process before we actually try to
> standardize it".
>

You are free to ignore any information in a certificate. But if you attempt
to limit information in the certificate that you do not intend to use in
your product, you are arguably crossing the line.




> I can't see Web browsing being the first place people are going to use
>> logotypes. I think they are going to be most useful in other applications.
>> And we actually have rather a lot of those appearing right now. But they
>> are Applets consisting of a thin layer on top of a browser and the logotype
>> stuff is relevant to the thin layer rather than the substrate
>>
>
> If the use case isn't server auth or email protection, then publicly
> trusted certificates shouldn't be used. Full stop. How many times do we
> need to learn that lesson?
>

That appears to be an even more problematic statement. There have always
been more stakeholders than just the browser providers on the relying
applications side.

Those applets are competing with your product. Again, talk to your legal
people. If you use your market power to limit the functionalities that your
competitors can offer, you are going to have real problems.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DarkMatter Concerns

2019-07-10 Thread Nadim Kobeissi via dev-security-policy
Dear Ryan,

Thanks very much for this very insightful email. There really is a lot that
I and others don't know about how these decisions are made.

The silver lining here is that we agree on where some of the gaps are in
this process, and that Mozilla, Google and others are working on filling in
these gaps, as you say. I would argue that the existence of so many
conflicts of interest, intricacies and complexities between the multiple
stakeholders in such decisions makes it more urgent to fill in these gaps
quickly and completely.

If the existing documentation is insufficient to provide a full account of
the intricacies of this process, then it stands to reason that it should be
improved. If a certain terminology is too broad, it stands to reason that
it can be made less broad. If layman's terms are deployed for non-layman
concepts, it stands to reason that this should be modified and its
underlying concept elucidated. If incompetent auditors cannot be
differentiated from competent auditors, it stands to reason that this can
be addressed. If areas exist where conflicts of interest are likely, it
stands to reason that policies can be expanded to prevent those conflicts
of interest from occurring.

So long as we can continue to point to specific problems and shortcomings,
which you do masterfully and to great public service in your email, it will
always stand to reason that we can improve our policies such that the gaps
are filled. And again, it's wonderful that Mozilla, Google etc. are working
on this.

Many times in this discussion, we have all been offered a choice between
two paths. The first path would be to examine difficult problems and
shortcomings together and attempt to present incremental--often
onerous--improvements. The second path would be to just say that someone
should trust us based on years of subjective experience. In many, many
cases, the latter really is a wise thing to say and a correct thing to say
(and I truly mean this); it offers a path through which judicious decisions
are often made. Furthermore, it is often a necessary path to take when time
is of the essence. But it is seldom the rigorous path to take, seldom the
path that serves future engineers and practitioners in the field, and
seldom the path that gives institutions the foundation and the standing
that they will need in the decades to come.

I sincerely appreciate the formidable passion with which you argue for your
positions, and am glad that someone like you holds the responsibility that
you do.

Thank you,

Nadim Kobeissi
Symbolic Software • https://symbolic.software
Sent from office


On Wed, Jul 10, 2019 at 8:42 PM Ryan Sleevi  wrote:

>
>
> On Wed, Jul 10, 2019 at 2:15 PM Nadim Kobeissi via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> Indeed I would much rather focus on the rest of the elements in the
>> Mozilla
>> Root Store Policy (
>>
>> https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/
>> )
>> which are less vapidly authoritarian than the single clause you quote, and
>> which focus more on a set of audits, confirmations and procedures that
>> give
>> everyone a fair chance at proving the honesty of their role as a
>> certificate authority. For example, I find policy points 2.2 (Validation
>> Practices), 3.1.1 (Audit Criteria) and 3.1.4 (Public Audit Information) to
>> be much more of a fertile ground for future discussion.
>>
>
> I appreciate that attempt to focus. However, it does again fundamentally
> misunderstand things in ways that are critical in demonstrating why this
> discussion is not productive or fruitful, and your suggestions are quite
> misguided.
>
> For example, judging by your replies, it seems you may not understand
> audits, what they are, or how they work.
>
> During an audit, someone who agrees to a voluntary set of professional
> standards, such as a Chartered Public Accountant, agrees to perform an
> audit using a specific set of Principles and Criteria. The Principles are
> very broad - for example, the three principles are "CA Business Practices
> Disclosure", "Service Integrity", and "CA Environmental Controls". These
> don't tell you very much at all, so then there are individual Criteria.
>
> However, the Criteria are very broad: for example: "The CA maintains
> controls to provide reasonable assurance that its Certification Practice
> Statement (CPS) management processes are effective."
>
> Now, you may not realize, but "reasonable assurance" and "effective" are
> not layman's terms, but refer to specific procedures that vary by country
> and professional standards (e.g. AICPA standards like the AT series or CPA
> Canada standards like CSAE)
>
> During the process of an audit, the auditors role is primarily to look at
> things and say "Yeah, that sounds right". It is not, for example,
> adversarial and look for counterfactuals. It does not, for example, include
> specific steps the auditor 

Re: DarkMatter Concerns

2019-07-10 Thread Nadim Kobeissi via dev-security-policy
Dear Ryan,

In outlining the two paths that I presented at the end of my previous
email, I made sure to illustrate the choice between them as one that comes
repeatedly -- a conscious choice that every time produces a small,
incremental improvement, often through a tiresome and onerous process.
Indeed I was trying to support slow, grinding iterations towards the better
-- and that's not at all something that sounds to me like holding out for
only the perfect solution. And indeed I supported the subjective and
deliberative path as often necessary and wise when time is of the essence.
I find it very surprising that you seem to believe that I was arguing for
perfection -- quite the opposite, in fact.

I do still believe that when we fall back to relying on mainstream news
articles, with no obvious fallback in procedure, then it's reasonable for
people like me to chime in and wonder about a potential lack of rigor.
Every potential participant in this thread comes with their own
misconceptions and lack of information, and I'm no exception, but I still
find that my original source of concern holds. My impression at least is
that it's produced a worthwhile and valuable discussion for everyone (in no
small part thanks to your own time and effort). And of course, I don't mean
to admonish anyone here with the points of discussion that I've raised, and
I would certainly like to think that nobody feels admonished by anyone else
so far in this thread.

I am very glad that others are working slowly on the long term effort for
better policy. I think these issues are fundamental to the Internet's
safety and hope that I'll be able to help out more one day in whatever way
I can volunteer.

Thank you,

Nadim Kobeissi
Symbolic Software • https://symbolic.software
Sent from office


On Wed, Jul 10, 2019 at 9:59 PM Ryan Sleevi  wrote:

>
>
> On Wed, Jul 10, 2019 at 3:17 PM Nadim Kobeissi 
> wrote:
>
>> Many times in this discussion, we have all been offered a choice between
>> two paths. The first path would be to examine difficult problems and
>> shortcomings together and attempting to present incremental--often
>> onerous--improvements. The second path would be to just say that someone
>> should trust us based on years of subjective experience. In many, many
>> cases, the latter really is a wise thing to say and a correct thing to say
>> (and I truly mean this); it offers a path through which judicious decisions
>> are often made. Furthermore, it is often a necessary path to take when time
>> is of the essence. But it is seldom the rigorous path to take, seldom the
>> path that serves future engineers and practitioners in the field, and
>> seldom the path that gives institutions the foundation and the standing
>> that they will need in the decades to come.
>>
>
> Hi Nadim,
>
> There's a phrase to capture the essence of what you propose doing. It is
> that the perfect is the enemy of the good. Wikipedia even helpfully
> contains a useful quote in the context of Robert Watson-Watt.
>
> It is important that, while these flaws are recognized and being worked
> on, there is still a duty of care and community responsibility. There's
> clearly a school of thinking, which you appear to be advocating, that the
> best solution when something is less than perfect is to not do it at all,
> since doing nothing is the only 'fair' choice. Perhaps that's not your
> intent, but I want to highlight, you've repeatedly admonished the folks who
> have spent years into understanding and improving the ecosystem that
> they're not doing enough, or that it isn't rigorous enough.
>
> By way of analogy, which is admittedly a poor way to argue, it would be
> akin to someone arguing that out-of-bounds writes should not be fixed,
> because fixing OOB writes is not rigorous, and instead it should be
> rewritten in Rust. While it's certainly true that rewriting in Rust is
> likely to improve things, that's a bit of what we in the industry term a
> "long term" effort. In the mean time, as pragmatic professionals who care
> about security, long-term participants on this list are approaching both
> pragmatic and long-term solutions.
>
> There's not much I can say about the claimed lack of rigor. It appears
> that you were not familiar with long-standing policies or discussions, the
> means of approaching both the short-term risks and the long-term, the
> efforts to ensure consistency and reliability, and the acknowledged
> near-term gaps that necessitate a pragmatic approach. It's a bit like
> arguing that, since you have an OOB Write, the best path to take is to
> either do nothing to fix it, and in fact continue writing more code in
> unsafe languages, or do nothing until you replace it all. Neither, of
> course, are paths of rigor, and neither are paths that serve future
> engineers and practitioners in the field, nor do they give foundation and
> standing to the trust and safety of users.
>
> A different parallel to take would be that ignoring these 

Re: Logotype extensions

2019-07-10 Thread Phillip Hallam-Baker via dev-security-policy
On Wed, Jul 10, 2019 at 2:41 PM housley--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Friday, July 5, 2019 at 7:53:45 PM UTC-4, Wayne Thayer wrote:
> > Based on this discussion, I propose adding the following statement to the
> > Mozilla Forbidden Practices wiki page [1]:
> >
> > ** Logotype Extension **
> > Due to the risk of misleading Relying Parties and the lack of defined
> > validation standards for information contained in this field, as
> discussed
> > here [2], CAs MUST NOT include the RFC 3709 Logotype extension in CA or
> > Subscriber certificates.
> >
> > Please respond if you have concerns with this change. As suggested in
> this
> > thread, we can discuss removing this restriction if/when a robust
> > validation process emerges.
> >
> > - Wayne
> >
> > [1] https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices
> > [2]
> >
> https://groups.google.com/d/msg/mozilla.dev.security.policy/nZoK5akw2c8/ZtF0WZY8AgAJ
>
> People find logos very helpful.  That is why many browsers display a tiny
> logo in the toolbar.
>
> I would suggest that a better way forward is to start the hard work on the
> validation process.  It will not be difficult for that to become more
> robust and accessible than the logos in the toolbar.
>

[I am not currently employed by a CA. Venture Cryptography does not operate
one or plan to.]

I agree with Russ.

The Logotype extension has technical controls to protect the integrity of
the referenced image by means of a digest value.
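
(To illustrate that control: assuming the logotype URI, hash algorithm, and
digest value have already been decoded from the RFC 3709 LogotypeDetails
structure, the check a relying application performs is roughly the
following Python sketch.)

    import hashlib

    def logotype_image_matches(image_bytes, expected_digest_hex, hash_name):
        # Recompute the digest of the fetched image and compare it against
        # the value carried in the certificate's logotype extension.
        digest = hashlib.new(hash_name, image_bytes).hexdigest()
        return digest == expected_digest_hex.lower()

    # Hypothetical usage; the URI, algorithm and digest all come from the
    # decoded extension, not from the server being displayed:
    #   image = urllib.request.urlopen(logotype_uri).read()
    #   if not logotype_image_matches(image, digest_from_extension, "sha1"):
    #       ...  # refuse to display the logo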

I do find the discussion of the usability factors rather odd when I am
looking at my browser tabs decorated with completely unauthenticated
favicons. Why is it that browser providers have no problem putting that
information in front of users?

If Mozilla or Chrome or the like don't see the value of using the logotype
information, don't. But if you were to attempt to prevent others making use
of this capability, that looks a lot like anti-Trust to me.

The validation scheme I proposed when we discussed this some years back was
to build on the Madrid Treaty for registration of trademarks. International
business is already having to deal with the issue of logos being used in
multiple jurisdiction. It is a complex, difficult problem but one that the
international system is very much aware of and working to address. They
will take time but we can leave the hard issues to them.

I see multiple separate security levels here:

1) Anyone can create a Web page that appears to look like Ethel's Bank

2) Ethel's Bank Carolina and Ethel's Bank Spain both have trademarks in
their home countries and can register credentials showing they are Ethel's
Bank.

3) When someone goes to Ethel's Bank online they are assured that it is the
canonical Ethel's Bank and no other.

There are obvious practical problems that make (3) unreachable. Not least
the fact that one of the chief reasons that trademarks are often fractured
geographically is that they were once part of a single business that split.
Cadbury's chocolate sold in the US is made by a different company to that
sold in the UK which is why some people import the genuine article at
significant expense.

But the security value lies in moving from level 1 to level 2. Stopping a
few million Internet thieves easily setting up fake web sites that look like
Ethel's Bank is the important task. The issue of which Ethel's Bank is the
real one is something they can sort out (expensively) between themselves,
20 paces with loaded lawyers.


For the record, I am not sure that we can graft logotypes onto the current
Web browser model as it stands. I agree with many of Ryan's criticisms, but
not his conclusions. Our job is to make the Internet safe for the users. I
am looking at using logotypes but in a very different interaction model.
The Mesh does have a role for CAs but it is a very different role.

I will be explaining that model elsewhere. But the basic idea here is that
the proper role of the CA is primarily as an introducer. One of the reasons
the Web model is fragile today is that every transaction is essentially
independent as far as the client is concerned. The server has cookies that
link the communications together but the client starts from scratch each
time.

So imagine that I have a Bookmarks catalog that I keep my financial service
providers in and this acts as a local name provider for all of my Internet
technology. When I add Ethel's bank to my Bookmarks catalog, I see the
Ethel's bank logo as part of that interaction. A nice big fat logo, not a
small one. And I give it my pet name 'Ethel'. And when I tell Siri, or
Alexa or Cortana, 'call ethel', it calls Ethel's Bank for me. Or if I type
'Ethel' into a toolbar, that is the priority.
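
(A toy sketch of what one entry in such a catalog might look like; every
field name here is invented purely for illustration and is not part of any
existing Mesh or WebPKI structure.)

    # Hypothetical local petname catalog, populated once at "add bookmark"
    # time, after the user has seen and accepted the big fat logo.
    bookmarks = {
        "ethel": {
            "canonical_name": "Ethel's Bank",
            "url": "https://www.ethelsbank.example/",
            "logotype_sha256": "ab12...",            # pinned when added
            "cert_fingerprint_sha256": "cd34...",    # pinned when added
        },
    }

    def resolve(petname):
        # Later requests ("call ethel") resolve through the local catalog,
        # so the client no longer starts from scratch each time.
        return bookmarks.get(petname.strip().lower())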

Given where we have come from, the CA will have to continue to do the trust
management part of the WebPKI indefinitely. And I probably want the CA to
also have the role of warning me when a party I previously trusted has
defaulted in some way.

Re: Logotype extensions

2019-07-10 Thread Wayne Thayer via dev-security-policy
Russ,

On Wed, Jul 10, 2019 at 11:41 AM housley--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Friday, July 5, 2019 at 7:53:45 PM UTC-4, Wayne Thayer wrote:
> > Based on this discussion, I propose adding the following statement to the
> > Mozilla Forbidden Practices wiki page [1]:
> >
> > ** Logotype Extension **
> > Due to the risk of misleading Relying Parties and the lack of defined
> > validation standards for information contained in this field, as
> discussed
> > here [2], CAs MUST NOT include the RFC 3709 Logotype extension in CA or
> > Subscriber certificates.
> >
> > Please respond if you have concerns with this change. As suggested in
> this
> > thread, we can discuss removing this restriction if/when a robust
> > validation process emerges.
> >
> > - Wayne
> >
> > [1] https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices
> > [2]
> >
> https://groups.google.com/d/msg/mozilla.dev.security.policy/nZoK5akw2c8/ZtF0WZY8AgAJ
>
> People find logos very helpful.  That is why many browsers display a tiny
> logo in the toolbar.
>
> I would suggest that a better way forward is to start the hard work on the
> validation process.  It will not be difficult for that to become more
> robust and accessible than the logos in the toolbar.
>
>
Perhaps one of us is confused because I think we're saying the same thing -
that rules around inclusion of Logotype extensions in publicly-trusted
certs should be in place before CAs begin to use this extension.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DarkMatter Concerns

2019-07-10 Thread Matthew Hardeman via dev-security-policy
On Wed, Jul 10, 2019 at 11:43 AM Scott Rea via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Mozilla’s new process, based on its own admission, is to ignore technical
> compliance and instead base its decisions on some yet to be disclosed
> subjective criterion which is applied selectively.  We think everybody in
> the Trust community should be alarmed by the fact that the new criterion
> for inclusion of a commercial CA now ignores any qualification of the CA or
> its ability to demonstrate compliant operations. We fear that in doing so
> Mozilla is abandoning its foundational principles of supporting safe and
> secure digital interactions for everyone on the internet.  This new process
> change seems conveniently timed to derail DigitalTrust’s application.
>
> By Mozilla’s own admission, DigitalTrust is being held to a new standard
> which seems to be associated with circular logic – a media bias based on a
> single claimed event that aims to falsely implicate DarkMatter is then used
> to inform Mozilla’s opinion, and the media seizes on this outcome to
> substantiate the very same bias it aimed to introduce in the first place.
> Additionally, in targeting DigitalTrust and in particularly DarkMatter’s
> founder Faisal Al Bannai, on the pretense that two companies can’t operate
> independently if they have the same owner, we fear another dangerous
> precedent has been set.
>

I broadly concur with these points.

In other significant risk management disciplines and domains in which a
plurality of diverse applicants seek trust, objectivity and strong
data-backed alignment of specific risk factors associated with specific bad
outcomes are prized above practically all else.  An obvious example is
consumer credit lending and particularly large loans like mortgages.

As an analogy, consider that at least in a broad directional sense, the
change in Mozilla's decisioning and underlying reasoning is akin to moving
from a mechanism where one particular FICO score means one particular
outcome regardless of the color of your skin or sexuality, toward a
mechanism in which, despite having matching FICO scores, two applicants and
their applications meet dissimilar fates: one of them is declined not for
falling outside of objective risk management criteria but because they
"seem shady" or "fit the description of someone who did something bad" or
"just aren't a good match for our offering".  In finance, such decisioning
wouldn't survive the most cursory and forgiving review.  That "fact"
pattern wouldn't overcome a claim of racism even if the lender and the
applicant whose loan was declined were of the same race.

Please let me be quite specific in that I am not suggesting that there is
racial or national animus expressed in this decision by Mozilla.  I used
the parallel to racism in finance because it's exceedingly well documented
that strong objective systems of risk management and decisioning led to
better overall financial outcomes AND significantly opened the door to
credit (aka trust) to otherwise improperly maligned and underserved
communities.

To my mind, this decision is a regression from a more formal standard and
better compliance monitoring than has ever been available (CT, etc.) to a
subjective morass with handwringing and feelings and bias.

I cannot see how one reconciles taking pride in their risk management and
compliance acumen while making such a regression.  That kind of dissonance
would eat at my soul.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Logotype extensions

2019-07-10 Thread Wayne Thayer via dev-security-policy
On Wed, Jul 10, 2019 at 2:31 PM Phillip Hallam-Baker 
wrote:

> On Wed, Jul 10, 2019 at 4:54 PM Wayne Thayer via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> Russ,
>>
>> >
>> Perhaps one of us is confused because I think we're saying the same thing
>> -
>> that  rules around inclusion of Logotype extensions in publicly-trusted
>> certs should be in place before CAs begin to use this extension.
>>
>
> I don't see how your proposed ban on logotypes is consistent. What that
> would do is set up a situation in which it was impossible for CABForum to
> develop rules for logotypes because one of the browsers had already banned
> their use.
>
>
How exactly does a Browser banning the use of an extension prevent the CAB
Forum from developing rules to govern the use of said extension? If
anything, it would seem to encourage the CAB Forum to take on that work.
Also, as has been discussed, it is quite reasonable to argue that the
inclusion of this extension is already forbidden in a BR-compliant
certificate.
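
(As a purely illustrative sketch, not part of the proposal: the presence of
the extension is easy to check mechanically, for example with the Python
cryptography library, since RFC 3709 assigns it the id-pe-logotype OID
1.3.6.1.5.5.7.1.12.)

    from cryptography import x509

    LOGOTYPE_OID = x509.ObjectIdentifier("1.3.6.1.5.5.7.1.12")  # id-pe-logotype

    def has_logotype_extension(pem_bytes: bytes) -> bool:
        # Return True if the certificate carries the RFC 3709 Logotype
        # extension, which a lint under the proposed policy could flag.
        cert = x509.load_pem_x509_certificate(pem_bytes)
        try:
            cert.extensions.get_extension_for_oid(LOGOTYPE_OID)
            return True
        except x509.ExtensionNotFound:
            return False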

A better way to state the requirement is that CAs should only issue
> logotypes after CABForum has agreed validation criteria. But I think that
> would be a mistake at this point because we probably want to have
> experience of running the issue process before we actually try to
> standardize it.
>
>
I would be amenable to adding language that permits the use of the Logotype
extension after the CAB Forum has adopted rules governing its use. I don't
see that as a material change to my proposal because, either way, we have
the option to change Mozilla's position based on our assessment of the
rules established by the CAB Forum, as documented in policy section 2.3
"Baseline Requirements Conformance".

I do not believe that changing the "MUST NOT" to "SHOULD NOT" reflects the
consensus reached in this thread.

I also do not believe that publicly-trusted certificates are the safe and
prudent vehicle for "running the issue process before we actually try to
standardize it".

I can't see Web browsing being the first place people are going to use
> logotypes. I think they are going to be most useful in other applications.
> And we actually have rather a lot of those appearing right now. But they
> are Applets consisting of a thin layer on top of a browser and the logotype
> stuff is relevant to the thin layer rather than the substrate.
>
>
If the use case isn't server auth or email protection, then publicly
trusted certificates shouldn't be used. Full stop. How many times do we
need to learn that lesson?


> For example, I have lots of gadgets in my house. Right now, every
> different vendor who does an IoT device has to write their own app and run
> their own service. And the managers are really happy with that at the
> moment because they see it as all upside.
>
> I think they will soon discover that most devices that are being made to
> Internet aren't actually very useful if the only thing they connect to is a
> manufacturer site and those start to cost money to run. So I think we will
> end up with an open interconnect approach to IoT in the end regardless of
> what a bunch of marketing VPs think should happen. Razor and blades models
> are really profitable but they are also vanishingly rare because the number
> 2 and 3 companies have an easy way to enter the market by opening up.
>
> Authenticating those devices to the users who bought them, authenticating
> the code updates. Those are areas where the logotypes can be really useful.
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DarkMatter Concerns

2019-07-10 Thread Matthew Hardeman via dev-security-policy
Even if we stipulated that all those accounts were fully accurate, all
those reports are about a separate business that happens to be owned by the
same owner.

Furthermore, insofar as none of those directly speak to their ability to
own or manage a publicly trusted CA, I would regard those issues as
immaterial.  Perhaps they also indiscriminately kill puppies?  That would
be awful.  Still, I do not see how that would be disqualifying.

On Wed, Jul 10, 2019 at 2:45 AM Nex via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I think that dismissing as baseless investigations from 9 different
> reporters, on 3 different newspapers (add one more, FP, if consider
> this[1]) is misleading. Additionally, it is just false to say all the
> articles only relied on anonymous sources (of which they have many, by
> the way), but there are clearly sources on record as well, such as
> Simone Margaritelli and Jonathan Cole for The Intercept, and Lori Stroud
> for Reuters.
>
> While obviously there is no scientific metric for this, I do think the
> number of sources (anonymous and not) and the variety of reporters and
> of newspapers (with their respective editors and verification processes)
> do qualify the reporting as "credible" and "extensively sourced".
>
> Additionally, details provided by sources on record directly matched
> attacks documented by technical researchers. For example, Lori Stroud
> talking details over the targeting of Donaghy, which was also proven in
> Citizen Lab's "Stealth Falcon" report. Lastly, Reuters reporters make
> repeated mentions of documents they had been able to review supporting
> the claims of their sources. Unless you have good reasons to believe
> reporters are just lying out of their teeth, I don't see how all of this
> can't be considered credible.
>
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DarkMatter Concerns

2019-07-10 Thread Nex via dev-security-policy
I think that dismissing as baseless the investigations from 9 different
reporters, in 3 different newspapers (add one more, FP, if you consider
this[1]), is misleading. Additionally, it is just false to say all the
articles relied only on anonymous sources (of which they have many, by
the way); there are clearly sources on record as well, such as
Simone Margaritelli and Jonathan Cole for The Intercept, and Lori Stroud
for Reuters.

While obviously there is no scientific metric for this, I do think the
number of sources (anonymous and not) and the variety of reporters and
of newspapers (with their respective editors and verification processes)
do qualify the reporting as "credible" and "extensively sourced".

Additionally, details provided by sources on record directly matched
attacks documented by technical researchers. For example, Lori Stroud
gave details about the targeting of Donaghy, which was also proven in
Citizen Lab's "Stealth Falcon" report. Lastly, Reuters reporters make
repeated mentions of documents they had been able to review supporting
the claims of their sources. Unless you have good reasons to believe
reporters are just lying through their teeth, I don't see how all of this
can't be considered credible.

[1]
https://foreignpolicy.com/2017/12/21/deep-pockets-deep-cover-the-uae-is-paying-ex-cia-officers-to-build-a-spy-empire-in-the-gulf/

On 7/9/19 6:09 PM, Nadim Kobeissi via dev-security-policy wrote:
> Dear Wayne,
> 
> I fully respect Mozilla's mission and I fully believe that everyone here is
> acting in good faith.
> 
> That said, I must, in my capacity as a private individual, decry what I
> perceive as a dangerous shortsightedness and lack of intellectual rigor
> underlying your decision. I do this as someone with a keen interest in
> Internet freedom issues and not as someone who is in any way partisan in
> this debate: I don't care for DarkMatter as a company in any way whatsoever
> and have no relationship with anyone there.
> 
> I sense enough urgency in my concerns to pause my work schedule today and
> respond to this email. I will do my best to illustrate why I sense danger
> in your decision. Essentially there are three specific points I take issue
> with:
> 
> -
> 1: Waving aside demands for objective criteria.
> -
> You say that "if we rigidly applied our existing criteria, we would deny
> most inclusion requests." Far from being an excuse to put more weight (or
> in this case, perhaps almost all weight) on subjective decision making,
> this should be a rallying cry for Mozilla to investigate why it is that an
> objective and democratic decision-making process is failing, and what can
> be done to make it work better. Waving aside objective procedures as
> "checklists" dismisses a core procedural element of how such critical
> decisions should be made in the future and is explicitly undemocratic and
> therefore dangerous.
> 
> -
> 2: Calling allegations "credible" and "extensively sourced" with almost no
> basis whatsoever.
> -
> You cite four articles: two are from the Intercept, one is from Reuters and
> one is from the New York Times. You claim that the fact that they are years
> apart bolsters their credibility; why is this the case? In fact, these
> articles all parrot almost exactly the same story, with some minor
> additions, updates and modifications. They all almost read like the same
> article, despite their temporal distribution. Furthermore, the notion that
> the articles are "extensively sourced" is simply incorrect: all of the
> articles are based on anonymous sources and none of them provide a shred of
> evidence, which is why we are in this debate to begin with (or so I've been
> thinking).
> 
> It should also be noted that both The Intercept and the New York Times have
> published misleading and incorrect information many times in their history.
> The Intercept in particular has a very spotty credibility record.
> 
> It also is not difficult to theorize how a politically trendy topic
> (cyberattacks) against the world's most easy-to-villainize company (an
> Arabic offensive cybersecurity company operating within a true monarchic
> state) would be appealing to American journalists. This sort of thing isn't
> new, and American "digital rights" groups have previously linked malicious
> cyberattacks to Middle Eastern countries without providing something that
> is even close to the same standard of evidence that they almost always
> provide when naming American or European actors.
> 
> It is indeed unfortunate that this issue was dealt with in a single
> paragraph: I would have expected it to be the brunt of the email given its
> importance, and it is impossible to qualify that reporting as "credible"
> and "extensively sourced" so summarily.
> 
> -
> 3: Culminating in an argument that simply boils down to "the people's
> safety", a trope that is often overused and that leads to