Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-23 Thread Tom Ritter via dev-security-policy
On Fri, 23 Aug 2019 at 22:53, Daniel Marschall via dev-security-policy
 wrote:
>
> Am Freitag, 23. August 2019 00:50:35 UTC+2 schrieb Ronald Crane:
> > On 8/22/2019 1:43 PM, kirkhalloregon--- via dev-security-policy wrote:
> >
> > Whatever the merits of EV (and perhaps there are some -- I'm not
> > convinced either way) this data is negligible evidence of them. A DV
> > cert is sufficient for phishing, so there's no reason for a phisher to
> > obtain an EV cert, hence very few phishing sites use them, hence EV
> > sites are (at present) mostly not phishing sites.
>
> Can you prove that your assumption "very few phishing sites use EV (only) 
> because DV is sufficient" is correct?

As before, the first email in the thread references the studies performed.

"By dividing these users into three groups, our controlled study
measured both the effect of extended validation certificates that
appear only at legitimate sites and the effect of reading a help file
about security features in Internet Explorer 7. Across all groups, we
found that picture-in-picture attacks showing a fake browser window
were as effective as the best other phishing technique, the homograph
attack. Extended validation did not help users identify either
attack."

https://www.adambarth.com/papers/2007/jackson-simon-tan-barth.pdf

"Our results showed that the identity indicators used in the
unmodified FF3 browser did not influence decision-making for the
participants in our study in terms of user trust in a web site. These
new identity indicators were ineffective because none of the
participants even noticed their existence."

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.543.2117&rep=rep1&type=pdf

DV is sufficient. Why pay for something you don't need?

-tom
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-23 Thread Tom Ritter via dev-security-policy
On Fri, 23 Aug 2019 at 05:00, Leo Grove via dev-security-policy
 wrote:
>
> On Thursday, August 22, 2019 at 5:50:35 PM UTC-5, Ronald Crane wrote:
> > On 8/22/2019 1:43 PM, kirkhalloregon--- via dev-security-policy wrote:
> > > I can tell you that anti-phishing services and browser phishing filters 
> > > have also concluded that EV sites are very unlikely to be phishing 
> > > sites and so are safer for users.
> >
> > Whatever the merits of EV (and perhaps there are some -- I'm not
> > convinced either way) this data is negligible evidence of them. A DV
> > cert is sufficient for phishing, so there's no reason for a phisher to
> > obtain an EV cert, hence very few phishing sites use them, hence EV
> > sites are (at present) mostly not phishing sites.
> >
> > -R
>
> So you agree it's safe to assume with high probability that when I come 
> across a site displaying an EV SSL, it's not a phishing site. I think that is 
> one of the purposes of EV.
>
> Or should we remove the EV bling because phishing sites prefer to use DV?

Correlation does not imply causation.

There are studies that show phishing sites tend not to be EV - yes.
That's a correlation.

If we studied phishing sites and domain name registration fees I'm
sure we'd find a correlation there too - I'd bet the .cfd TLD (which
apparently costs $16K to register) has a low incidence of phishing as
well.

There are also studies that indicate users don't pay attention to the
(positive) security indicators. To phish users, it's unnecessary to
get an EV indicator vs a DV indicator. The simpler explanation for the
correlation is that EV is more expensive (both in direct cost, and in
effort to get misleading documents), so why would you pay for
something you don't need?

-tom


Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-15 Thread Tom Ritter via dev-security-policy
On Thu, Aug 15, 2019, 7:46 AM Doug Beattie via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Peter,
>
> Do you have any empirical data to backup the claims that there is no
> benefit
> from EV certificates?  From the reports I've seen, the percentage of
> phishing and malware sites that use EV is drastically lower than DV (which
> are used to protect the cesspool of websites).
>

I don't doubt that at all. However see the first email in this thread
citing research showing that users don't notice the difference.


Re: Use of Certificate/Public Key Pinning

2019-08-13 Thread Tom Ritter via dev-security-policy
PKP is a footgun. Deploying it without being prepared for the
situations you've described is ill-advised.  There's a few options
available for organizations who want to pin, in increasing order of
sophistication:


Enforce Certificate Transparency. You're not locked into any CA or
key, only that the certificate has been published publicly.

Pin to a CA or a couple of CAs - this reduces the
operational/availability risk while increasing the security risk.
(Although still a reduction from the entire set of CAs of course.)

Pin to leaf *keys*, as you suggest, and ensure that they cannot all be
compromised at once through the use of offline storage and careful key
management. Use the keys to get certificates when needed. As you note,
if you can't manage these keys securely and separately, you need to go
to something less sophisticated, like pinning to CAs.

Pin to a locally managed trust anchor, and operate a root CA oneself,
managing it as one would a public CA (offline root, possibly offline
intermediates, etc)
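A minimal sketch of the leaf-key pinning check (the third option above), assuming the app ships a set of base64-encoded SHA-256 hashes of the DER-encoded SubjectPublicKeyInfo, with at least one backup key kept offline; the key bytes and names here are illustrative, not any real framework's API:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """Base64 SHA-256 of the DER-encoded SubjectPublicKeyInfo -- the
    same value produced by:
      openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | base64
    """
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def chain_matches_pins(chain_spkis, pinned):
    """Accept the connection iff any key in the presented chain
    matches one of the pinned hashes (active key or offline backup)."""
    return any(spki_pin(spki) in pinned for spki in chain_spkis)

# Illustrative: two pinned keys, one deployed and one offline backup.
deployed_key = b"\x30\x82\x01\x22dummy-active-spki"
backup_key = b"\x30\x82\x01\x22dummy-backup-spki"
pins = {spki_pin(deployed_key), spki_pin(backup_key)}

assert chain_matches_pins([deployed_key], pins)         # normal case
assert chain_matches_pins([backup_key], pins)           # after emergency re-key
assert not chain_matches_pins([b"attacker-key"], pins)  # MITM rejected
```

Pinning the key rather than the certificate is what lets routine replacements (like the serial-entropy reissuance described below) happen without an app update; only a genuine re-key forces one.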


-tom

On Tue, 13 Aug 2019 at 15:12, Nuno Ponte via dev-security-policy
 wrote:
>
> Dear m.d.s.p.,
>
> I would like to bring into discussion the use of certificate/public key 
> pinning and the impacts on the 5-days period for certificate revocation 
> according to BR §4.9.1.1.
>
> Recently, we (Multicert) had to rollout a general certificate replacement due 
> to the serial number entropy issue. Some of the most troubled cases to 
> replace the certificates were customers doing certificate pinning on mobile 
> apps. Changing the certificate in these cases required configuration changes 
> in the code base, rebuild app, QA testing, submission to App stores, call for 
> expedited review of each App store, wait for review to be completed and only 
> then the new app version is made available for installation by end users 
> (who in turn are required to update the app as soon as possible).
>
> Meeting the 5-days deadline with this sort of process is “challenging”, at 
> best.
>
> A first approach is to move from certificate pinning to public key pinning 
> (PKP). This prevents the need to update the app in many of the certificate 
> replacement operations, where the public key is reused and the certificate 
> can be replaced transparently to the app (generically, an “User Agent” doing 
> PKP).
>
> However, in the event of a serious security incident that requires re-key 
> (such as key compromise), the certificate must be revoked in less than 24 
> hours (for the benefit of everyone – subscriber, relying parties, issuing CA, 
> etc). It’s virtually impossible to release a new app version within this 
> timeframe. And this, I think, make a very strong point against the use of PKI.
>
> On the other side, PKP is a simple yet powerful and effective technique to 
> protect against MITM and other attacks. It seems to be widely used in apps 
> with advanced threat models (mobile banking, sensitive personal information, 
> etc) and there are many frameworks available (including native support in 
> Android via Network Security Configuration [1]).
>
> There are several possible mitigation actions, such as pinning more than one 
> public key to have more than one certificate to quickly rollover in case of a 
> revocation. Even then, it is very likely that all the redundant key pairs 
> were generated and maintained by the same systems and procedures, and thus 
> all of them will become effectively compromised.
>
> Ultimately, it may become common practice that 1) PKP frameworks are set to 
> bypass revocation checks or 2) PKP is done with private certificates 
> (homemade, self-signed, managed ad-hoc with no CRL/OCSP services). Does any 
> of this lead to a safer Internet?
>
> I don’t expect this thread to end up into an absolute conclusion advocating 
> for or against, but opening it to discussion and contributions may help to 
> document possible strategies, mitigations, alternatives, pros & cons, and 
> hopefully provide guidance for an educated decision.
>
> Best regards,
>
> Nuno Ponte
> Multicert SA
>
> [1] https://developer.android.com/training/articles/security-config
>
>
>
>
>


Re: Mitigating DNS fragmentation attacks

2018-10-15 Thread Tom Ritter via dev-security-policy
On Mon, 15 Oct 2018 at 04:51, Paul Wouters via dev-security-policy
 wrote:
>
> On Oct 14, 2018, at 21:09, jsha--- via dev-security-policy 
>  wrote:
> >
> > There’s a paper from 2013 outlining a fragmentation attack on DNS that 
> > allows an off-path attacker to poison certain DNS results using IP 
> > fragmentation[1]. I’ve been thinking about mitigation techniques and I’m 
> > interested in hearing what this group thinks.
> >
>
> The mitigation is dnssec. Ensure your data is cryptographically protected.

That would be nice, but as that is not available to everyone, a
comprehensive solution is also desirable.

-tom


Re: Possible violation of CAA by nazwa.pl

2018-07-27 Thread Tom Ritter via dev-security-policy
Thanks Jakob, I think you summed things up well.

-tom

On 27 July 2018 at 01:46, Jakob Bohm via dev-security-policy
 wrote:
> On 26/07/2018 23:04, Matthew Hardeman wrote:
>>
>> On Thu, Jul 26, 2018 at 2:23 PM, Tom Delmas via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>>
 The party actually running the authoritative DNS servers is in control
>>>
>>> of the domain.
>>>
>>> I'm not sure I agree. They can control the domain, but they are supposed
>>> to be subordinate of the domain owner. If they did something without the
>>> owner consent/approval, it really looks like a domain hijacking.
>>
>>
>>
>> But the agreement under which they're supposed to be subordinate to the
>> domain owner is a private matter between the domain owner and the party
>> managing the authoritative DNS.  Even if this were domain hijacking, a
>> certificate issued that relied upon a proper domain validation method is
>> still proper issuance, technically.  Once this comes to light, there may
>> be
>> grounds for the proper owner to get the certificate revoked, but the
>> initial issuance was proper as long as the validation was properly
>> performed.
>>
>>
>>>
>>>
 I'm not suggesting that the CA did anything untoward in issuing this
 certificate.  I am not suggesting that at all.
>>>
>>>
>>> My opinion is that if the CA was aware that the owner didn't ask/consent
>>> to that issuance, If it's not a misissuance according to the BRs, it
>>> should
>>> be.
>>
>>
>>
>> Others can weigh in, but I'm fairly certain that it is not misissuance
>> according to the BRs.  Furthermore, with respect to issuance via domain
>> validation, there's an intentional focus on demonstrated control rather
>> than ownership, as ownership is a concept which can't really be securely
>> validated in an automated fashion.  As such, I suspect it's unlikely that
>> the industry or browsers would accept such a change.
>>
>>
>
> I see this as a clear case of the profound confusion caused by the
> community sometimes conflating "formal rule violation" with
> "misissuance".
>
> It would be much more useful to keep these concepts separate but
> overlapping:
>
>  - A BR/MozPolicy/CPS/CP violation is when a certificate didn't follow
> the official rules in some way and must therefore be revoked as a matter
> of compliance.
>
>  - An actual misissuance is when a certificate was issued for a private
> key held by a party other than the party identified in the certificate
> (in Subject Name, SAN etc.), or to a party specifically not authorized
> to hold such a certificate regardless of the identity (typically applies
> to SubCA, CRL-signing, OCSP-signing, timestamping or other certificate
> types where relying party trust doesn't check the actual name in the
> certificate).
>
> From these concepts, revocation requirements could then be reasonably
> classified according to the combinations (in addition to any specifics
> of a situation):
>
>  - Rule violation plus actual misissuance.  This is bad, the 24 hours or
> faster revocation rule should definitely be invoked.
>
>  - Rule compliant misissuance.  This will inevitably happen some times,
> for example if an attacker successfully spoofs all the things checked by
> a CA or exploits a loophole in the compliant procedures.  This is the
> reason why there must be an efficient revocation process for these
> cases.
>
>  - Rule violation, but otherwise correct issuance.  This covers any kind
> of formal violation where the ground truth of the certified matter can
> still be proven.  Ranging from formatting errors (like having "-" in a
> field that should just be omitted, putting the real name with spaces in
> the common name as originally envisioned in X.509, encoding CA:False
> etc.) over potentially dangerous errors (like having a 24 byte serial
> number, which prevents some clients from checking revocation should it
> ever become necessary) to directly dangerous errors (like having an
> unverified DNS-syntax name in CN, or not including enough randomness in
> the serial number of an SHA-1 certificate).
>
>  - Situation-changed no-longer valid issuance.  This is when (as
> recently discussed in a concrete case) a completely valid certificate
> contains information which is no longer true due to later events, such
> as a domain being sold without transfer of certificate private keys or a
> certified entity (in OV/EV certs) ceasing to exist (company dissolved,
> person dead and estate disbursed).
>
>  - Situation unchanged, but subject requests revocation.  Also common.
>
>
> Enjoy
>
> Jakob
> --
> Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded
>

Re: How do you handle mass revocation requests?

2018-02-28 Thread Tom Ritter via dev-security-policy
On 28 February 2018 at 11:37, Jeremy Rowley via dev-security-policy
 wrote:
> What kind of transparency would the Mozilla community like around this
> issue? There aren't many more facts than I shared above, but there is a lot
> of speculation. Let me know what I can share to help alleviate confusion and
> answer questions.

Have you contacted the customers whose certificates you have not
revoked; but which were in the original batch? It seems likely they're
going to wind up revoked too.

Is there any way to identify these certificates through crt.sh or
through a manual cert search? (Some special
Intermediate/CRL/OID/string...?)

Has Trustico said anything about whether or not they will provide more
information in the future?

-tom


Re: Allowing WebExtensions to Override Certificate Trust Decisions

2018-02-28 Thread Tom Ritter via dev-security-policy
On 27 February 2018 at 10:23, Alex Gaynor via dev-security-policy
 wrote:
> A reasonable compromise that jumps out to me is allowing extensions to make
> an otherwise-secure connection fail, but not allow them to rehabilitate an
> insecure connection. This would allow experimenting with stricter controls
> while avoiding some of the really scary risks.

I'm obviously the person who filed that bug and began this discussion,
but I think this compromise is one of those compromises where no one
gets what they want.

Firefox gets a complicated API that gets shimmed into
security-sensitive code and can disrupt TLS handshakes.

Web Extension developers get something that doesn't do the most
valuable thing they would like to do: experiment with new Server
Authentication modes.

Of the examples I gave (Cert Patrol, Perspectives, Convergence, DANE,
DNSSEC-Stapling) - every single one of them would not actually allow
experimenting with Server Authentication modes if all they could do is
reject certificates and not accept them. And in many cases, it will
completely prevent any such experimentation, because you can't ask a
CA to sign a cert saying "No really, I just want you to include this
weird data under this weird not-documented/not-standardized x509
extension".


Unless people show up claiming that that functionality is sufficient
for them to do things they want to do; I don't think it would be
valuable to implement this compromise.

-tom


Re: Investigating validations & issuances - The high value IP space BGP Hijacks on 2017-12-12

2017-12-15 Thread Tom Ritter via dev-security-policy
This is an extremely good point. I wonder:

1. If Mozilla should ask/require CAs to perform this check.
2. If Mozilla should ask/require CAs to invest in the capability to
make this check for future requests (where we would require responses
within a certain time period).

-tom

On 14 December 2017 at 22:16, Matthew Hardeman via dev-security-policy
 wrote:
> Has anyone started looking into CA issuances -- or even more importantly -- 
> CA domain validations performed successfully and yet without issuing a 
> certificate (say, wanting to cache the validation) for the brief periods in 
> which much of the internet saw alternative target destinations for a great 
> deal of high value organization IP space?
>
> For those CAs with workflows which allow for expressly requesting a domain 
> validation but not necessarily requiring that it be immediately utilized 
> (say, for example LetsEncrypt or another CA running ACME protocol or similar) 
> it might be of interest to review the validations performed successfully 
> during those time windows.
>
> Additionally, it may be of value for various CAs to check their issuances 
> upon domain validation for those periods.
>
> You can find the time periods and details about some of the IP space hijacked 
> at bgpmon.net


Re: Private key corresponding to public key in trusted Cisco certificate embedded in executable

2017-06-19 Thread Tom Ritter via dev-security-policy
On 19 June 2017 at 08:28, Samuel Pinder via dev-security-policy
 wrote:
> Therefore the newly re-issued
> certificate *will* end up with its private key compromised *again*,
> no matter how well it may be obfuscated in the application, it is
> still against the very principle.

I'm pretty confused by this as well.

First off, while people have proposed multiple solutions to this
problem, they are not trivially implementable, nor are they
widespread. I think if you shook the tree with some automation, you'd
find on the order of 50 or more publicly trustable private keys
embedded in firmware pretty quickly.

So at what point does the CA become culpable to misissuance in a case
like this? Is it okay that we let them turn a blind eye to issuing or
reissuing certificates where they have a strong reason to believe the
private key will be published in firmware?

Clearly we wouldn't require them to vet every use of every certificate
they issue, but if they revoke a certificate for being used in this
fashion, it seems reasonable for them to ask the customer to at least
give them an explanation of how they've changed things such that a
newly issued certificate for the same domain will not be used in the
exact same way.

Is it reasonable for us to ask a CA to do this (that is, to ask their
customer)? Is it reasonable to require it?

-tom


Re: Mozilla CT Policy

2016-11-05 Thread Tom Ritter
On 4 November 2016 at 07:19, Gervase Markham  wrote:
> * Are there any CT-related services Mozilla should consider running or
> supporting, for the good of the ecosystem?

Part answer, part question, but I don't want to forget it: Besides an
Auditor, perhaps Mozilla should run a DNS log query front-end to
provide diversity from Google's.

-tom


Re: Mozilla CT Policy

2016-11-04 Thread Tom Ritter
On 4 November 2016 at 07:19, Gervase Markham  wrote:
> * How do we decide when to un-trust a log? What reasons are valid
> reasons for doing so?

Do we want different types of distrust for a log? That is, a "We don't
trust you at all anymore" distrust vs a "We don't trust signatures
issued after this date" distrust.


> * Do we want to require a certain number of SCTs for certificates of
> particular validity periods?

Do we want to treat different types of SCTs differently for this
purpose? (precert vs OCSP vs TLS Extension.)

> * Do we want to allow some CAs to opt into CT before those dates?

Do we want to allow some websites to opt into CT before those dates?


Re: Certificate Concern about Cloudflare's DNS

2016-11-02 Thread Tom Ritter
On 2 November 2016 at 11:24, Jeremy Rowley  wrote:
> Revocation support for non-subscribers is sort of implied...sort of:
>
> Section 4.9.3:
> The CA SHALL provide Subscribers, Relying Parties, Application Software 
> Suppliers, and other third parties with
> clear instructions for reporting suspected Private Key Compromise, 
> Certificate misuse, or other types of fraud,
> compromise, misuse, inappropriate conduct, or any other matter related to 
> Certificates. The CA SHALL publicly
> disclose the instructions through a readily accessible online means.
>

This was the text I was imagining being triggered by this scenario.

I certainly accept the fact that a CA has a reasonable reason to doubt
random incoming "Please revoke this certificate" requests, and could
or should require additional verification before taking action. I
would imagine that for DV revocations, such verification would be
pretty much identical to DV verification. The hard part is merely
automating the process for scale like they do for DV issuance. (But if
a CA got enough of these requests it could save some engineering by
reusing that verification infrastructure!)
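The automated verification step suggested above could look roughly like an HTTP-based DV control check; this is a hedged sketch under that assumption, with the fetched body passed in so the control logic stays self-contained (a real CA would fetch the well-known URL over the network, and all names here are illustrative):

```python
import secrets

def make_challenge():
    """Issue a random token the revocation requester must publish at a
    well-known path on the domain to prove control of it."""
    token = secrets.token_urlsafe(32)
    path = f"/.well-known/revocation-challenge/{token}"
    return token, path

def control_demonstrated(token, fetched_body):
    """Control is demonstrated iff the body published at the challenge
    path echoes the token exactly (ignoring surrounding whitespace)."""
    return fetched_body.strip() == token

token, path = make_challenge()
# In production the CA would fetch http://<domain><path>; here we
# simulate an honest requester and a spoofed one.
assert control_demonstrated(token, token + "\n")
assert not control_demonstrated(token, "some-other-token")
```

The point of the thread stands out in the code: this is the same shape as DV issuance validation, so a CA that already automates issuance could reuse the machinery for revocation requests.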

-tom


Re: Certificate Concern about Cloudflare's DNS

2016-11-02 Thread Tom Ritter
On 2 November 2016 at 09:44, Jakob Bohm  wrote:
> The only thing that might be a CA / BR issue would be this:

There's been (some) mention that even if a user moves off Cloudflare,
the CA is not obligated to revoke.  I don't agree with that. If a user
purchased a domain from someone (or bought a recently expired domain)
and a TLS certificate was still valid for it, would the new owner not
be able to get it revoked?  If so, how is this different?

Aside, it would be very interesting to watch domain renewals + contact
info changes (if one can do this at scale) and pair it up with the CT
logs to see how much of an issue this is/could be.

-tom


Re: Remediation Plan for WoSign and StartCom

2016-10-19 Thread Tom Ritter
On 19 October 2016 at 02:58, Kurt Roeckx  wrote:
> On 2016-10-19 01:37, Rob Stradling wrote:
>>
>> On 18/10/16 23:49, Gervase Markham wrote:
>>>
>>> On 18/10/16 15:42, Ryan Hurst wrote:

 I do not understand the desire to require StartCom / WoSign to not
 utilize their own logs as part of the associated quorum policy.
>>>
>>>
>>> My original logic was that it could be seen that the log owner is
>>> trustworthy. However, you are right that CT does not require this.
>>
>>
>> A log operator could offer a split view of their log, and this might go
>> undetected.  That's why we need CT gossip to exist.
>
>
> I at least have some concerns about the current gossip draft and talked a
> little to dkg about this. I should probably bring this up on the trans list.


Please do!  For those not aware, the CT Gossip draft is in 'pre-final
review' in the sense that (we think) we're pretty much done but need
people to finally read it now.  Draft is at:
https://datatracker.ietf.org/doc/draft-ietf-trans-gossip/


Because we're talking about a CA which used their private keys to get
around baseline requirements/prohibitions by backdating, I would not
be comfortable trusting them with operating a log where they could do
the same thing. The addition of the Google log prevents this to some
degree. So I would prefer the requirement either be 'one google and
one non-google/non-self-operated log' or just 'one google log'.

-tom


Mozilla Root Store Elsewhere (Was Re: StartCom & Qihoo Incidents)

2016-10-18 Thread Tom Ritter
On 18 October 2016 at 08:00, Jakob Bohm  wrote:
> On 18/10/2016 14:35, Gervase Markham wrote:
>>
>> On 17/10/16 16:35, Jakob Bohm wrote:
>>>
>>> In the not so distant past, the Mozilla root program was much more
>>> useful due to different behavior:
>>>
>>> 1. Mozilla managed the root program based on an assumption that relying
>>>   parties would use the common standard revocation checking methods
>>>   *only* (regular CRLs as present since Netscape created SSL and OCSP).
>>
>>
>> Now is not the time to re-debate the failings of those methods, but
>> please don't pretend you don't know why this change was made.
>>
>
> I wasn't in this instance, simply noting the following problem: By
> assuming all relying parties run code that implements Mozilla's other
> revocation methods (OneCRL, custom notBefore checks etc.), the root
> list published by Mozilla becomes less useful for relying parties whose
> applications do not.


I'm sympathetic to this - it's unfortunate that we've had to bolt on
certificate validation checks that do not appear in any standardized
form, in any central location, and differ from client to client. It
makes 'the most dangerous code in the world' even harder to get
correct. It seems like more and more of CA/HTTPS related ecosystem
could benefit from an equivalent to caniuse.com

But I'm not sure what Mozilla could do to help downstream users of the
root store? What would make you happier? Projects who blindly import
and trust it will be subject to the default decision Mozilla takes -
sure. (And hopefully they have the prescience to delineate between
default-trust certs and default-don't-trust certs!)

Two things I can see would be:
- Carefully choose the action taken with the root store to be a 'keep
in the root store' choice or a 'remove from the root store' choice.
For example, WoSign would probably be in the 'remove' category and
then FF gets special processing code to accept WoSign under specific
circumstances.
- Have some additional comments or tooling that are designed for
downstream importers. Maybe a script that runs over the official file
converting it to other formats (Java keystore, directory-of-certs, etc)
and prompts people 'This CA is subject to specific filtering in FF, do
you want to include it in the export?'
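The export tooling floated in the second bullet could be as simple as splitting the published PEM bundle and letting the importer veto individual roots; the bundle layout and the veto hook here are assumptions for illustration, not a description of any real Mozilla tooling:

```python
import re

PEM_RE = re.compile(
    r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
    re.DOTALL,
)

def split_bundle(bundle_text):
    """Return each PEM certificate block in the bundle, in order."""
    return PEM_RE.findall(bundle_text)

def export_directory(bundle_text, keep=lambda pem: True):
    """Map filenames to certs, skipping any the importer rejects
    (e.g. roots subject to browser-specific distrust rules)."""
    return {
        f"root-{i:03d}.pem": pem
        for i, pem in enumerate(split_bundle(bundle_text))
        if keep(pem)
    }

# Two dummy certificates standing in for the published bundle.
bundle = (
    "-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n"
    "-----BEGIN CERTIFICATE-----\nBBBB\n-----END CERTIFICATE-----\n"
)
assert len(split_bundle(bundle)) == 2
# Drop the second root, as a stand-in for an importer's veto prompt.
assert list(export_directory(bundle, keep=lambda p: "BBBB" not in p)) == ["root-000.pem"]
```

The `keep` hook is where the 'This CA is subject to specific filtering in FF, do you want to include it?' prompt would live.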

Are there other actionable things Mozilla could do to make things
better-by-default for downstream users?

-tom


Re: Deficiencies in the Web PKI and Mozilla's shepherding thereof, exposed by the WoSign affair

2016-10-05 Thread Tom Ritter
On 4 October 2016 at 06:12, Eric Rescorla  wrote:
> with the exception of the end-entity
> certificate which MUST be first.

After testing, this part seems to be the component that stops my idea.
I could build paths to arbitrary roots with extra chains contained in
the list... but only if the correct leaf was specified first. (Kind of
surprised by that; I'd have imagined that'd be a more common
misconfiguration, but I guess not.)

Tested with Chrome/Firefox/IE/Edge on Windows 10. (Seems Edge doesn't
honor the HSTS hard fail mechanism!)

-tom


Re: Deficiencies in the Web PKI and Mozilla's shepherding thereof, exposed by the WoSign affair

2016-10-03 Thread Tom Ritter
On 3 October 2016 at 19:24, Jakob Bohm  wrote:
> On 03/10/2016 20:41, Kyle Hamilton wrote:
>> 2. There is only One Certificate Path that can be proven in TLS, which
>> prevents risk management by end-entities.
>>
>
> Are you sure, I thought the standard TLS protocol transmitted a *set*
> of certificates in which the client could/should search for a chain
> leading to a client trusted CA.

I've seen interesting bugs result from client (e.g. browser)
processing of the 'bag of certs' approach - but these bugs are
security vulnerabilities and should be handled correctly. So I don't
see any reason why one could not send multiple chains right now, and
have a client correctly process it.  Shouldn't be too hard to actually
test with Firefox or whatever. Just get a couple chains from different
CAs and start distrusting roots locally...

I guess the main thing I'd wonder about is if a client has a root
marked as untrusted, it may build a chain to that root for the
purposes of *not* trusting it. (As opposed to building a chain to a
completely unknown root.)

Not that I think this is a good idea.

-tom


Re: StartEncrypt considered harmful today

2016-06-30 Thread Tom Ritter
On 30 June 2016 at 11:10, Peter Kurrasch  wrote:
> Very interesting. This is exactly the sort of thing I'm concerned about with 
> respect to Let's Encrypt and ACME.
>
> I would think that all CA's should issue some sort of statement regarding the 
> security testing of any similar, Internet-facing API interface they might be 
> using. I would actually like to see a statement regarding any interface, 
> including browser-based, but one step at a time. Let's at least know that all 
> the other interfaces undergo regular security scans--or when a CA might start 
> doing them.
>
> Anyone proposing updates in CABF?

In theory I would support this; in practice it has no teeth. There is
no (real) accreditation for security reviews, and the accreditations
that do exist do not, in practice, ensure that their holders are
skilled. You can say "APIs must have a security review", an
"adversarial security scan", a "vulnerability scan", a "manual
penetration test", or a "red team assessment" - but the definitions of
those terms and the skillsets of the people performing them vary so
widely that the requirement would not guarantee very much in practice.

I believe that the CAs who want to be a leader in this niche already
are, and the CAs who cannot afford to do so (because I assume every CA
wants to take security seriously, but is constrained in practice) will
wind up meeting the requirement in a way that does not significantly
improve their security. (And there are various shades in between.)

But I'm biased, being a security consultant and all.

-tom
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Name-constraining government CAs, or not

2015-06-12 Thread Tom Ritter
Are https://technet.microsoft.com/en-us/library/cc751157.aspx and
http://aka.ms/auditreqs the MSFT components (previously?) under NDA?



Government CAs must restrict server authentication to .gov domains and
may only issue other certificates to the ISO3166 country codes that
the country has sovereign control over (see http://aka.ms/auditreqs
section III for the definition of a “Government CA”).

Government CAs that also operate as commercial, non-profit, or other
publicly-issuing entities must use a different root for all such
certificate issuances (see http://aka.ms/auditreqs section III for the
definition of a “Commercial CA”).



Effective July 1, 2015, Government CAs may choose to either obtain the
above WebTrust or ETSI-based audit(s) required of Commercial CAs, or
to use an Equivalent Audit. If a Government CA chooses to obtain a
WebTrust or ETSI-based audit, Microsoft will treat the Government CA
as a Commercial CA. The Government CA can then operate without
limiting the certificates it issues, provided it issues commercial
(including non-profit) certificates from a different root than its
government certificates and it signs a commercial CA contract with
Microsoft.

... more about audits ...



A “Government CA” is an entity that is established by the sovereign
government of the jurisdiction in which the entity operates, and whose
existence and operations are directly or indirectly subject to the
control of the sovereign government anywhere in the PKI chain.

A “Commercial CA” is an entity that is legally recognized in the
jurisdiction(s) in which the entity operates (e.g., corporation or
other legal person), that operates on a for-profit basis, and that
issues digital certificates to other CAs or to the general public.

“Certification Authority” or “CA” means an entity that issues digital
certificates in accordance with Local Laws and Regulations.

“Local Laws and Regulations” means the laws and regulations applicable
to a CA under which the CA is authorized to issue digital
certificates, which set forth the applicable policies, rules, and
standards for issuing, maintaining, or revoking certificates,
including audit frequency and procedure.
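The ".gov-only" restriction in the policy text above is the kind of
rule that RFC 5280 name constraints can express inside a certificate
itself. Here is a simplified, hypothetical sketch of dNSName
permitted-subtree matching - real validators also handle excluded
subtrees, IP/email constraints, and many edge cases this ignores:

```python
# Simplified sketch of RFC 5280 (s4.2.1.10) dNSName name-constraint
# checking: a name satisfies a permitted subtree like "gov" if it equals
# it or is formed by adding labels on the left (irs.gov, www.irs.gov).
from typing import List


def dns_name_permitted(name: str, permitted_subtrees: List[str]) -> bool:
    labels = name.lower().split(".")
    for constraint in permitted_subtrees:
        c_labels = constraint.lower().split(".")
        if len(labels) >= len(c_labels) and labels[-len(c_labels):] == c_labels:
            return True
    return False


print(dns_name_permitted("www.irs.gov", ["gov"]))     # True
print(dns_name_permitted("gov.example.com", ["gov"])) # False
```

Note the second case: matching must be on whole label suffixes, not raw
string suffixes, or "gov.example.com" style names would slip through.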



-tom
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Consequences of mis-issuance under CNNIC

2015-04-02 Thread Tom Ritter
On 2 April 2015 at 03:49,  c.le...@gmail.com wrote:
> It would be a golden opportunity for Chinese gov to push for a home-grown
> browser that is not under the control of western imperialism governments for
> sure.

You mean 360 Browser? Hard to get good statistics it seems, but there
are reports of it being pretty darn popular:
http://www.chinainternetwatch.com/8757/top-web-browsers-china/
(It also does not validate certificates:
https://cabforum.org/pipermail/public/2015-April/005441.html ,
although that is a discussion for another list)


I guess I missed the cutoff for the decision, but I am supportive of
removing CNNIC entirely and whitelisting existing certificates. As
others have said, I am nervous that the plan of simply enforcing a
cutoff date and asking the community to detect misissuance will not be
a sufficient detection mechanism. Unless I'm mistaken, despite all the
efforts at detecting misissuance (Perspectives, the Decentralized
Observatory, HPKP reporting, etc.), all recent misissued certificates
were found via Chrome's PKP. The community does not have a good track
record on this.
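For reference, the pinning check that has been catching these
misissuances is simple to sketch: hash each certificate's
SubjectPublicKeyInfo and require an intersection with a stored pinset,
per RFC 7469. The DER bytes below are placeholders, not real keys:

```python
# Minimal sketch of HPKP-style public key pinning: a connection is
# allowed only if some SPKI hash in the presented chain is in the
# client's pinset for that host.
import base64
import hashlib
from typing import Iterable, Set


def spki_pin(spki_der: bytes) -> str:
    """pin-sha256 value: base64(SHA-256(SubjectPublicKeyInfo DER))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()


def chain_matches_pins(chain_spkis: Iterable[bytes], pins: Set[str]) -> bool:
    return any(spki_pin(spki) in pins for spki in chain_spkis)


legit_spki = b"placeholder-DER-bytes-for-the-real-key"
pins = {spki_pin(legit_spki)}
print(chain_matches_pins([legit_spki], pins))             # True
print(chain_matches_pins([b"misissued-cert-key"], pins))  # False
```

A misissued certificate fails this check even though it chains to a
trusted root - which is why PKP, unlike the passive observation
efforts, has actually surfaced the recent incidents.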

-tom
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy