Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-23 Thread Tom Ritter via dev-security-policy
On Fri, 23 Aug 2019 at 22:53, Daniel Marschall via dev-security-policy
 wrote:
>
> Am Freitag, 23. August 2019 00:50:35 UTC+2 schrieb Ronald Crane:
> > On 8/22/2019 1:43 PM, kirkhalloregon--- via dev-security-policy wrote:
> >
> > Whatever the merits of EV (and perhaps there are some -- I'm not
> > convinced either way) this data is negligible evidence of them. A DV
> > cert is sufficient for phishing, so there's no reason for a phisher to
> > obtain an EV cert, hence very few phishing sites use them, hence EV
> > sites are (at present) mostly not phishing sites.
>
> Can you prove that your assumption "very few phishing sites use EV (only) 
> because DV is sufficient" is correct?

As before, the first email in the thread references the studies performed.

"By dividing these users into three groups, our controlled study
measured both the effect of extended validation certificates that
appear only at legitimate sites and the effect of reading a help file
about security features in Internet Explorer 7. Across all groups, we
found that picture-in-picture attacks showing a fake browser window
were as effective as the best other phishing technique, the homograph
attack. Extended validation did not help users identify either
attack."

https://www.adambarth.com/papers/2007/jackson-simon-tan-barth.pdf

"Our results showed that the identity indicators used in the
unmodified FF3 browser did not influence decision-making for the
participants in our study in terms of user trust in a web site. These
new identity indicators were ineffective because none of the
participants even noticed their existence."

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.543.2117&rep=rep1&type=pdf

DV is sufficient. Why pay for something you don't need?

-tom
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-23 Thread Tom Ritter via dev-security-policy
On Fri, 23 Aug 2019 at 05:00, Leo Grove via dev-security-policy
 wrote:
>
> On Thursday, August 22, 2019 at 5:50:35 PM UTC-5, Ronald Crane wrote:
> > On 8/22/2019 1:43 PM, kirkhalloregon--- via dev-security-policy wrote:
> > > I can tell you that anti-phishing services and browser phishing filters 
> > > have also concluded that EV sites are very unlikely to be phishing 
> > > sites and so are safer for users.
> >
> > Whatever the merits of EV (and perhaps there are some -- I'm not
> > convinced either way) this data is negligible evidence of them. A DV
> > cert is sufficient for phishing, so there's no reason for a phisher to
> > obtain an EV cert, hence very few phishing sites use them, hence EV
> > sites are (at present) mostly not phishing sites.
> >
> > -R
>
> So you agree it's safe to assume with high probability that when I come 
> across a site displaying an EV SSL, it's not a phishing site. I think that is 
> one of the purposes of EV.
>
> Or should we remove the EV bling because phishing sites prefer to use DV?

Correlation does not imply causation.

There are studies that show phishing sites tend not to be EV - yes.
That's a correlation.

If we studied phishing sites and domain name registration fees I'm
sure we'd find a correlation there too - I'd bet the .cfd TLD (which
apparently costs $16K to register) has a low incidence of phishing as
well.

There are also studies that indicate users don't pay attention to the
(positive) security indicators. To phish users, it's unnecessary to
get an EV indicator vs a DV indicator. The simpler explanation for the
correlation is that EV is more expensive (both in direct cost, and in
effort to get misleading documents), so why would you pay for
something you don't need?

-tom


Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-15 Thread Tom Ritter via dev-security-policy
On Thu, Aug 15, 2019, 7:46 AM Doug Beattie via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Peter,
>
> Do you have any empirical data to backup the claims that there is no
> benefit
> from EV certificates?  From the reports I've seen, the percentage of
> phishing and malware sites that use EV is drastically lower than DV (which
> are used to protect the cesspool of websites).
>

I don't doubt that at all. However, see the first email in this thread
citing research showing that users don't notice the difference.


Re: Use of Certificate/Public Key Pinning

2019-08-13 Thread Tom Ritter via dev-security-policy
PKP is a footgun. Deploying it without being prepared for the
situations you've described is ill-advised. There are a few options
available for organizations who want to pin, in increasing order of
sophistication:


Enforce Certificate Transparency. You're not locked into any CA or
key, only that the certificate has been published publicly.

Pin to a CA or a couple of CAs - this reduces the
operational/availability risk while increasing the security risk.
(Although still a reduction from the entire set of CAs of course.)

Pin to leaf *keys*, as you suggest, and ensure that they cannot all be
compromised at once through the use of offline storage and careful key
management. Use the keys to get certificates when needed. As you note,
if you can't manage these keys securely and separately, you need to go
to something less sophisticated, like pinning to CAs.

Pin to a locally managed trust anchor, and operate a root CA oneself,
managing it as one would a public CA (offline root, possibly offline
intermediates, etc.).
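The leaf-pinning option above (with backup pins, so a single revocation doesn't strand the app) can be sketched roughly as follows. This is a minimal illustration, not production code: the fingerprints and host are hypothetical, and for simplicity it hashes the full certificate rather than the SubjectPublicKeyInfo — real key pinning usually hashes the SPKI, which survives certificate renewal under the same key.

```python
import hashlib
import socket
import ssl

# Hypothetical pin set: fingerprints of the current leaf certificate plus
# one or more backup certificates whose keys are kept offline, so that a
# revocation/re-key does not require an immediate emergency app release.
PINNED_FINGERPRINTS = {
    "d4735e3a265e16eee03f59718b9b5d03019c07d8b6c51f90da3a666eec13ab35",
}

def check_pin(cert_der: bytes, pins: set) -> bool:
    """Return True iff the certificate's SHA-256 fingerprint is pinned."""
    return hashlib.sha256(cert_der).hexdigest() in pins

def connect_pinned(host: str, port: int = 443) -> None:
    """Open a TLS connection that fails closed on a pin mismatch."""
    ctx = ssl.create_default_context()  # normal chain validation still applies
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            if not check_pin(der, PINNED_FINGERPRINTS):
                raise ssl.SSLCertVerificationError("certificate pin mismatch")
```

Note that pinning here is layered on top of, not instead of, ordinary chain validation — which also means revocation checking is not bypassed.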


-tom

On Tue, 13 Aug 2019 at 15:12, Nuno Ponte via dev-security-policy
 wrote:
>
> Dear m.d.s.p.,
>
> I would like to bring into discussion the use of certificate/public key 
> pinning and the impacts on the 5-days period for certificate revocation 
> according to BR §4.9.1.1.
>
> Recently, we (Multicert) had to rollout a general certificate replacement due 
> to the serial number entropy issue. Some of the most troubled cases to 
> replace the certificates were customers doing certificate pinning on mobile 
> apps. Changing the certificate in these cases required configuration changes 
> in the code base, rebuild app, QA testing, submission to App stores, call for 
> expedited review of each App store, wait for review to be completed and only 
> then the new app version is made available for installation by end users 
> (who in turn are required to update the app as soon as possible).
>
> Meeting the 5-days deadline with this sort of process is “challenging”, at 
> best.
>
> A first approach is to move from certificate pinning to public key pinning 
> (PKP). This prevents the need to update the app in many of the certificate 
> replacement operations, where the public key is reused and the certificate 
> can be replaced transparently to the app (generically, an “User Agent” doing 
> PKP).
>
> However, in the event of a serious security incident that requires re-key 
> (such as key compromise), the certificate must be revoked in less than 24 
> hours (for the benefit of everyone – subscriber, relying parties, issuing CA, 
> etc). It’s virtually impossible to release a new app version within this 
> timeframe. And this, I think, makes a very strong point against the use of PKP.
>
> On the other side, PKP is a simple yet powerful and effective technique to 
> protect against MITM and other attacks. It seems to be widely used in apps 
> with advanced threat models (mobile banking, sensitive personal information, 
> etc) and there are many frameworks available (including native support in 
> Android via Network Security Configuration [1]).
>
> There are several possible mitigation actions, such as pinning more than one 
> public key to have more than one certificate to quickly rollover in case of a 
> revocation. Even then, it is very likely that all the redundant key pairs 
> were generated and maintained by the same systems and procedures, and thus 
> all of them will become effectively compromised.
>
> Ultimately, it may become common practice that 1) PKP frameworks are set to 
> bypass revocation checks or 2) PKP is done with private certificates 
> (homemade, self-signed, managed ad-hoc with no CRL/OCSP services). Does 
> either of these lead to a safer Internet?
>
> I don’t expect this thread to end up into an absolute conclusion advocating 
> for or against, but opening it to discussion and contributions may help to 
> document possible strategies, mitigations, alternatives, pros & cons, and 
> hopefully provide guidance for an educated decision.
>
> Best regards,
>
> Nuno Ponte
> Multicert SA
>
> [1] https://developer.android.com/training/articles/security-config
>
>
>
>
>


Re: Mitigating DNS fragmentation attacks

2018-10-15 Thread Tom Ritter via dev-security-policy
On Mon, 15 Oct 2018 at 04:51, Paul Wouters via dev-security-policy
 wrote:
>
> On Oct 14, 2018, at 21:09, jsha--- via dev-security-policy 
>  wrote:
> >
> > There’s a paper from 2013 outlining a fragmentation attack on DNS that 
> > allows an off-path attacker to poison certain DNS results using IP 
> > fragmentation[1]. I’ve been thinking about mitigation techniques and I’m 
> > interested in hearing what this group thinks.
> >
>
> The mitigation is dnssec. Ensure your data is cryptographically protected.

That would be nice, but as that is not available to everyone, a
comprehensive solution is also desirable.

-tom


Re: Possible violation of CAA by nazwa.pl

2018-07-27 Thread Tom Ritter via dev-security-policy
Thanks Jakob, I think you summed things up well.

-tom

On 27 July 2018 at 01:46, Jakob Bohm via dev-security-policy
 wrote:
> On 26/07/2018 23:04, Matthew Hardeman wrote:
>>
>> On Thu, Jul 26, 2018 at 2:23 PM, Tom Delmas via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>>
 The party actually running the authoritative DNS servers is in control
>>>
>>> of the domain.
>>>
>>> I'm not sure I agree. They can control the domain, but they are supposed
>>> to be subordinate of the domain owner. If they did something without the
>>> owner consent/approval, it really looks like a domain hijacking.
>>
>>
>>
>> But the agreement under which they're supposed to be subordinate to the
>> domain owner is a private matter between the domain owner and the party
>> managing the authoritative DNS.  Even if this were domain hijacking, a
>> certificate issued that relied upon a proper domain validation method is
>> still proper issuance, technically.  Once this comes to light, there may
>> be
>> grounds for the proper owner to get the certificate revoked, but the
>> initial issuance was proper as long as the validation was properly
>> performed.
>>
>>
>>>
>>>
 I'm not suggesting that the CA did anything untoward in issuing this
 certificate.  I am not suggesting that at all.
>>>
>>>
>>> My opinion is that if the CA was aware that the owner didn't ask/consent
>>> to that issuance, If it's not a misissuance according to the BRs, it
>>> should
>>> be.
>>
>>
>>
>> Others can weigh in, but I'm fairly certain that it is not misissuance
>> according to the BRs.  Furthermore, with respect to issuance via domain
>> validation, there's an intentional focus on demonstrated control rather
>> than ownership, as ownership is a concept which can't really be securely
>> validated in an automated fashion.  As such, I suspect it's unlikely that
>> the industry or browsers would accept such a change.
>>
>>
>
> I see this as a clear case of the profound confusion caused by the
> community sometimes conflating "formal rule violation" with
> "misissuance".
>
> It would be much more useful to keep these concepts separate but
> overlapping:
>
>  - A BR/MozPolicy/CPS/CP violation is when a certificate didn't follow
> the official rules in some way and must therefore be revoked as a matter
> of compliance.
>
>  - An actual misissuance is when a certificate was issued for a private
> key held by a party other than the party identified in the certificate
> (in Subject Name, SAN etc.), or to a party specifically not authorized
> to hold such a certificate regardless of the identity (typically applies
> to SubCA, CRL-signing, OCSP-signing, timestamping or other certificate
> types where relying party trust doesn't check the actual name in the
> certificate).
>
> From these concepts, revocation requirements could then be reasonably
> classified according to the combinations (in addition to any specifics
> of a situation):
>
>  - Rule violation plus actual misissuance.  This is bad, the 24 hours or
> faster revocation rule should definitely be invoked.
>
>  - Rule compliant misissuance.  This will inevitably happen some times,
> for example if an attacker successfully spoofs all the things checked by
> a CA or exploits a loophole in the compliant procedures.  This is the
> reason why there must be an efficient revocation process for these
> cases.
>
>  - Rule violation, but otherwise correct issuance.  This covers any kind
> of formal violation where the ground truth of the certified matter can
> still be proven.  Ranging from formatting errors (like having "-" in a
> field that should just be omitted, putting the real name with spaces in
> the common name as originally envisioned in X.509, encoding CA:False
> etc.) over potentially dangerous errors (like having a 24 byte serial
> number, which prevents some clients from checking revocation should it
> ever become necessary) to directly dangerous errors (like having an
> unverified DNS-syntax name in CN, or not including enough randomness in
> the serial number of an SHA-1 certificate).
>
>  - Situation-changed no-longer valid issuance.  This is when (as
> recently discussed in a concrete case) a completely valid certificate
> contains information which is no longer true due to later events, such
> as a domain being sold without transfer of certificate private keys or a
> certified entity (in OV/EV certs) ceasing to exist (company dissolved,
> person dead and estate disbursed).
>
>  - Situation unchanged, but subject requests revocation.  Also common.
>
>
> Enjoy
>
> Jakob
> --
> Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded
>

Re: How do you handle mass revocation requests?

2018-02-28 Thread Tom Ritter via dev-security-policy
On 28 February 2018 at 11:37, Jeremy Rowley via dev-security-policy
 wrote:
> What kind of transparency would the Mozilla community like around this
> issue? There aren't many more facts than I shared above, but there is a lot
> of speculation. Let me know what I can share to help alleviate confusion and
> answer questions.

Have you contacted the customers whose certificates you have not
revoked, but which were in the original batch? It seems likely they're
going to wind up revoked too.

Is there any way to identify these certificates through crt.sh or
through a manual cert search? (Some special
Intermediate/CRL/OID/string...?)

Has Trustico said anything about whether or not they will provide more
information in the future?

-tom


Re: Allowing WebExtensions to Override Certificate Trust Decisions

2018-02-28 Thread Tom Ritter via dev-security-policy
On 27 February 2018 at 10:23, Alex Gaynor via dev-security-policy
 wrote:
> A reasonable compromise that jumps out to me is allowing extensions to make
> an otherwise-secure connection fail, but not allow them to rehabilitate an
> insecure connection. This would allow experimenting with stricter controls
> while avoiding some of the really scary risks.

I'm obviously the person who filed that bug and began this discussion,
but I think this compromise is one of those compromises where no one
gets what they want.

Firefox gets a complicated API that gets shimmed into
security-sensitive code and can disrupt TLS handshakes.

Web Extension developers get something that doesn't do the most
valuable thing they would like to do: experiment with new Server
Authentication modes.

Of the examples I gave (Cert Patrol, Perspectives, Convergence, DANE,
DNSSEC-Stapling), not a single one could actually experiment with
Server Authentication modes if all it could do is reject certificates
and not accept them. And in many cases, it would completely prevent
any such experimentation, because you can't ask a CA to sign a cert
saying "No really, I just want you to include this weird data under
this weird not-documented/not-standardized x509 extension".


Unless people show up claiming that that functionality is sufficient
for them to do things they want to do, I don't think it would be
valuable to implement this compromise.

-tom


Re: Investigating validations & issuances - The high value IP space BGP Hijacks on 2017-12-12

2017-12-15 Thread Tom Ritter via dev-security-policy
This is an extremely good point. I wonder:

1. If Mozilla should ask/require CAs to perform this check.
2. If Mozilla should ask/require CAs to invest in the capability to
make this check for future requests (where we would require responses
within a certain time period).

-tom

On 14 December 2017 at 22:16, Matthew Hardeman via dev-security-policy
 wrote:
> Has anyone started looking into CA issuances -- or even more importantly -- 
> CA domain validations performed successfully and yet without issuing a 
> certificate (say, wanting to cache the validation) for the brief periods in 
> which much of the internet saw alternative target destinations for a great 
> deal of high value organization IP space?
>
> For those CAs with workflows which allow for expressly requesting a domain 
> validation but not necessarily requiring that it be immediately utilized 
> (say, for example LetsEncrypt or another CA running ACME protocol or similar) 
> it might be of interest to review the validations performed successfully 
> during those time windows.
>
> Additionally, it may be of value for various CAs to check their issuances 
> upon domain validation for those periods.
>
> You can find the time periods and details about some of the IP space hijacked 
> at bgpmon.net


Re: Private key corresponding to public key in trusted Cisco certificate embedded in executable

2017-06-19 Thread Tom Ritter via dev-security-policy
On 19 June 2017 at 08:28, Samuel Pinder via dev-security-policy
 wrote:
> Therefore the newly re-issued
> certificate *will* end up with its private key compromised *again*,
> no matter how well it may be obfuscated in the application; it is
> still against the very principle.

I'm pretty confused by this as well.

First off, while people have proposed multiple solutions to this
problem, they are not trivially implementable, nor are they
widespread. I think if you shook the tree with some automation, you'd
find on the order of 50 or more publicly trustable private keys
embedded in firmware pretty quickly.

So at what point does the CA become culpable for misissuance in a case
like this? Is it okay that we let them turn a blind eye to issuing or
reissuing certificates where they have a strong reason to believe the
private key will be published in firmware?

Clearly we wouldn't require them to vet every use of every certificate
they issue, but if they revoke a certificate for being used in this
fashion, it seems reasonable for them to ask the customer to at least
give them an explanation of how they've changed things such that a
newly issued certificate for the same domain will not be used in the
exact same way.

Is it reasonable for us to ask a CA to do this (that is, to ask their
customer)? Is it reasonable to require it?

-tom