Re: Firefox security too strict (HSTS?)?

2015-09-14 Thread Chris Palmer
On Sun, Sep 13, 2015 at 2:56 PM, AnilG  wrote:

> Thanks Chris, I'll follow up with IT on this question.
>

You can check yourself if the chain you see chains up to the right root. In
Chrome, click on the lock icon in the location bar, click the Connection
Tab, and then click "Certificate information". This opens the Certificate
Viewer. There, click the Details Tab and inspect the Certificate Hierarchy
and each certificate's Certificate Fields. The root certificate should
match the certificate your IT department gave you.
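The same comparison can be done outside the browser by fingerprinting the root certificate file your IT department gave you and matching it against the fingerprint shown in the certificate viewer. A minimal sketch (the PEM body below is a stand-in, not a real certificate):

```python
import base64
import hashlib

def pem_fingerprint(pem_text: str) -> str:
    """SHA-256 fingerprint of a PEM certificate's DER bytes,
    formatted the way most certificate viewers display it."""
    body = pem_text.split("-----BEGIN CERTIFICATE-----")[1]
    body = body.split("-----END CERTIFICATE-----")[0]
    der = base64.b64decode("".join(body.split()))
    digest = hashlib.sha256(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Stand-in certificate body (not a real cert), just to show the output shape.
fake_pem = (
    "-----BEGIN CERTIFICATE-----\n"
    + base64.b64encode(b"example root certificate bytes").decode()
    + "\n-----END CERTIFICATE-----\n"
)
print(pem_fingerprint(fake_pem))
```

If the fingerprint of the file matches the root shown at the top of the hierarchy, the chain terminates in the certificate IT installed.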

> Sounds like something basic but perhaps not so obvious if the IT preferred
> (and test) browser (Chrome) is more permissive? But surely this is so basic
> that (even) Chrome can't pretend a site is secured if there's no link to
> the root certificate?
>

Chrome is not known for being permissive about certificate checking. :) And
no, it's (I hope) very unlikely that Chrome is calling a certificate OK
even without being able to chain to a root in your machine's root
certificate store. You can verify that by following the steps above.

Also, what does Safari do?

> I'm also following this up on evangelism@moz. I've got the impression that
> there's global dissatisfaction with FF being "too strict" and it *seems*
> like it's harder to get FF to "work" for IT? Or perhaps they just know
> Chrome and not FF?
>

I also would not blame Firefox for being "too strict" here. Firefox's
certificate validation policies are in line with industry norms. You
shouldn't want any browser to blindly allow you to visit sites that should
be secure but can't be validated as such due to a problem with the
certificate chain.

Keep in mind, your deployment scenario (enterprise MITM — presumably
predicated on 'anti-virus' or 'data loss prevention') is identical to an
actual attack, except that the IT department owns the computer and
therefore it is OK for them to install this new root certificate. But no
browser can 'know' that, except by seeing and using the certificate. So the
good browser fails closed.


> For me I'm currently working in Chrome because I *can't* work in FF. It's
> been days now so this probably means I'm the last guy in my organisation
> still hanging on to FF. I'm worried that this may be a global issue cutting
> FF out of commercial (firewalled) use.
>

That is unlikely. Firefox is fine for these uses, and I'm sure it will turn
out to be a glitch in the deployment or configuration.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-08 Thread Chris Palmer
On Fri, Jun 5, 2015 at 8:04 AM, Peter Kurrasch fhw...@gmail.com wrote:

>> Certificate Transparency gets us what we want, I think. CT works
>> globally, and is safer, and significantly changes the trust equation:
>>
>> * Reduces to marginal/effectively destroys the attack value of mis-issuance
>
> Please clarify this statement because, as written, this is plainly not true.
> The only way to reduce the value is if someone detects the mis-issuance and
> then takes action to resolve it.

Yes, I am assuming that — it's the foundational and necessary
assumption of any audit system.

The Googles, Facebooks, PayPals, ... of the world care very much about
mis-issuance for their domains. Activists and security experts and
bloggers and reporters are always looking for fun stuff, and are
generally capable of writing shell scripts.

> From what I've seen so far, both are major gaps in CT as a security feature.

What have you seen so far that leads you to believe that? Are there
mis-issuances in the existing CT logs that nobody has called attention
to...?


Re: Re: Organization info in certs not being properly recognized by Firefox

2014-10-30 Thread Chris Palmer
On Wed, Oct 29, 2014 at 2:02 PM, Dean Coclin dean.j.coc...@verizon.net wrote:

> But many people do in fact look at the security indicators. If that
> statement were true, why do fraudsters bother to get SSL certs (mostly DV)
> for their phishing websites? It's because they know that people are trained
> to look for the lock and https. Granted not all the people know this but a
> percentage of the population does and it dictates the behavior of
> cybercriminals.

Some people do look at the security indicators some of the time. Since
it's easy and affordable to get a certificate — as it should be! Thank
you! And help me convince all the web developers out there who believe
otherwise :) — phishers and fraudsters might as well pay the small
price if they can soothe the concerns of some potential victims some
of the time.

Related:

https://www.ccsl.carleton.ca/people/theses/Sobey_Master_Thesis_08.pdf


5.4 Time Spent Gazing at Browser Chrome

One of the more interesting findings in the eye tracking data was how long
users spent gazing at the content of the web pages as opposed to gazing at
the browser chrome. For each participant, we compared the amount of time the
participant's gaze data contained co-ordinates within the browser chrome
during the study tasks with the amount of time the participant's gaze data
contained co-ordinates in the page content. On average, the 11 participants
who were classified as gazers spent about 9.5% of time gazing at any part of
the browser chrome. The remaining 17 participants who did not gaze at
indicators spent only 4.3% of their time focusing on browser chrome as
opposed to content (some spent as little as 1%).


Tangentially related:

http://eprints.qut.edu.au/55714/1/Main-ACM.pdf


Re: Organization info in certs not being properly recognized by Firefox

2014-10-27 Thread Chris Palmer
On Mon, Oct 27, 2014 at 10:58 AM, John Nagle na...@sitetruth.com wrote:

> It's appropriate for browsers to show that new information with
> users.  In the browser, there are two issues: 1) detecting OV
> certs, which requires a list of per-CA OIDs, and 2) displaying
> something in the GUI.

If users perceive the new information — and that's a big if — what do
you expect that they will do with it?

While formulating your response, please keep these facts in mind:

* Users understand their task well enough to complete it, but are also
distracted (including by security indicators and their numerous false
positives), and busy. 0 people in the world understand 100% of the ins
and outs of X.509 and TLS; normal people have no chance and should not
have to. X.509-style PKI is an engineering failure in part because of
its absurd complexity.

* Users are, quite reasonably, focused on the viewport. After all,
that's where the content is and where the task is. Many people simply
never see the Location Bar or its security indicators.

* The only security boundary on the web is the origin (the
(scheme, host, port) tuple).

* URLs are incredibly hard to parse, both for engineers (search the
web for hundreds of attempts to parse URLs with regular expressions!)
and for normal people.

* The only part of the origin that users understand is the hostname;
it's better if the hostname is just effective TLD + 1 label below
(e.g. example + co.sg, or example + com). Long hostnames look phishy.

* Users who look away from the viewport have a chance to understand 1
bit of security status information: "Secure" or "Not secure".
Currently, the guaranteed-not-safe schemes like http, ws, and ftp are
the only ones guaranteed to never incur any warning or bad indicator,
leading people to reasonably conclude that they are safe. Fixing that
is the/a #1 priority for me; it ranks far higher than
ever-more-fine-grained noise about organization names, hostnames,
OV/DV/EV, and so on.

* You can try to build a square by cramming a bunch of different
Zooko's Triangles together, but it's probably going to be a major
bummer. After all, that's the status quo; why would more triangles
help?

* We have to design products that work for most people in the world
most of the time, and which are not egregiously unsafe or egregiously
hard to understand. It's good to satisfy small populations of power
users if we can, but not at the expense of normal every day use.

* There are some threat models for which no defense can be computed.
For example, attempts to get to the true business entity, and to
ensure that they are not a proxy for some service behind, start to
look a lot like remote attestation. RA is not really possible even on
closed networks; on the internet it's really not happening.
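To make the origin-as-boundary point above concrete: a same-origin check reduces to comparing the (scheme, host, port) tuples of two URLs, nothing more. A hedged sketch using only Python's standard library (the URLs are illustrative):

```python
from urllib.parse import urlsplit

# Default ports per scheme, used when the URL omits an explicit port.
DEFAULT_PORTS = {"http": 80, "https": 443, "ws": 80, "wss": 443, "ftp": 21}

def origin(url: str) -> tuple:
    """Reduce a URL to its (scheme, host, port) origin tuple."""
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a: str, b: str) -> bool:
    # Path, query, and fragment play no role in the security boundary.
    return origin(a) == origin(b)

print(same_origin("https://example.com/a", "https://example.com:443/b"))  # True
print(same_origin("https://example.com/", "http://example.com/"))         # False: scheme differs
print(same_origin("https://example.com/", "https://www.example.com/"))    # False: host differs
```

Note that an explicit `:443` on an https URL is the same origin as the default, while a one-label difference in the hostname is a different origin entirely, which is part of why users fixating on the hostname alone is understandable.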


Re: Indicators for high-security features

2014-09-23 Thread Chris Palmer
On Tue, Sep 23, 2014 at 11:08 AM,  fhw...@gmail.com wrote:

> So what is the reason to use HSTS over a server initiated redirect? Seems
> to me the latter would provide greater security whereas the former is easy
> to bypass.

You have it backwards.

http://www.thoughtcrime.org/software/sslstrip/
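The asymmetry is that a server-initiated redirect only happens after the browser has already made a plaintext request, which an on-path attacker running sslstrip can intercept and rewrite. HSTS moves the upgrade into the client, before any request leaves the machine. A rough sketch of that client-side behavior (hostnames are illustrative, not real sites):

```python
from urllib.parse import urlsplit, urlunsplit

# Hosts the browser has previously seen a Strict-Transport-Security header
# from (or that ship on a preload list).
hsts_hosts = {"example.com"}

def navigate(url: str) -> str:
    """Return the URL the browser would actually request first.

    With HSTS, http:// URLs for known hosts are rewritten to https://
    locally, so no plaintext request ever reaches the network for an
    attacker to strip."""
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in hsts_hosts:
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(navigate("http://example.com/login"))   # upgraded before any network I/O
print(navigate("http://unknown.test/login"))  # plaintext first; strippable
```

A redirect-only site makes exactly one strippable plaintext request per navigation; an HSTS site makes none after the first visit.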


Re: Indicators for high-security features

2014-09-22 Thread Chris Palmer
On Sat, Sep 20, 2014 at 1:10 AM, Anne van Kesteren ann...@annevk.nl wrote:

>> My point is that UI indicators should reflect the reality of actual
>> technical security boundaries. Unless we actually create a boundary,
>> we shouldn't show that we have.
>
> So why do you show special UI for EV?

For historical reasons, i.e. It Was Like That When I Got Here.
(Similar to how getUserMedia does not (yet) require secure origins.)

>> * What's a stable cryptographic identity in the web PKI? Is it the
>> public key in the end-entity certificate, or the public key in any of
>> the issuing certificates?
>> * Or maybe the union of all keys?
>> * Or maybe the presence of any 1 key in the set?
>> * What about the sometimes weird and hard-to-predict certificate
>> path-building behavior across platforms?
>> * What about key rotation that happens legitimately?
>> * Do we convince CAs to issue name-constrained issuing certificates to
>> each site operator (with the constrained name being the origin's exact
>> hostname), that cert's key becomes the origin's key, and site
>> operators issue end entities from that?
>> ** There'd still be a need to re-issue that key, from time to time.

> It seems for same-origin checks where the origin is derived from a
> resource and not a URL, we could in fact do one or more of those,
> today. E.g. if https://example.com/ fetches https://example.org/image
> we'd check if they're same-origin and if their certificate matches.
> Now as connections grow more persistent this will likely be the case
> anyway, no?

Perhaps an origin's cryptographic identity might be stable over the
course of a page-load, or even over the course of a browsing session.
But we'd need a stronger guarantee of lifetime than that.

>> ** Could the TACK key be the origin key?
>
> Is TACK still going anywhere? The mailing list suggests it's dead.

But one could imagine it being resuscitated, if it were a way to get a
long-lived cryptographic identity for an origin.


Re: Indicators for high-security features

2014-09-22 Thread Chris Palmer
On Mon, Sep 22, 2014 at 5:56 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:

>> -- HTTP Strict Transport Security
>
> Yes, but I think this requirement shouldn't apply to subresources for
> the page to qualify, since top-level HSTS together with the "No mixed
> content" requirement mean that there's no sslstrip risk for embedded
> resources even if they are served from a non-HSTS CDN.

These days we're blocking loads of active mixed content, but passive
mixed content is still a concern to me. E.g. an attacker can mangle a
web app's UI pretty badly, including to perform attacks, if the app
gets its icons and buttons via SSLstrip-able sources.

>> -- HTTP Public Key Pinning
>
> I'm a bit worried about this one. I'd like the bar for this indicator
> to be such that it can motivate anyone with nginx to configure it
> right. This way, the new indicator could have a broad impact beyond
> just the largest sites. It's not clear to me if HPKP is practical for
> sites without Google/Twitter-level ops teams.

HPKP is indeed dangerous.

I don't anticipate any additional UI for it, let alone additional UI
that would motivate a not-ready-yet ops team to turn it on.

> It seems to me that it's at least currently impractical for small
> sites to get CAs to commit to issue future certs from a particular
> root or intermediate, so it seems to me that especially pinning an
> intermediate is hazardous unless you are big enough a customer of a CA
> to get commitments regarding future issuance practices.

Intermediates move slowly, and roots even more slowly. It's fairly
safe to assume that, for the lifetime of your end-entity cert, the CA
will still be operating, and that they can and will cross-sign in
cases where they re-key heavily-used issuing certs.

But, yeah, have a backup pin, and pin at various places in the
certificate chain. I'd advise people to look at
net/http/transport_security_state_static.json and consider what
Dropbox, Google, Twitter, and Tor have done, and why.

> It's unclear to me if HPKP makes it safe and practical to use without
> actually forming a business relationship with two CAs in advance
> (which would be impractical for many small sites). It seems to me that
> HPKP makes it possible to generate a backup end-entity key pair in
> advance of having it certified. However, the spec itself discourages
> end-entity pinning altogether and it's pretty scary to pin a key
> before you know for sure you can get it certified by a CA later.

I wouldn't say we discourage EE pinning; but I would discourage
pinning EEs *exclusively*.
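For reference, an HPKP pin is just the base64 encoding of the SHA-256 hash of a key's DER-encoded SubjectPublicKeyInfo, which is exactly why a backup pin can be computed for a key pair before any CA has certified it. A sketch (the SPKI byte strings are dummies standing in for real DER):

```python
import base64
import hashlib

def hpkp_pin(spki_der: bytes) -> str:
    """pin-sha256 value: base64(SHA-256(DER-encoded SubjectPublicKeyInfo))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

# Dummy bytes standing in for your current key's SPKI and an offline
# backup key's SPKI.
current_spki = b"current public key DER"
backup_spki = b"offline backup public key DER"

header = (
    'Public-Key-Pins: pin-sha256="%s"; pin-sha256="%s"; max-age=5184000'
    % (hpkp_pin(current_spki), hpkp_pin(backup_spki))
)
print(header)
```

The backup key can sit offline, unpinned to any certificate, until it is needed; the pin commits only to the key, not to any issuance.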

> (In some way, it seems like HPKP is the simplest thing that makes
> sense for Google, which has its own intermediate, but for the rest of
> us, being able to maintain a TACK-style signing key *to the side of*
> the CA-rooted chain would be better. What's the outlook of us
> supporting TACK or some other mechanism that allows pinning a
> site-specific signing key that's not part of the CA-rooted chain?)

I consider a backup pin to be enough like an "on the side" pin. But
you may not.

>> -- Certificate Transparency
>
> Are we planning to support CT now? (I'm not stating an opinion for or
> against. I'm merely surprised to see CT mentioned as if it was
> something we'd support, since I don't recall seeing previous
> indications that we'd support it.)

I devoutly hope Mozilla does support CT.


Re: Indicators for high-security features

2014-09-19 Thread Chris Palmer
On Fri, Sep 19, 2014 at 4:52 AM, Anne van Kesteren ann...@annevk.nl wrote:

>> Please keep in mind that the origin is the security boundary on the
>> web, and is defined as being (scheme, host, port).
>
> And optional additional data:
> https://html.spec.whatwg.org/multipage/browsers.html#origin

I haven't seen any origin checks lately that use any optional additional data.

>> Assuming we don't expand the definition of the origin, unless we
>> implement mixed-everything blocking — mixed EV & non-EV, mixed TLS 1.2
>> & 1.1, mixed AES-128 & AES-256, mixed pinned keys & non-pinned, et c.
>> — then I don't think we should make any increased promise to the user.
>> After all, the promise wouldn't be true.
>
> I'm not sure I follow. If there's mixed content you no longer get a
> lock at all in Firefox. Obviously we should not revert that.

My point is that UI indicators should reflect the reality of actual
technical security boundaries. Unless we actually create a boundary,
we shouldn't show that we have.

And yet, a hypothetical boundary between TLS 1.1 and TLS 1.2 would
almost certainly not fly, for compatibility reasons (as much as we all
might like to have such a boundary).

>> The hair I'd much rather split, by the way, is making each
>> cryptographic identity a separate origin. Ponder for a moment how
>> enjoyably impossible that will be...
>
> What are the issues?

* What's a stable cryptographic identity in the web PKI? Is it the
public key in the end-entity certificate, or the public key in any of
the issuing certificates?
* Or maybe the union of all keys?
* Or maybe the presence of any 1 key in the set?
* What about the sometimes weird and hard-to-predict certificate
path-building behavior across platforms?
* What about key rotation that happens legitimately?
* Do we convince CAs to issue name-constrained issuing certificates to
each site operator (with the constrained name being the origin's exact
hostname), that cert's key becomes the origin's key, and site
operators issue end entities from that?
** There'd still be a need to re-issue that key, from time to time.
* Do we use the web PKI to establish a distinct origin key?
** Could the TACK key be the origin key?

> (There's also an idea floating around about checking certificates
> first when doing a same-origin check, potentially allowing distinct
> origins that share a certificate through alternate names, to be
> same-origin. However, with CORS it might not really be needed
> anymore.)

That's terrifying. :)


Re: Indicators for high-security features

2014-09-18 Thread Chris Palmer
On Thu, Sep 18, 2014 at 5:15 PM,  diaf...@gmail.com wrote:

> Instead of trying to pile on more clutter to the lock/warning/globe states,
> how about letting the user determine the threshold of those states?
>
> The default would be what they are now, but perhaps in about:config you
> could set the lock state to require perfect forward secrecy, otherwise
> drop to a warning state.

In Chrome, we are (very) gradually ratcheting up the cipher
suite/other crypto parameter requirements. It has proven quite
fruitful. I can imagine a future in which non-PFS gets treated as
non-secure. But not just yet.

Even experts, in my experience, get hung up on the complexity of about:flags.


Re: Allow Redaction of issues detailed in BR Audit statements?

2014-08-26 Thread Chris Palmer
On Tue, Aug 26, 2014 at 5:18 PM, Matt Palmer mpal...@hezmatt.org wrote:

> On an unrelated point, I'd like to thank you, Kathleen, for the work you do
> in this area.  Going over the minutiae of audit reports can't be a
> particularly fun job, but it *is* a very necessary one, so thanks for being
> the one who does it.

And a hearty +1 to that!


Re: Proposal: Switch generic icon to negative feedback for non-https sites

2014-08-13 Thread Chris Palmer
FWIW, that's a misquote; I didn't write that.
On Aug 12, 2014 4:38 AM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

> [Apologies if you've seen this before, it looks like up to a week's worth
> of mail from here has been lost, this is a resend of the backlog]
>
> Chris Palmer pal...@google.com writes:
>
>> Firefox 31 data:
>>
>> on desktop the median successful OCSP validation took 261ms, and the 95th
>> percentile (looking at just the universe of successful ones) was over
>> 1300ms. 9% of all OCSP requests on desktop timed out completely and
>> aren't counted in those numbers.
>
> Do you have equivalent data for the TLS connect times?  In other words how
> much was TLS being slowed down by including OCSP?
>
> Peter.


Re: Proposal: Switch generic icon to negative feedback for non-https sites

2014-08-06 Thread Chris Palmer
On Wed, Aug 6, 2014 at 12:02 AM,  andrew.be...@gmail.com wrote:

> I'm all for pushing people onto SSL, and of course if you stigmatise
> non-secure connections the demand for SSL increases and CDNs will need to
> compete on their ability to support it at a reasonable cost. But there's a
> chicken and egg problem, to some extent.  Is there anything browser vendors
> can do to make SSL easier and cheaper across the board before punishing you
> for not using it?

The value proposition of CDNs has never been quite clear to me,
especially at volume and especially with any requirement of security.
If they choose to bilk people who ask for HTTPS, that just strengthens
the "rent your own rack on each continent" argument. But that's
another matter...

I don't know what browsers can do to make it easier for server
operators — I'm busy with Chrome; I don't work on Nginx or Apache.
There's work they need to do to make configuration easier.

That said, part of our activism campaign should probably involve
nagging server vendors to ship better configurations by default,
auto-generating keys and CSRs for each configured hostname/domain that
doesn't already have one, et c. The default configurations of a lot
servers are bad in a lot of ways, not even just HTTPS- or
security-related.

For getting certs, https://sslmate.com/ seems pretty good.


Re: Proposal: Switch generic icon to negative feedback for non-https sites

2014-07-22 Thread Chris Palmer
On Tue, Jul 22, 2014 at 2:00 PM, Brian Smith br...@briansmith.org wrote:

> Firefox's cert override mechanism uses a different pinning mechanism
> than the key pinning feature. Basically, Firefox saves a tuple
> (domain, port, cert fingerprint, isDomainMismatch,
> isValidityPeriodProblem, isUntrustedIssuer) into a database. When it
> encounters an untrusted certificate, it computes that tuple and tries
> to find a matching one in the database; if so, it allows the
> connection.

Interesting! Thanks for the clue.
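The override database Brian describes can be sketched as a simple set of tuples: a connection with an otherwise-untrusted certificate is allowed only when its exact error profile was accepted before. The fingerprint and hostname below are placeholders, and this is an illustration of the tuple-matching idea, not Firefox's actual code:

```python
# Each stored override records which specific errors the user accepted.
# Fields: (domain, port, cert_fingerprint, is_domain_mismatch,
#          is_validity_period_problem, is_untrusted_issuer)
overrides = {
    ("intranet.corp.example", 443, "PLACEHOLDER-FP", False, False, True),
}

def override_allows(domain, port, fingerprint, mismatch, expired, untrusted):
    """Allow the connection only if this exact tuple was stored earlier."""
    return (domain, port, fingerprint, mismatch, expired, untrusted) in overrides

# Same cert with the same single error the user accepted: allowed.
print(override_allows("intranet.corp.example", 443, "PLACEHOLDER-FP",
                      False, False, True))
# Same cert, but now *also* a hostname mismatch: not allowed.
print(override_allows("intranet.corp.example", 443, "PLACEHOLDER-FP",
                      True, False, True))
```

Because the error flags are part of the key, an override accepted for one kind of failure does not silently cover a certificate that later develops a different failure.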


Re: Exceptions to 1024-bit cert revocation requirement

2013-12-11 Thread Chris Palmer
On Wed, Dec 11, 2013 at 2:48 PM, Jeremy Rowley
jeremy.row...@digicert.com wrote:

> If you are granting more time, I have a whole bunch of customers who are not
> happy about the 2013 cutoff.  Extending it for some CAs is patently unfair
> to those of us who have taken a hard stance on the deadline and not
> requested extensions of time.  If you are granting some CAs an extension,
> you'll probably get a lot more requests from the rest of us.

Indeed, it would be unfair — and unwise.

No exceptions.