It's debatable whether those are actually facts, but perhaps some perspective will help the conversation. I'll use this case as a launching point: that is certainly a true statement, and I would take it a step further and say the user should not be expected to check the address bar under normal circumstances. Or, to put it differently, any security feature which requires the user to pay close attention to the address bar should be considered ineffectual.

That said, there is a use case worth considering here: when the page being viewed doesn't look right. Examples include when I expected site A but ended up at B, or when I go to log in at C but instead see what looks like a phishing page. In such cases, seeing the organization info and so forth can be useful. Even if the best the browser can do is say "this site is owned by Google, although I really can't confirm it", there is utility in that. It just might give me a fighting chance at being secure -- which is not to say the alternative is no chance, but that my ability to make secure decisions is diminished without it.
On Mon, Oct 27, 2014 at 10:58 AM, John Nagle <[email protected]> wrote:
> It's appropriate for browsers to show that new information with
> users. In the browser, there are two issues: 1) detecting OV
> certs, which requires a list of per-CA OIDs, and 2) displaying
> something in the GUI.

If users perceive the new information (and that's a big if), what do you expect that they will do with it? While formulating your response, please keep these facts in mind:

* Users understand their task well enough to complete it, but they are also busy and distracted (including by security indicators and their numerous false positives). Nobody in the world understands 100% of the ins and outs of X.509 and TLS; normal people have no chance and should not have to. X.509-style PKI is an engineering failure in part because of its absurd complexity.

* Users are, quite reasonably, focused on the viewport. After all, that's where the content is and where the task is. Many people simply never see the Location Bar or its security indicators.

* The only security boundary on the web is the origin: the tuple (scheme, host, port).

* URLs are incredibly hard to parse, both for engineers (search the web for the hundreds of attempts to parse URLs with regular expressions!) and for normal people.

* The only part of the origin that users understand is the hostname, and it's better if the hostname is just the effective TLD plus one label below it (e.g. example + co.sg, or example + com). Long hostnames look phishy.

* Users who look away from the viewport have a chance to understand 1 bit of security status information: "Secure" or "Not secure". Currently, the guaranteed-not-safe schemes like http, ws, and ftp are the only ones guaranteed to never incur any warning or bad indicator, leading people to reasonably conclude that they are safe. Fixing that is the/a #1 priority for me; it ranks far higher than ever-more-fine-grained noise about organization names, hostnames, OV/DV/EV, and so on.
* You can try to build a square by cramming a bunch of different Zooko's Triangles together, but it's probably going to be a major bummer. After all, that's the status quo; why would more triangles help?

* We have to design products that work for most people in the world, most of the time, and which are not egregiously unsafe or egregiously hard to understand. It's good to satisfy small populations of power users if we can, but not at the expense of normal everyday use.

* There are some threat models for which no defense can be computed. For example, attempts to get to the "true" business entity, and to ensure that it is not a proxy for some service behind it, start to look a lot like remote attestation. RA is not really possible even on closed networks; on the internet it's really not happening.
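To make the origin point above concrete, here is a minimal Python sketch using the standard library's urllib.parse (the example hostnames are hypothetical, and a real browser's origin computation is more involved, but the (scheme, host, port) tuple is the essential idea):

```python
from urllib.parse import urlsplit

# Default ports per scheme, so implicit and explicit defaults compare equal.
DEFAULT_PORTS = {"http": 80, "https": 443, "ws": 80, "wss": 443}

def origin(url: str) -> tuple:
    """Return the (scheme, host, port) tuple that is the web's only
    security boundary. Missing ports are filled with the scheme default."""
    parts = urlsplit(url)
    port = parts.port if parts.port is not None else DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

# Explicit default port vs. implicit: same origin.
assert origin("https://example.com/login") == origin("https://example.com:443/a?b=c")

# Same host, different scheme: different origin.
assert origin("http://example.com/") != origin("https://example.com/")

# Why long hostnames look phishy: the owning label is the rightmost one,
# not the familiar prefix a user reads first.
assert origin("https://accounts.google.com.evil.example/")[1] == \
    "accounts.google.com.evil.example"
```

Note that the deceptive-hostname case is exactly why a real parser beats a regex: the whole string after the scheme, down to the rightmost labels, is the host, no matter how trustworthy its prefix looks.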
_______________________________________________
dev-security-policy mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-security-policy

