Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-07 Thread Roger Hågensen

On 2011-01-06 14:09, timeless wrote:

I'm kinda surprised that servers and CAs don't have better support for
reminding admins of this stuff.

I know for mozilla.org, nagios is responsible for warning admins.

The odd thing (to me) is that CAs make money selling certs, so one
would expect them to want to sell the renewed cert and get that new
booking by selling the new cert say 3-6 months before the old one
expires. And thus they're actually being customer oriented, providing
a useful service (possibly telling the customer about expired certs
they issued which are still running...).


This is why I like StartSSL.com so much (besides the free domain and 
email certs): the payment is actually for the authentication/certification 
process, the actual certs themselves are free, and you can issue as many 
certs as you need for a certain amount of time.
Besides being cheap, they also notify you a little while before the certs 
run out.


I know, I know, I'm almost sounding like an ad here, but StartCom, the 
company behind startssl.com, is leading by example and I wish other 
CAs would follow suit.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-07 Thread Glenn Maynard
On Fri, Jan 7, 2011 at 5:50 AM, Roger Hågensen resca...@emsai.net wrote:
 This is why I like StartSSL.com so much (besides the free domain and email
 certs): the payment is actually for the authentication/certification
 process, the actual certs themselves are free, and you can issue as many
 certs as you need for a certain amount of time.
 Besides being cheap, they also notify you a little while before the certs
 run out.

I gave it a try earlier, since it was mentioned.  It created my
account, rejected my CSR, and I got a message saying that I somehow
failed to create a login certificate, that I'd no longer be able to
log in, and according to the FAQ the only way to continue would be to
create a whole new account on a different email address and to ask
them to manually merge the accounts.  That's broken in countless ways;
no CA should have such a brittle, half-baked account system.

-- 
Glenn Maynard


Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-07 Thread Kornel Lesiński

On Fri, 07 Jan 2011 11:11:55 -, Glenn Maynard gl...@zewt.org wrote:


I gave it a try earlier, since it was mentioned.  It created my
account, rejected my CSR, and I got a message saying that I somehow
failed to create a login certificate, that I'd no longer be able to
log in, and according to the FAQ the only way to continue would be to
create a whole new account on a different email address and to ask
them to manually merge the accounts.  That's broken in countless ways;
no CA should have such a brittle, half-baked account system.


StartSSL uses client certificates to log in, which theoretically is a  
great idea, as account access (and thus the security of all its 
certificates) relies on strong cryptography, rather than some custom 
password-based mechanism.


In practice it's not so great, but that may not be StartSSL's fault so 
much as the complexity of certificates, the inflexibility of keygen, and 
the very rough implementations of it.


--
regards, Kornel Lesiński


Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-06 Thread timeless
On Thu, Jan 6, 2011 at 1:54 AM, Aryeh Gregor simetrical+...@gmail.com wrote:
 * You can typically only serve one domain per IP address, unless you
 can set up SNI (do all browsers support that yet?).

[1] Browsers with support for TLS server name indication:
* Internet Explorer 7 (Vista or higher, not XP) or later
* Mozilla Firefox 2.0 or later
* Opera 8.0 or later (the TLS 1.1 protocol must be enabled)
* Opera Mobile at least version 10.1 beta on Android
* Google Chrome (Vista or higher. XP on Chrome 6 or newer. OS X 10.5.7
or higher on Chrome 5.0.342.1 or newer)
* Safari 2.1 or later (Mac OS X 10.5.6 or higher and Windows Vista or higher)
* MobileSafari in Apple iOS 4.0 or later
* Windows Phone 7
* Maemo

So, basically the unsupported bits for SNI are:
iOS3 and below running Safari
 -- if I understand correctly [2], the first-generation iPod Touch [3]
(purchased roughly before September 9, 2008) and the original iPhone [4]
(purchased roughly before July 11, 2008) are the only two which can't
run iOS4
OS X 10.5.5 [5] and below running Safari
 -- if I understand correctly [6][7], PowerPC G4 computers with CPU
speeds below 867 MHz can't run 10.5 out of the box; these were obsoleted
around August 13, 2002
XP [8] running IE 7 or below
 -- users should upgrade to IE8, which is supported [9] (or to any other
browser)

For other desktop configurations (including the unsupported ones
listed above), users can use Firefox/Opera. For mobile configurations,
users can use SkyFire/Opera Mobile.

The coverage for SNI is thus, in fact, quite good.
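
For the curious, SNI is nothing more than the client naming the host it
wants inside the TLS handshake, so the server can pick the right cert
before any HTTP is spoken. A minimal sketch of watching it in action,
using Python's standard ssl module (the hostname is just an example):

import socket
import ssl

def fetch_cert_subject(hostname, port=443):
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        # server_hostname goes out as the SNI extension; a server hosting
        # several certs on one IP picks its certificate based on it.
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert().get("subject")

print(fetch_cert_subject("example.org"))

Point it at a host serving several names from one IP and you'll see the
certificate selected for that specific name.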

I can't speak for Symbian, but assuming I'm reading [10] correctly,
Symbian 1 would not have SNI, as there's a request against 417 [11] to
add it. Symbian 2 [12] offers WebKit 525 [10], which should be new
enough to include SNI (as that's roughly what's in Safari 3, which
includes it). This doesn't cover many older models, but Opera/SkyFire
should be available for most.

Similarly per [10], BlackBerry 6 [13] which is WebKit 534 should have
SNI. This of course doesn't cover many models, but Opera should be
available for most.

Probably worth doing is a study of SNI failure behavior. My experience
w/ mobile browsers and mobile users is that the warnings are ignored
anyway (especially on Symbian, where you're constantly bombarded with
stupid dialogs and quickly learn to "i-do-not-care" through them),
which means that your users are probably used to the problem. But once
they get to your SNI page, you can include a note to mobile users of
browsers which don't have SNI, explaining that if they want a more
secure experience they should switch to one of the browsers you know
work (the browsers are free, so the only cost to you is a quick test
and the only cost to the user is the download cost for a better
browser).

[1] http://en.wikipedia.org/wiki/Server_Name_Indication
[2] 
http://en.wikipedia.org/wiki/IOS_version_history#4.x:_Fourth_major_release_of_the_OS
[3] http://en.wikipedia.org/wiki/IPod_Touch#Models
[4] http://en.wikipedia.org/wiki/IPhone#Models
[5] http://en.wikipedia.org/wiki/Mac_OS_X_v10.5#Release_history
[6] http://en.wikipedia.org/wiki/Mac_OS_X_v10.5#Usage_on_unsupported_hardware
[7] http://en.wikipedia.org/wiki/Power_Mac_G4#Four-slot_models
[8] http://en.wikipedia.org/wiki/Windows_XP#Support_lifecycle
[9] http://en.wikipedia.org/wiki/Internet_Explorer_8#OS_requirement
[10] http://www.quirksmode.org/webkit.html
[11] https://lists.webkit.org/pipermail/webkit-unassigned/2006-June/011657.html
[12] http://en.wikipedia.org/wiki/Symbian#Version_history
[13] http://en.wikipedia.org/wiki/BlackBerry_OS#Current_versions


Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-06 Thread timeless
On Thu, Jan 6, 2011 at 1:54 AM, Aryeh Gregor simetrical+...@gmail.com wrote:
 * If your cert expires, or you misconfigure the site, or something else
 goes wrong, all your users get scary error messages.

This isn't limited to SNI. I saw one server which had its certificate
expire at the end of Dec 30, 2010 (i.e. it was expired the morning of
the last day of last year). Renewing certificates is scheduled
maintenance which needs to be done and *planned for* anyway.

I'm kinda surprised that servers and CAs don't have better support for
reminding admins of this stuff.

I know for mozilla.org, nagios is responsible for warning admins.
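
For anyone without nagios, a poor man's version of that reminder is a
short script run from cron; a sketch in Python (the hostname and the
30-day threshold are arbitrary):

import socket
import ssl
import time

def days_until_expiry(hostname, port=443):
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            not_after = tls.getpeercert()["notAfter"]
    # cert_time_to_seconds parses the 'Jan  7 12:00:00 2012 GMT' format
    return int((ssl.cert_time_to_seconds(not_after) - time.time()) // 86400)

for host in ["example.org"]:  # the hosts you administer
    days = days_until_expiry(host)
    if days < 30:
        print("WARNING: cert for %s expires in %d days" % (host, days))

Mail yourself the output weekly and you have most of what a CA reminder
service would amount to.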

The odd thing (to me) is that CAs make money selling certs, so one
would expect them to want to sell the renewed cert and get that new
booking by selling the new cert say 3-6 months before the old one
expires. And thus they're actually being customer oriented, providing
a useful service (possibly telling the customer about expired certs
they issued which are still running...).


Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-06 Thread Aryeh Gregor
On Wed, Jan 5, 2011 at 7:47 PM, Glenn Maynard gl...@zewt.org wrote:
 Javascript injection is a harder problem, for example: it isn't
 prevented by SSL, can persist without maintenance (unlike an MITM
 attack), can be introduced untraceably and without any special network
 access (you don't need to get in the middle), and so in practice is
 much more common than MITM attacks.

An XSS attack can still get the IP address, and thus usually a rough
location, so most of what I said still holds.

 It's bothered me for a long time that browsers treat self-signed
 certificates as *less* secure than plaintext, which is nonsense.

Lots of people have written extensive explanations of why browsers do
this.  Here's one I submitted as a comment to lwn.net a while back,
maybe it will clear things up: http://lwn.net/Articles/413600/

 By the way, another real-world issue with SSL is that it's
 considerably more computationally expensive: handling encrypted
 requests takes much more CPU, especially for high-bandwidth servers.
 Not every service can afford to buy extra or more powerful servers to
 handle this.

Apparently this isn't a real issue anymore in practice.  CPUs are fast
enough that SSL is no big deal.  Google saw only a small load increase
when it turned on HTTPS by default for all Gmail users:
http://www.imperialviolet.org/2010/06/25/overclocking-ssl.html

On Thu, Jan 6, 2011 at 12:21 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 How do you revoke it?  Once someone knows where you are, they know it. You
 can't make them stop knowing it.

In the context of an attacker who has some particular notion of who
you are and wants to connect that to your location, yes.  But is this
likely to be a common threat?  It's all very well to consider worst
cases, but the default convenience/security tradeoff has to be
calculated according to the typical case, not the worst case.  Typical
users are the ones who determine market share, and if the web platform
refuses to add features that would benefit the typical user because
they would hurt atypical users, typical users will choose other
platforms.

The web platform is so intrinsically convenient that it can remain
competitive with conventional applications while erring far on the
side of security in convenience/security tradeoffs.  But comparably
convenient platforms like Flash or mobile app stores will gain more
users if the web trades away too much convenience by comparison.
Ideally we should try to accommodate all users' security needs without
sacrificing convenience, but in cases where that's not possible,
atypical users will inevitably have to reconfigure their browsers.

Of course, maybe I'm just missing the cases where a reasonably typical
user (not, e.g., the target of malicious governments, or stalkers who
happen to be hackers) would be attacked in a fashion where anyone
would be interested in learning their location once and remembering
it.

 http://www.technologyreview.com/web/26981/page1/ might be worth reading.

Users who use Tor for their web browsing are decidedly atypical, and
can be expected to remain so given the inherent performance penalty it
imposes.


Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-05 Thread Roger Hågensen

On 2011-01-05 06:10, Boris Zbarsky wrote:

On 1/4/11 10:51 PM, Glenn Maynard wrote:

On Tue, Jan 4, 2011 at 10:53 PM, Boris Zbarskybzbar...@mit.edu  wrote:

Note that you keep comparing websites to desktop software, but desktop 
software typically doesn't change out from under the user (possibly in 
ways the original software developer didn't intend).  The desktop apps 
that do update themselves have a lot of checks on the process precisely 
to avoid issues like MITM injection of trojaned updates and whatnot.  So 
in practice, they have a setup where you make a trust decision once, and 
then the code that you already trusted verifies signatures on every 
change to itself.


HTTPS already prevents MITM attacks and most others


I've yet to see someone suggest restricting the asking UI to https 
sites (though I think it's something that obviously needs to happen).  
As far as I can tell, things like browser geolocation prompts are not 
thus restricted at the moment.



the major attack vector they don't prevent is a compromised server.


Or various kinds of cross-site script injection (which you may or may 
not consider as a compromised server).



I think the main difference is that the private keys needed to sign
with HTTPS are normally located on the server delivering the scripts,
whereas signed updates can keep their private keys offline.


Or fetch them over https from a server they trust sufficiently (e.g. 
because it's very locked down in terms of what it allows in the way of 
access and what it serves up), actually; I believe at least some 
update mechanisms do just that.


That's not a model web apps can mimic: all ways to execute scripts, in 
both Javascript files and inline in HTML, would need to be signed, which 
is impossible with templated HTML.


Agreed, but that seems like a problem for actual security here.


You don't really know that an installer you download from a server is
valid, either.  Most of the time--for most users and most
software--you have to take it on faith that the file on the server
hasn't been compromised.




Considering the fact that StartCom ( https://www.startssl.com/ ) offers 
free domain-based certificates that all major browsers support now 
(IE/Microsoft was a bit slow on this initially),
there is no longer any excuse not to make use of https for downloading 
securely, logging in/registering (forums etc.), or using secure web apps.
So leveraging https in some way would be the best solution here, and all 
the https code is already in the browser code bases anyway.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-05 Thread Roger Hågensen

On 2011-01-05 01:07, Seth Brown wrote:

I couldn't agree more that we should avoid turning this into Vista's UAC.


The issue with UAC is not UAC.
UAC (especially the more diligent one in Vista) merely exposed 
programmers and software that expected raised privileges when they 
actually did not need them.
Linux has had the equivalent of UAC pretty much from day one, so 
programmers and software have played nice from day one.
And UAC is not really security, as it does not protect the user; UAC is 
intended to ensure that a user session won't fuck up anything else, like 
other accounts or admin sessions or the OS/kernel.

UAC protects the system from potentially rogue user accounts.
So it's a shame that UAC's introduction in Vista brought such a stigma 
upon it, as I actually like it.


Myself, I have a fully separate normal user account (rather than the 
split-token one that most here probably use), so I actually have to 
enter the admin password each time,
but I do not find it annoying, and I actually develop under this normal 
user account.
Only system updates or admin stuff need approval, and the odd piece of 
software (but I try to avoid those instead).
Running or installing software need not bring up any UAC at all; if it 
does, it is simply lazy coding by the developers,
and any webapp stuff should follow the same example in this case.

UAC is meant to help isolate an incident and prevent other parts of a 
system, or other users/accounts, from being affected,
so a webapp should be secured under those same principles.
Considering all the issues with cross-site exploits and so on, it's 
obvious that the net is in dire need of some of those core principles,
so please do not dismiss UAC so easily due to how it's perceived, but 
rather judge it by what it actually is.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-05 Thread Roger Hågensen

On 2011-01-04 22:59, Seth Brown wrote:

That being said, granting access to a particular script instead of an
entire site sounds like a reasonable security requirement to me. As
does using a hash to verify that the script you granted permission to
hasn't changed.

-Seth


A hash (any hash in fact, even secure ones) can only guarantee that 
two pieces of data are different!
A hash can NEVER guarantee that two pieces of data are the same; this is 
impossible.
A hash can only be used to make a quick assumption that the data 
probably are the same,
while avoiding an expensive byte-by-byte comparison in the cases where 
the hashes differ.
If the hashes are the same, then only a byte-by-byte comparison can 
guarantee the data are the same.

Any cryptography expert worth their salt will agree with the statements above.
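
To spell the comparison logic out concretely, a small sketch (SHA-256
chosen arbitrarily):

import hashlib

def definitely_different(a, b):
    # Different digests *prove* the inputs differ; this is the only
    # guarantee a hash gives by itself.
    return hashlib.sha256(a).digest() != hashlib.sha256(b).digest()

def probably_same(a, b):
    # Equal digests only make sameness overwhelmingly likely; the sole
    # absolute guarantee is the byte-by-byte comparison, i.e. a == b.
    return hashlib.sha256(a).digest() == hashlib.sha256(b).digest()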

HTTPS, which is continually evolving, is a much better solution than 
just relying on hashes and plain HTTP.
I cringe each time I see a secure script, delivered over HTTP, whose 
purpose is to encrypt the password you enter and send it to the website.
HTTP authentication, however, isn't so bad, if only the damn plaintext 
Basic support were fully deprecated AND disallowed;
then again, now that you can get domain certificates for free that are 
supported by the major browsers, HTTP authentication is kind of being 
overshadowed by HTTPS, which is fine I guess.


Just please don't slap a hash on it and think it's safe; that's all 
I'm saying, really.



--
Roger Rescator Hågensen.
Freelancer - http://www.EmSai.net/



Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-05 Thread Boris Zbarsky

On 1/5/11 10:07 AM, Roger Hågensen wrote:

there is no longer any excuse not to make use of https for downloading
securely or logging in/registering (forums etc), or using secure web
apps.


Tell that to facebook?  They seem to feel there is an excuse for it, and 
they definitely have a cert.


-Boris


Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-05 Thread Aryeh Gregor
On Wed, Jan 5, 2011 at 1:34 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 I wouldn't.  Just because a user trusts some particular entity to know
 exactly where they are, doesn't mean they trust their stalker with that
 information.  I picked geolocation specifically, because that involves an
 irrevocable surrender of personal information, not just annoyance like
 disabling the context menu.

It's not really irrevocable.  A MITM only has access to the info as
long as he's conducting the MITM.  As soon as the attack ends, the
attacker stops getting info.  Moreover, anyone who's intercepting your
Internet traffic could probably make a good guess at your location
anyway, such as by looking up your IP address or triangulating
latency.  So I don't think that it's such a big deal if MITMs can
compromise geolocation, relatively speaking.

On Wed, Jan 5, 2011 at 11:07 AM, Roger Hågensen resca...@emsai.net wrote:
 Considering the fact that StartCOM ( https://www.startssl.com/ ) offers free
 domain based certificates that all major browsers support now (IE/Microsoft
 was a bit slow on this initially),
 there is no longer any excuse not to make use of https for downloading
 securely or logging in/registering (forums etc), or using secure web
 apps.

There are lots of reasons.  Getting a cert is only the start.  Other
problems with HTTPS include:

* You can typically only serve one domain per IP address, unless you
can set up SNI (do all browsers support that yet?).  This is a blocker
issue for sites that are too small to have their own IPv4 address,
like at a big shared web host.
* Every connection involves extra round-trips, which hurts page response time.
* If your cert expires, or you misconfigure the site, or something else
goes wrong, all your users get scary error messages.  In some cases
the browser will even refuse to let them proceed at all.  (Chrome does
this for revoked certificates and I've run into it a couple of times.
Of course, I wasn't submitting sensitive information to the site, so I
just used another browser.)

Overall, HTTPS in practice is fragile and a pain to set up.  These
problems mean that it's common to see scary errors due to
misconfiguration on even extremely large sites, like
https://amazon.com/.  It's just not worth it for most people.  Which
is a shame, but there you have it.

On Wed, Jan 5, 2011 at 11:29 AM, Roger Hågensen resca...@emsai.net wrote:
 A hash (any hash in fact, even secure ones) can only guarantee that two
 pieces of data are different!
 A hash can NEVER guarantee that two pieces of data are the same, this is
 impossible.

It's logically impossible, but that doesn't mean it's computationally
impossible.  Whether a hashing algorithm exists such that no efficient
algorithm can find a collision with non-negligible probability is, as
far as I know, an open question.  In practice, hash functions such as
SHA256 can be regarded as secure for the present time -- if there are
collisions in this sort of thing, a lot of stuff will break.  Like
HTTPS, as it happens (we've already seen some fallout from MD5
collisions).


Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-05 Thread Glenn Maynard
On Wed, Jan 5, 2011 at 6:54 PM, Aryeh Gregor simetrical+...@gmail.com wrote:
 It's not really irrevocable.  A MITM only has access to the info as
 long as he's conducting the MITM.  As soon as the attack ends, the
 attacker stops getting info.  Moreover, anyone who's intercepting your
 Internet traffic could probably make a good guess at your location
 anyway, such as by looking up your IP address or triangulating
 latency.  So I don't think that it's such a big deal if MITMs can
 compromise geolocation, relatively speaking.

MITM is just one attack, and the solution is the easiest to define
(SSL), even if there are practical issues that should be improved.

Javascript injection is a harder problem, for example: it isn't
prevented by SSL, can persist without maintenance (unlike an MITM
attack), can be introduced untraceably and without any special network
access (you don't need to get in the middle), and so in practice is
much more common than MITM attacks.

(Let's not get too caught up in how serious a geolocation attack is;
it's just one example, anyway.  The main thing that makes it
particularly notable is that it's actually a live, deployed API.)


 * If your cert expires, or you misconfigure the site, or something else
 goes wrong, all your users get scary error messages.  In some cases
 the browser will even refuse to let them proceed at all.  (Chrome does
 this for revoked certificates and I've run into it a couple of times.
 Of course, I wasn't submitting sensitive information to the site, so I
 just used another browser.)

It's bothered me for a long time that browsers treat self-signed
certificates as *less* secure than plaintext, which is nonsense.
Despite being vulnerable to MITM attacks, even an untrusted
certificate helps prevent passive sniffing attacks.  Browsers should
accept self-signed certificates and present them as an insecure
connection, as if they were simply HTTP, rather than showing warnings.

As for revoked certificates, while they're a major red flag--they're a
category of certificate issues that *should* show large, scary
warnings--it's completely unacceptable for a browser to actually
*refuse* to allow the user to proceed.

By the way, another real-world issue with SSL is that it's
considerably more computationally expensive: handling encrypted
requests takes much more CPU, especially for high-bandwidth servers.
Not every service can afford to buy extra or more powerful servers to
handle this.

-- 
Glenn Maynard


Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-05 Thread Boris Zbarsky

On 1/5/11 3:54 PM, Aryeh Gregor wrote:

On Wed, Jan 5, 2011 at 1:34 AM, Boris Zbarskybzbar...@mit.edu  wrote:

I wouldn't.  Just because a user trusts some particular entity to know
exactly where they are, doesn't mean they trust their stalker with that
information.  I picked geolocation specifically, because that involves an
irrevocable surrender of personal information, not just annoyance like
disabling the context menu.


It's not really irrevocable.


How do you revoke it?  Once someone knows where you are, they know it. 
You can't make them stop knowing it.



A MITM only has access to the info as
long as he's conducting the MITM.


The above concern was in the context of site bugs allowing script 
injection of various sorts, not just MITM.



As soon as the attack ends, the
attacker stops getting info.  Moreover, anyone who's intercepting your
Internet traffic could probably make a good guess at your location
anyway, such as by looking up your IP address or triangulating
latency.


http://www.technologyreview.com/web/26981/page1/ might be worth reading.

-Boris



Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-04 Thread Seth Brown
When you download and run a program you are placing the same level of
trust in a website (unless the program is also distributed by an
additional trusted site and you can verify the one you have is the
same) as you would when allowing them to access one of your devices.

Therefore, device element access should require the same level of
confirmation as installing a downloaded program.

That being said, granting access to a particular script instead of an
entire site sounds like a reasonable security requirement to me. As
does using a hash to verify that the script you granted permission to
hasn't changed.
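
A rough sketch of that kind of check, with a made-up URL and a made-up
pinned digest (and glossing over where the pin is stored and how it gets
updated):

import hashlib
import urllib.request

SCRIPT_URL = "https://example.org/device-access.js"  # hypothetical
# Digest recorded when the user first granted permission (made up here).
PINNED_SHA256 = "0" * 64

def script_unchanged(url, pinned_hex):
    body = urllib.request.urlopen(url).read()
    return hashlib.sha256(body).hexdigest() == pinned_hex

# A browser would re-prompt (or drop the grant) whenever this is False.
print(script_unchanged(SCRIPT_URL, PINNED_SHA256))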

-Seth


Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-04 Thread Glenn Maynard
On Tue, Jan 4, 2011 at 4:59 PM, Seth Brown lear...@gmail.com wrote:
 When you download and run a program you are placing the same level of
 trust in a website (unless the program is also distributed by an
 additional trusted site and you can verify the one you have is the
 same) as you would when allowing them to access one of your devices.

 Therefore, device element access should require the same level of
 confirmation as installing a downloaded program.

 That being said, granting access to a particular script instead of an
 entire site sounds like a reasonable security requirement to me. As
 does using a hash to verify that the script you granted permission to
 hasn't changed.

The issue of handling elevated permissions for scripts is a difficult
one, and I don't have a complete answer either, but re-confirming
every time the slightest change is made server-side is no solution.
Users aren't diffing scripts and verifying changes to see whether they
want to continue to grant permission.  Users aren't developers, and
most developers won't waste their time doing that, either (never mind
the issue of obfuscated Javascript code).

This would have exactly the same result as Vista's horrible UAC
mechanism: not only asking the user to confirm something he can't be
expected to understand, but asking in a constant, never-ending stream,
to the point where users either click yes without reading, or figure
out how to disable the prompt entirely (the worst end result possible,
if it causes a permissive default).

At some point, I do strongly believe that web apps should be able to
request elevated permission.  Many tasks that are still the domain of
native applications are stuck that way only because of security issues
like this, not because of any technical limitations of HTML or
Javascript.  This won't change without a reasonable security
mechanism--but asking the user every time a script changes is not an
answer.

-- 
Glenn Maynard


Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-04 Thread Seth Brown
I couldn't agree more that we should avoid turning this into Vista's UAC.

Maybe developers could make changes infrequent enough that users
wouldn't be bothered very often? They could encapsulate the device
access logic into one .js file that shouldn't be regularly changed.

Another option is to make the default security setting simply list
changes to scripts with device access in the device-access preferences
pane, and offer a higher security setting that would send alert popups
to notify the user of changes.

You could also use a third-party, CA-like entity to verify your scripts
(this option would naturally have to develop over time).

-Seth


On Tue, Jan 4, 2011 at 5:20 PM, Glenn Maynard gl...@zewt.org wrote:
 On Tue, Jan 4, 2011 at 4:59 PM, Seth Brown lear...@gmail.com wrote:
 When you download and run a program you are placing the same level of
 trust in a website (unless the program is also distributed by an
 additional trusted site and you can verify the one you have is the
 same) as you would when allowing them to access one of your devices.

 Therefore, device element access should require the same level of
 confirmation as installing a downloaded program.

 That being said, granting access to a particular script instead of an
 entire site sounds like a reasonable security requirement to me. As
 does using a hash to verify that the script you granted permission to
 hasn't changed.

 The issue of handling elevated permissions for scripts is a difficult
 one, and I don't have a complete answer either, but re-confirming
 every time the slightest change is made server-side is no solution.
 Users aren't diffing scripts and verifying changes to see whether they
 want to continue to grant permission.  Users aren't developers, and
 most developers won't waste their time doing that, either (never mind
 the issue of obfuscated Javascript code).

 This would have exactly the same result as Vista's horrible UAC
 mechanism: not only asking the user to confirm something he can't be
 expected to understand, but asking in a constant, never-ending stream,
 to the point where users either click yes without reading, or figure
 out how to disable the prompt entirely (the worst end result possible,
 if it causes a permissive default).

 At some point, I do strongly believe that web apps should be able to
 request elevated permission.  Many tasks that are still the domain of
 native applications are stuck that way only because of security issues
 like this, not because of any technical limitations of HTML or
 Javascript.  This won't change without a reasonable security
 mechanism--but asking the user every time a script changes is not an
 answer.

 --
 Glenn Maynard



Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-04 Thread Glenn Maynard
On Tue, Jan 4, 2011 at 7:07 PM, Seth Brown lear...@gmail.com wrote:
 I couldn't agree more that we should avoid turning this into Vista's UAC.

 Maybe developers could make changes infrequent enough that users
 wouldn't be bothered very often? They could encapsulate the device
 access logic into one .js file that shouldn't be regularly changed.

Please don't restrict my ability to update my software with an
annoyingly-designed security system.  Whether I believe that rapid
updates or slow, well-tested updates are a better model for my web
app, I shouldn't be forced into one or the other because of a security
model that annoys the user every time I change something.

And: it still doesn't help.  Asking a user whether changes to a
Javascript file are okay is meaningless.  Regular users don't know
Javascript; there's no way they can know whether to accept a change or
not.  No general security model can be built around requiring the user
to understand the technical issues behind the security.

-- 
Glenn Maynard


Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-04 Thread Boris Zbarsky

On 1/4/11 6:15 PM, Glenn Maynard wrote:

 No general security model can be built around requiring the user
to understand the technical issues behind the security.


Agreed.

At the same time no general security model should be built around 
requiring users to make decisions based on no information.


So in brief, asking the user is just a bad security model...

Note that you keep comparing websites to desktop software, but desktop 
software typically doesn't change out from under the user (possibly in 
ways the original software developer didn't intend).  The desktop apps 
that do update themselves have a lot of checks on the process precisely 
to avoid issues like MITM injection of trojaned updates and whatnot.  So 
in practice, they have a setup where you make a trust decision once, and 
then the code that you already trusted verifies signatures on every 
change to itself.
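
For concreteness, the verify-signatures-on-every-change step could look
something like this sketch (assuming an Ed25519 key pair and the
third-party pyca/cryptography package; in reality the private key stays
on the vendor's offline signer, never next to the verifier):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# One-time trust decision: the vendor's public key ships with the app.
# A throwaway pair is generated here just so the sketch runs end-to-end.
signing_key = Ed25519PrivateKey.generate()
trusted_pubkey = signing_key.public_key()

update = b"new version of the code"
signature = signing_key.sign(update)  # done vendor-side, once per release

def update_is_authentic(blob, sig):
    try:
        trusted_pubkey.verify(sig, blob)
        return True
    except InvalidSignature:
        return False

print(update_is_authentic(update, signature))                # True
print(update_is_authentic(update + b"tampered", signature))  # False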


Perhaps we need infrastructure like that for websites; I'm not quite 
sure how to make it work, though, since the code that the user trusted 
once is not known to still be ok, unlike the desktop app case.


-Boris


Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-04 Thread Glenn Maynard
On Tue, Jan 4, 2011 at 10:53 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 Note that you keep comparing websites to desktop software, but desktop
 software typically doesn't change out from under the user (possibly in ways
 the original software developer didn't intend).  The desktop apps that do
 update themselves have a lot of checks on the process precisely to avoid
 issues like MITM injection of trojaned updates and whatnot.  So in practice,
 they have a setup where you make a trust decision once, and then the code
 that you already trusted verifies signatures on every change to itself.

HTTPS already prevents MITM attacks and most others; the major attack
vector they don't prevent is a compromised server.

I think the main difference is that the private keys needed to sign
with HTTPS are normally located on the server delivering the scripts,
whereas signed updates can keep their private keys offline.  That's
not a model web apps can mimic: all ways to execute scripts, in both
Javascript files and inline in HTML, would need to be signed, which is
impossible with templated HTML.

 Perhaps we need infrastructure like that for websites; I'm not quite sure
 how to make it work, though, since the code that the user trusted once is
 not known to still be ok, unlike the desktop app case.

You don't really know that an installer you download from a server is
valid, either.  Most of the time--for most users and most
software--you have to take it on faith that the file on the server
hasn't been compromised.  But, yes, you only have to do that once with
auto-updating systems, not on every update.

-- 
Glenn Maynard


Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-04 Thread Boris Zbarsky

On 1/4/11 10:51 PM, Glenn Maynard wrote:

On Tue, Jan 4, 2011 at 10:53 PM, Boris Zbarskybzbar...@mit.edu  wrote:

Note that you keep comparing websites to desktop software, but desktop
software typically doesn't change out from under the user (possibly in ways
the original software developer didn't intend).  The desktop apps that do
update themselves have a lot of checks on the process precisely to avoid
issues like MITM injection of trojaned updates and whatnot.  So in practice,
they have a setup where you make a trust decision once, and then the code
that you already trusted verifies signatures on every change to itself.


HTTPS already prevents MITM attacks and most others


I've yet to see someone suggest restricting the asking UI to https sites 
(though I think it's something that obviously needs to happen).  As far 
as I can tell, things like browser geolocation prompts are not thus 
restricted at the moment.



the major attack vector they don't prevent is a compromised server.


Or various kinds of cross-site script injection (which you may or may 
not consider as a compromised server).



I think the main difference is that the private keys needed to sign
with HTTPS are normally located on the server delivering the scripts,
whereas signed updates can keep their private keys offline.


Or fetch them over https from a server they trust sufficiently (e.g. 
because it's very locked down in terms of what it allows in the way of 
access and what it serves up), actually; I believe at least some update 
mechanisms do just that.



That's not a model web apps can mimic: all ways to execute scripts, in both
Javascript files and inline in HTML, would need to be signed, which is
impossible with templated HTML.


Agreed, but that seems like a problem for actual security here.


You don't really know that an installer you download from a server is
valid, either.  Most of the time--for most users and most
software--you have to take it on faith that the file on the server
hasn't been compromised.


That really depends.  Publishing checksums is not all that uncommon. 
The point is that at least the remote possibility of due diligence on 
the user's part exists here.  So far, for web sites, it doesn't.



But, yes, you only have to do that once with auto-updating systems, not on 
every update.


Indeed.

-Boris


Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-04 Thread Glenn Maynard
On Wed, Jan 5, 2011 at 12:10 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 HTTPS already prevents MITM attacks and most others

 I've yet to see someone suggest restricting the asking UI to https sites
 (though I think it's something that obviously needs to happen).  As far as I
 can tell, things like browser geolocation prompts are not thus restricted at
 the moment.

Well, there are at least two broad classes of elevated privileges:
things which are clearly useful to web pages but are disallowed or
limited because they're too easily misused, and things with more
serious security implications.  Fullscreening, mouse capturing,
stopping the context menu, bypassing local storage quotas, etc. are in
the former category.  Unrestricted file and network access (accepting
network connections for direct peer-to-peer connections, UDP) is in
the latter category.

Stricter requirements like SSL make more sense for the latter case.
I'd put geolocation squarely in the first, lesser group.

Unblocking the lesser case is probably much easier: allow elevating
a site to permit those things which are useful, and which are at worst
a nuisance if a script is hijacked.

 the major attack vector they don't prevent is a compromised server.

 Or various kinds of cross-site script injection (which you may or may not
 consider as a compromised server).

I suppose this is analogous to buffer overflows in native code.

-- 
Glenn Maynard


Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-04 Thread Boris Zbarsky

On 1/5/11 12:29 AM, Glenn Maynard wrote:

Stricter requirements like SSL make more sense for the latter case.
I'd put geolocation squarely in the first, lesser group.


I wouldn't.  Just because a user trusts some particular entity to know 
exactly where they are, doesn't mean they trust their stalker with that 
information.  I picked geolocation specifically, because that involves 
an irrevocable surrender of personal information, not just annoyance 
like disabling the context menu.



Or various kinds of cross-site script injection (which you may or may not
consider as a compromised server).


I suppose this is analogous to buffer overflows in native code.


As opposed to a virus infection (which would be similar to a compromised 
server), say?  Yes, that seems like a good analogy.  One difference is 
that buffer overflows are primarily a problem insofar as you don't 
control your input.  With a website, you never control your input: 
anyone can point the user to any URL on your site.  Even URLs you didn't 
think existed.


-Boris



Re: [whatwg] whatwg Digest, Vol 82, Issue 10

2011-01-04 Thread Glenn Maynard
On Wed, Jan 5, 2011 at 1:34 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 I wouldn't.  Just because a user trusts some particular entity to know
 exactly where they are, doesn't mean they trust their stalker with that
 information.  I picked geolocation specifically, because that involves an
 irrevocable surrender of personal information, not just annoyance like
 disabling the context menu.

It's a judgement call, of course; some things are easier to categorize
than others.

Geolocation seems to sit somewhere in the middle: some people don't
care if their location is public, and others care a lot.  By
comparison, *no* informed user would want to give every website
unrestricted local file access; hijackable elevated file permissions
are an inherently critical security failure.

-- 
Glenn Maynard