On Wed, 28 Dec 2005 04:18:52 +0600, Hallvord R M Steen <[EMAIL PROTECTED]> wrote:

Sorry to be slow at responding, Christmas and all that..

Merry Christmas and happy New Year to you!

1. The entire thing has to degrade SAFELY in existing browsers.

I'm not absolutely sure this is a requirement, since it improves on
today's situation which *is* no security at all once you include a
script from another server.
In this context "degrading safely" means "not being
backward-compatible". You can't add a safejavascript: uri scheme
without breaking backwards compatibility, do server-side sniffing and
send different code to different browsers by UA-name, which in itself
adds so many complications that it is a security problem in its own
right.

Safe degradation means that untrusted scripts should not be executed on older browsers at all. We don't trust an external script, and if we can't execute it in such a way that it doesn't break anything, then we shouldn't execute it at all.

2. The site author has to take care that the "sandbox" attribute is
included in every <script> element, even in user-supplied code.

Yes. I agree that perhaps an element is a better idea, so that
everything inside could live in its own environment.

In all cases the limitation would apply only to the thread created by
that SCRIPT tag. Functions defined in those scripts might be called
later and would run with normal privileges.

This is dangerous, too, because a malicious script can try to redefine
common JS functions like window.alert() to do something bad.

Yes, but origin-checking every function is too complex implementation-wise.

JS already has origin-checking in the sense that every function is bound to its parent namespace (class, window, whatever). No extra origin-checking is required beyond that. Functions inside the sandbox are just bound to their isolated namespace, just like normal functions are bound to the window namespace.
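
To illustrate the binding I mean, here is a rough sketch in plain JavaScript; sandboxGlobal and makeSandboxedFunction are names I made up for this example, not a proposed API:

  // Pretend the browser gives each <sandbox> its own global object
  // instead of window (purely hypothetical illustration).
  var sandboxGlobal = {
    alert: function (msg) { /* restricted replacement for window.alert */ }
  };

  // A function defined "inside" the sandbox closes over that object...
  function makeSandboxedFunction(global) {
    return function () {
      global.alert("I only see the sandbox namespace");
    };
  }
  var sandboxedFn = makeSandboxedFunction(sandboxGlobal);

  // ...so even if the outer page calls it later, its names still
  // resolve against the sandbox namespace, not against window.
  sandboxedFn();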

2.2. If the <sandbox> has a domain="..." attribute, then the scripts
inside the sandbox have access to cookies from the specified domain, can
interact with other sandboxes and frames from that domain, and are
otherwise restricted in a similar way as regular content from that
domain (but not breaking out of the 2.1 restriction). The "domain"
attribute can only specify the domain of the containing document or a
subdomain thereof.

For obvious reasons we can not allow a sandbox to specify
freely what domain it belongs to and change behaviour to, for
example, allow reading cookies or sending XMLHttpRequests to
that domain, because we have no way to verify that the sandbox
contents are related to the domain they claim to be related
to. I basically agree with the restriction proposed above, I'm not
sure what exactly you mean by "subdomain" though. Would you call
useraccount.livejournal.com a "subdomain" of www.livejournal.com? If
the answer is yes, would you call example.org.uk a "subdomain" of
demo.org.uk, given that they also share two out of three labels?

If we say that the sandbox's domain can only add server names to the
parent page's domain, then any sandbox that wants to claim it belongs
to useraccount.livejournal.com must be served from
http://livejournal.com without the www. It is hard to impose such
extra restrictions on existing content.

document.domain can only be set to a dot-separated substring of
itself. We can not use that model either because we can't let content
on example.co.uk set document.domain to co.uk and contact all other
.co.uk domains.

The whole idea of allowing document.domain to be set to its own suffix is a bit broken, don't you think? I think it's there mostly because of the www prefix. Some day long ago it seemed wise to introduce the www prefix for the host names of web servers. I don't know what the rationale for that was, but nowadays it seems clear that the www prefix is redundant. The vast majority of sites have www.domain.com aliased to domain.com, which means that they continue to support the tradition but don't put it to any real use.

Anyway, the www prefix is giving us trouble now. Maybe an exception should be made for the exact string "www." so that www-prefixed domains are considered equivalent to those without the prefix, and a page on www.livejournal.com can declare sandboxes for username.livejournal.com. It sounds ugly, but the www prefix is the only case I can think of where my approach doesn't work.
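
To make the rule concrete, here is a rough sketch of the check I have in mind; the function name is invented, and the single "www." exception is exactly the assumption I'm proposing:

  // Invented helper, not an existing API: a declared sandbox domain is
  // allowed if it equals the parent host (minus a leading "www.") or is
  // a subdomain of it.
  function isAllowedSandboxDomain(parentHost, declaredHost) {
    var base = parentHost.replace(/^www\./, "");
    if (declaredHost === base) return true;
    return declaredHost.substring(declaredHost.length - base.length - 1) === "." + base;
  }

  isAllowedSandboxDomain("www.livejournal.com", "username.livejournal.com"); // true
  isAllowedSandboxDomain("demo.org.uk", "example.org.uk");                   // false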

2.3. The JS namespace in a sandbox is isolated. JS inside the sandbox
cannot see the variables and functions declared outside, and vice versa.
JS outside the sandbox can access JS variables and functions from inside
the sandbox in an explicit way (like sandboxElement.sandbox['variable']).
If the outer JS needs to make several things (DOM nodes, JS variables)
from the outside accessible to the inner JS, it can do so by putting
references to these into the sandboxElement.sandbox array.

Perhaps unlimited access from parent to sandboxElement.contentDocument
would do? Or should we be more concerned about limiting access from
parent to sandbox?

What do you mean by sandbox.contentDocument?

Anyway, the parent should have full access to anything inside the sandbox, why not?
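
Here is roughly what I picture, using the sandboxElement.sandbox array from point 2.3; the ids and variable names are of course made up:

  <sandbox id="guest" domain="username.livejournal.com">
    <safe-script>
      var sharedCounter = 0;   // lives in the sandbox's own namespace
    </safe-script>
  </sandbox>

  <div id="log"></div>
  <script>
    // Outer page: full access to the inside, but only explicitly.
    var box = document.getElementById("guest");
    box.sandbox["logNode"] = document.getElementById("log"); // expose one DOM node inward
    var n = box.sandbox["sharedCounter"];                    // read a sandboxed variable
  </script>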

3. Sandboxes can be nested, with each inner one being additionally
restricted by the outer.

Not entirely sure what you mean by "additionally restricted". We
either keep JS environments separate or not..? :-)

Sorry, I wasn't clear enough. By "additionally restricted" I mean that the inner sandbox can only be declared for a subdomain of the outer sandbox's domain.
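
In markup that would look something like this (the host names are made up):

  <sandbox domain="widgets.example.com">
    <!-- allowed: a subdomain of the outer sandbox's domain -->
    <sandbox domain="clock.widgets.example.com"> ... </sandbox>

    <!-- not allowed: would widen access beyond the outer sandbox -->
    <sandbox domain="example.com"> ... </sandbox>
  </sandbox>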

5. There should be a discussion about what a sandboxed script can do. Can it set window.location? Can it do window.open()? Maybe these
permissions should be governed by additional attributes to <sandbox>.

Perhaps, but I would rather not add too much complexity around permissions.
I'd be inclined to just set a restrictive but usable policy.
I'd disallow both window.location and window.open, and prevent the sandbox
from targeting the main window with form submits, link targets etc.

About link targets: by default, each link is targeted to the current window, with the new page replacing the current one. Does your last sentence mean that a sandbox can't contain such "regular" hyperlinks?
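
For example, would something as ordinary as this stop working, or would the link be forced to stay inside the sandbox? (The host name is made up.)

  <sandbox domain="username.livejournal.com">
    <!-- a perfectly ordinary link with no target attribute -->
    <a href="http://username.livejournal.com/older/">older entries</a>
  </sandbox>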

6. A sandbox can specify a single JS error handler for all enclosed
scripts (to address known cases of scripts which are not ready for the
unusual environment they are in).

Unsure, not all browsers support window.onerror and I'm not sure if it
is good design.

Otherwise, a malicious script can deliberately cause JS errors, which in some browsers will prevent other (legitimate) scripts on the same page from running. This can be regarded as a DoS. The sandbox concept in software security usually includes some sort of graceful error handling for sandboxed code.
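
As a sketch of what point 6 could look like (the onerror attribute on <sandbox> is only my illustration of the proposal, nothing that exists today):

  <sandbox domain="username.livejournal.com" onerror="sandboxFailed()">
    <safe-script>
      brokenOrMalicious();   // a deliberate error stays inside the sandbox
    </safe-script>
  </sandbox>
  <script>
    function sandboxFailed() {
      // the page's own (trusted) scripts keep running; we just note the failure
    }
  </script>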

7. Backward compatibility. The current browsers will ignore the unknown
<sandbox> element and give the enclosed scripts full access to everything.
This is not acceptable. As there is no way to disable scripting inside a
certain element in HTML 4, the HTML cleaners usually found on sites like
LiveJournal.com are still required. Here's what they should do.

7.1. There are new elements: <safe-script>, <safe-object>, <safe-iframe>
(did I forget something?). They are equivalent to their "unsafe"
counterparts, except that the existing browsers simply ignore them. HTML
cleaners should replace <script> with <safe-script> and likewise.

As I said above, this is IMO not ideal because it requires browser
sniffing and different code for different UAs.

No, it doesn't, and that's the point: older browsers receive <safe-script> elements that they don't know about and simply ignore them. This prevents untrusted scripts from running in those browsers. Consider the case of LiveJournal. If they allow scripts, they can't just serve them as <script> to everyone. But as <safe-script> they can, because in older browsers the scripts just won't run, and that's not a big deal, since at present no scripts are allowed at all.
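
Roughly, the cleaner would do a rewrite like this (the URLs are made up), and the same output can be served to every browser:

  <!-- what the user submitted -->
  <script src="http://users.example.com/widget.js"></script>

  <!-- what the cleaner outputs: new browsers apply the sandbox rules,
       old browsers see two unknown elements and run nothing -->
  <sandbox domain="username.livejournal.com">
    <safe-script src="http://users.example.com/widget.js"></safe-script>
  </sandbox>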

Perhaps we should go for the simpler use cases like including untrusted
advertising SCRIPT tags before tackling the harder ones like securing
user input on LiveJournal :-)

I should stress my point once more: bad security is worse than no security. If the sandbox mechanism is introduced with just the regular <script> element (not <safe-script>), then it would be too dangerous to use. Sites like LiveJournal won't start using it at all, because it's unsafe for the users of older browsers. And those sites that do adopt it will endanger their users' security, because untrusted scripts will appear in places where they weren't allowed before.

I agree that there are cases when external scripts are "almost trusted", like advertisement scripts from well-known sources. They could be malicious in theory, but in practice you don't believe they will be. Given the choice between serving such a script unrestricted to an older browser and not serving it at all, the former is reasonable. In that case, nothing stops the author from using a regular <script> element instead of <safe-script> inside <sandbox>. The script will be restricted in newer browsers, but it will still run (unrestricted) in older ones.
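
So the two kinds of scripts can even coexist in one page; a rough sketch, with made-up hosts:

  <sandbox>
    <!-- "almost trusted" ad script: restricted in new browsers,
         runs unrestricted in old ones -->
    <script src="http://ads.example.com/banner.js"></script>

    <!-- genuinely untrusted user script: must not run at all
         in browsers that don't understand <sandbox> -->
    <safe-script src="http://users.example.com/widget.js"></safe-script>
  </sandbox>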

-- Opera M2 9.0 TP1 on Debian Linux 2.6.12-1-k7

I like your taste in browser and E-mail software :-)

Thanks for giving me a lot of good reasons for my choice.


-- Opera M2 9.0 TP1 on Debian Linux 2.6.12-1-k7
* Origin: X-Man's Station at SW-Soft, Inc. [ICQ: 115226275] <[EMAIL PROTECTED]>
