Adam Thompson <[email protected]> writes:

> And it's both insecure and incredibly fragile against a malicious web page.
> Someone could insert all sorts of crud in there,
> or use some sort of compromise to insert a magic property in place of the
> array and, for example (in a world where we have ajax),
> capture all cookies set by a different website or similar (and I haven't even
> tried to think about this too hard).

Yes, the possibility of exposing internals to JavaScript seems too
fraught with danger, and it seems that the best (and only?) place for
them is in native code.

> That's, if anything, why we need a *more* native DOM,
> but decoupled from the js engine, i.e.
> create object stubs which call back into the DOM to set DOM attributes, so we
> don't need to make a bunch of js variable checks to render the DOM.

Yes, the problem we had with native code was, first and foremost, that
it was fragile.  How many months did we spend porting from SpiderMonkey
1.8.5 to SpiderMonkey 24?  What happens when 24 gets end-of-lifed and
we're stuck scrambling to move to a new engine?  This is why moving as
much as possible out of native code is so darned attractive.
It's a very sweet siren's song.  But if we could come up with a native
DOM that wasn't tied to an engine, it would be even better.  So where do
we start?
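
Maybe with something as small as a callback table.  Here's a rough
straw man (every name in it is made up for illustration; nothing like
this exists in our tree): the engine glue, whichever engine it is, only
ever touches a tiny set of C callbacks, and those forward into the
native DOM, so swapping engines means rewriting one thin adapter rather
than the whole tree.

/* Hypothetical sketch only; all names are invented for illustration. */
#include <stdio.h>
#include <string.h>

/* The native side of a DOM node, owned by us, not by the js engine. */
struct dom_node {
    char tagname[16];
    char innerHTML[256];
};

/* Engine-neutral callbacks: the only surface the js glue may touch. */
struct dom_stub_ops {
    void (*set_attr)(struct dom_node *n, const char *name, const char *val);
    const char *(*get_attr)(struct dom_node *n, const char *name);
};

/* Native implementations; the real thing would reparse and rerender. */
static void native_set_attr(struct dom_node *n, const char *name, const char *val)
{
    if (strcmp(name, "innerHTML") == 0) {
        strncpy(n->innerHTML, val, sizeof(n->innerHTML) - 1);
        n->innerHTML[sizeof(n->innerHTML) - 1] = '\0';
    }
}

static const char *native_get_attr(struct dom_node *n, const char *name)
{
    if (strcmp(name, "innerHTML") == 0)
        return n->innerHTML;
    return "";
}

static const struct dom_stub_ops stub_ops = { native_set_attr, native_get_attr };

/* What a setter in the js glue would boil down to: only C strings and
 * a node pointer cross this line, never an engine value or type. */
int main(void)
{
    struct dom_node body = { "body", "" };
    stub_ops.set_attr(&body, "innerHTML", "<p>hello</p>");
    printf("<%s>: %s\n", body.tagname, stub_ops.get_attr(&body, "innerHTML"));
    return 0;
}

The point being that no engine types ever cross that boundary; the glue
translates engine values into C strings and node pointers and nothing
more, so the next engine migration touches only the glue.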

> However, if I've learned anything about the internet it's that, in general,
> browsers are the primary way of exploiting users' computers

That's because the industry has been singing the "do it in the browser"
tune since the mid-90s, and now, a document delivery platform is being
used for everything under the sun, from word processing to online
banking.  It wasn't designed for most of these things, so we have this
horrible impedance mismatch.
Unfortunately, we can't change that.

-- Chris
