Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
Boom! http://pixelscommander.com/en/web-applications-performance/render-html-css-in-webgl-to-get-highest-performance-possibl/ This looks pretty amazing.

On Sat, Feb 14, 2015 at 4:01 PM, Brendan Eich <bren...@secure.meer.net> wrote:

Hang on a sec before going off to a private or single-vendor thread because you think I sent you packing on topics that are of interest (as opposed to Thread-Safe DOM). I'm sorry I missed Travis's mail in my Inbox, but I see it now in the archives. The topics listed at the link he cites *are* interesting to many folks here, even if public-webapps may not always be the best list:

- IRC log: http://www.w3.org/2014/10/29-parallel-irc
- See also: Mohammad (Moh) Reza Haghighat's presentation on parallelism in the 29 October 2014 Anniversary Symposium talks

We covered three main potential areas for parallelism:

1. Find additional isolated areas of the web platform to enable parallelism. We noted Canvas Contexts that can migrate to workers to enable parallelism. Initial thoughts around UIWorkers are brewing for handling scrolling effects. Audio Workers are already being developed with specific real-time requirements. What additional features can be made faster by moving them off to workers?

2. Shared memory models. This seems to require an investment in the JavaScript object primitives to enable multi-threaded access to object dictionaries that offer robust protections around multi-write scenarios for properties.

3. Isolation boundaries for DOM access. We realized we needed to find an appropriate place to provide isolation such that DOM accesses could be assigned to a parallelizable JS engine. Based on discussion it sounded like element sub-trees wouldn't be possible to isolate, but that documents might be. Iframes of different origins may already be parallelized in some browsers.

Mozilla people have done work in all three areas, collaborating with Intel and Google people at least. Ongoing work continues as far as I know.
Again, some of it may be better done in groups other than public-webapps. I cited roc's blog post about custom view scrolling, which seems to fall under Travis's (1) above. Please don't feel rejected about any of these work items. /be Marc Fawzi mailto:marc.fa...@gmail.com February 13, 2015 at 12:45 PM Travis, That would be awesome. I will go over that link and hopefully have starting points for the discussion. My day job actually allows me to dedicate time to experimentation (hence the ClojureScript stuff), so if you have any private branches of IE with latest DOM experiments, I'd be very happy to explore any new potential or new efficiency that your ideas may give us! I'm very keen on that, too. Off list seems to be best here.. Thank you Travis. I really appreciate being able to communicate freely about ideas. Marc Boris Zbarsky mailto:bzbar...@mit.edu February 11, 2015 at 12:33 PM On 2/11/15 3:04 PM, Brendan Eich wrote: If you want multi-threaded DOM access, then again based on all that I know about the three open source browser engines in the field, I do not see any implementor taking the huge bug-risk and opportunity-cost and (mainly) performance-regression hit of adding barriers and other synchronization devices all over their DOM code. Only the Servo project, which is all about safety with maximal hardware parallelism, might get to the promised land you seek (even that's not clear yet). A good start is defining terms. What do we mean by multi-threaded DOM access? If we mean concurrent access to the same DOM objects from both a window and a worker, or multiple workers, then I think that's a no-go in Servo as well, and not worth trying to design for: it would introduce a lot of spec and implementation complexity that I don't think is warranted by the use cases I've seen. If we mean the much more modest have a DOM implementation available in workers then that might be viable. 
Even _that_ is pretty hard to do in Gecko, at least, because there is various global state (caches of various sorts) that the DOM uses that would need to either move into TLS or become threadsafe in some form or something... Again, various specs (mostly DOM and HTML) would need to be gone over very carefully to make sure they're not making assumptions about the availability of such global shared state.

We should add lighter-weight workers and immutable data structures

I should note that even some things that could be immutable might involve a shared cache in current implementations (e.g. to speed up sequential indexed access into a child list implemented as a linked list)... Obviously that sort of thing can be changed, but your bigger point that there is a lot of risk to doing that in existing implementations remains.

-Boris

Brendan Eich <bren...@secure.meer.net> February 11, 2015 at 12:04 PM

Sorry, I was too grumpy -- my apologies. I don't see
Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
Hang on a sec before going off to a private or single-vendor thread because you think I sent you packing on topics that are of interest (as opposed to Thread-Safe DOM). I'm sorry I missed Travis's mail in my Inbox, but I see it now in the archives. The topics listed at the link he cites *are* interesting to many folks here, even if public-webapps may not always be the best list:

- IRC log: http://www.w3.org/2014/10/29-parallel-irc
- See also: Mohammad (Moh) Reza Haghighat's presentation on parallelism in the 29 October 2014 Anniversary Symposium talks

We covered three main potential areas for parallelism:

1. Find additional isolated areas of the web platform to enable parallelism. We noted Canvas Contexts that can migrate to workers to enable parallelism. Initial thoughts around UIWorkers are brewing for handling scrolling effects. Audio Workers are already being developed with specific real-time requirements. What additional features can be made faster by moving them off to workers?

2. Shared memory models. This seems to require an investment in the JavaScript object primitives to enable multi-threaded access to object dictionaries that offer robust protections around multi-write scenarios for properties.

3. Isolation boundaries for DOM access. We realized we needed to find an appropriate place to provide isolation such that DOM accesses could be assigned to a parallelizable JS engine. Based on discussion it sounded like element sub-trees wouldn't be possible to isolate, but that documents might be. Iframes of different origins may already be parallelized in some browsers.

Mozilla people have done work in all three areas, collaborating with Intel and Google people at least. Ongoing work continues as far as I know. Again, some of it may be better done in groups other than public-webapps. I cited roc's blog post about custom view scrolling, which seems to fall under Travis's (1) above. Please don't feel rejected about any of these work items.
/be Marc Fawzi mailto:marc.fa...@gmail.com February 13, 2015 at 12:45 PM Travis, That would be awesome. I will go over that link and hopefully have starting points for the discussion. My day job actually allows me to dedicate time to experimentation (hence the ClojureScript stuff), so if you have any private branches of IE with latest DOM experiments, I'd be very happy to explore any new potential or new efficiency that your ideas may give us! I'm very keen on that, too. Off list seems to be best here.. Thank you Travis. I really appreciate being able to communicate freely about ideas. Marc Boris Zbarsky mailto:bzbar...@mit.edu February 11, 2015 at 12:33 PM On 2/11/15 3:04 PM, Brendan Eich wrote: If you want multi-threaded DOM access, then again based on all that I know about the three open source browser engines in the field, I do not see any implementor taking the huge bug-risk and opportunity-cost and (mainly) performance-regression hit of adding barriers and other synchronization devices all over their DOM code. Only the Servo project, which is all about safety with maximal hardware parallelism, might get to the promised land you seek (even that's not clear yet). A good start is defining terms. What do we mean by multi-threaded DOM access? If we mean concurrent access to the same DOM objects from both a window and a worker, or multiple workers, then I think that's a no-go in Servo as well, and not worth trying to design for: it would introduce a lot of spec and implementation complexity that I don't think is warranted by the use cases I've seen. If we mean the much more modest have a DOM implementation available in workers then that might be viable. Even _that_ is pretty hard to do in Gecko, at least, because there is various global state (caches of various sorts) that the DOM uses that would need to either move into TLS or become threadsafe in some form or something... 
Again, various specs (mostly DOM and HTML) would need to be gone over very carefully to make sure they're not making assumptions about the availability of such global shared state.

We should add lighter-weight workers and immutable data structures

I should note that even some things that could be immutable might involve a shared cache in current implementations (e.g. to speed up sequential indexed access into a child list implemented as a linked list)... Obviously that sort of thing can be changed, but your bigger point that there is a lot of risk to doing that in existing implementations remains.

-Boris

Brendan Eich <bren...@secure.meer.net> February 11, 2015 at 12:04 PM

Sorry, I was too grumpy -- my apologies. I don't see much ground for progress in this whole thread or the sub-thread you started. If we're talking about sync XHR, I gave my informed opinion that deprecating it is empty talk if actually obsoleting by whichever browser takes the first hit inevitably leads to
RE: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
Marc, I'd first mention that I am keenly interested in improving the state-of-the-art in DOM (I'm driving the project to update IE's 20-year-old DOM as my day job.) I've also done a lot of thinking about thread-safe DOM designs, and would be happy to chat with you more in depth about some ideas (perhaps off-list if you'd like). I'd also refer you to a breakout session I held during last TPAC on a similar topic [1]. It had lots of interested folks in the room and I thought we had a really productive and interesting discussion (most of it captured in the IRC notes). [1] https://www.w3.org/wiki/Improving_Parallelism_Page -Original Message- From: Boris Zbarsky [mailto:bzbar...@mit.edu] Sent: Wednesday, February 11, 2015 12:34 PM To: public-webapps@w3.org Subject: Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest On 2/11/15 3:04 PM, Brendan Eich wrote: If you want multi-threaded DOM access, then again based on all that I know about the three open source browser engines in the field, I do not see any implementor taking the huge bug-risk and opportunity-cost and (mainly) performance-regression hit of adding barriers and other synchronization devices all over their DOM code. Only the Servo project, which is all about safety with maximal hardware parallelism, might get to the promised land you seek (even that's not clear yet). A good start is defining terms. What do we mean by multi-threaded DOM access? If we mean concurrent access to the same DOM objects from both a window and a worker, or multiple workers, then I think that's a no-go in Servo as well, and not worth trying to design for: it would introduce a lot of spec and implementation complexity that I don't think is warranted by the use cases I've seen. If we mean the much more modest have a DOM implementation available in workers then that might be viable. 
Even _that_ is pretty hard to do in Gecko, at least, because there is various global state (caches of various sorts) that the DOM uses that would need to either move into TLS or become threadsafe in some form or something... Again, various specs (mostly DOM and HTML) would need to be gone over very carefully to make sure they're not making assumptions about the availability of such global shared state.

We should add lighter-weight workers and immutable data structures

I should note that even some things that could be immutable might involve a shared cache in current implementations (e.g. to speed up sequential indexed access into a child list implemented as a linked list)... Obviously that sort of thing can be changed, but your bigger point that there is a lot of risk to doing that in existing implementations remains.

-Boris
Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
Travis, That would be awesome. I will go over that link and hopefully have starting points for the discussion. My day job actually allows me to dedicate time to experimentation (hence the ClojureScript stuff), so if you have any private branches of IE with latest DOM experiments, I'd be very happy to explore any new potential or new efficiency that your ideas may give us! I'm very keen on that, too. Off list seems to be best here.. Thank you Travis. I really appreciate being able to communicate freely about ideas. Marc On Fri, Feb 13, 2015 at 11:20 AM, Travis Leithead travis.leith...@microsoft.com wrote: Marc, I'd first mention that I am keenly interested in improving the state-of-the-art in DOM (I'm driving the project to update IE's 20-year-old DOM as my day job.) I've also done a lot of thinking about thread-safe DOM designs, and would be happy to chat with you more in depth about some ideas (perhaps off-list if you'd like). I'd also refer you to a breakout session I held during last TPAC on a similar topic [1]. It had lots of interested folks in the room and I thought we had a really productive and interesting discussion (most of it captured in the IRC notes). [1] https://www.w3.org/wiki/Improving_Parallelism_Page -Original Message- From: Boris Zbarsky [mailto:bzbar...@mit.edu] Sent: Wednesday, February 11, 2015 12:34 PM To: public-webapps@w3.org Subject: Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest On 2/11/15 3:04 PM, Brendan Eich wrote: If you want multi-threaded DOM access, then again based on all that I know about the three open source browser engines in the field, I do not see any implementor taking the huge bug-risk and opportunity-cost and (mainly) performance-regression hit of adding barriers and other synchronization devices all over their DOM code. Only the Servo project, which is all about safety with maximal hardware parallelism, might get to the promised land you seek (even that's not clear yet). 
A good start is defining terms. What do we mean by multi-threaded DOM access? If we mean concurrent access to the same DOM objects from both a window and a worker, or multiple workers, then I think that's a no-go in Servo as well, and not worth trying to design for: it would introduce a lot of spec and implementation complexity that I don't think is warranted by the use cases I've seen. If we mean the much more modest have a DOM implementation available in workers then that might be viable.

Even _that_ is pretty hard to do in Gecko, at least, because there is various global state (caches of various sorts) that the DOM uses that would need to either move into TLS or become threadsafe in some form or something... Again, various specs (mostly DOM and HTML) would need to be gone over very carefully to make sure they're not making assumptions about the availability of such global shared state.

We should add lighter-weight workers and immutable data structures

I should note that even some things that could be immutable might involve a shared cache in current implementations (e.g. to speed up sequential indexed access into a child list implemented as a linked list)... Obviously that sort of thing can be changed, but your bigger point that there is a lot of risk to doing that in existing implementations remains.

-Boris
Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
Marc Fawzi wrote:

This guy here is bypassing the DOM and using WebGL for user interfaces https://github.com/onejs/onejs He even has a demo, with no event handling other than arrow keys at this point, and as the author admits ugly graphics, but with projects like React-Canvas (forget the React part, focus on Canvas UIs) and attempts like these it looks like the way of the future is to relegate the DOM to old boring business apps and throw more creative energy at things like WebGL UIToolKit (the idea that guy is pursuing)

I know Rik and keep in touch. OneJS is the kind of over-the-top work (not changing the DOM in-situ) that I recommended previously, in addition to incremental Web evolution. So once again, I'm not sure we disagree, but I am pretty sure your posts are misdirected to this list. Philosophizing about backward compatibility painting the Web into corners that kill it doesn't really move anything forward here.

There is a gap to fill with respect to the GPU, which WebGL-based toolkits won't bridge. I think Rik would agree that we can't replace the web with raw JS+WebGL. Meanwhile, since the first iPhone, the major engines have all offloaded parts of their CSS rendering to the GPU (ideally with a big shared texture memory, bigger than the display viewport), in order to use a separate thread from the main UI event loop thread, for 60fps touch response. Of course the CPUs (multicore + SIMD) need to be used, so we don't want to marry a particular processing unit, or (worse) one GPU microarchitecture.

Standards and even languages/toolkits (one.js is young) need to mature and abstract enough over hardware variation to be worth their weight -- but not so much they collapse like all the big OOP skycastles of the Java UI toolkit era have. The more we can do with incremental standards to expose this OMTC/A CPU+GPU-based computation in first class ways on the Web, the less anyone will have to make an old wolf or new tiger decision between Web and non-Web approaches.
One concrete example: custom view scrolling. See http://robert.ocallahan.org/2014/07/implementing-scroll-animations-using.html If we can focus on these sorts of proposals and not thread-safe DOM or will-backward-compat-kill-the-Web imponderables, we'll get somewhere better, sooner. /be P.S. Meanwhile, are we done (for now) with deprecate-sync-XHR angst, and no browser actually plans to remove it? Not directed at Marc, rather my cc: list browser buddies. :-P
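The heart of any custom scroll animation, wherever it runs (main-thread requestAnimationFrame today, or a compositor-side worker as in roc's proposal), is a small piece of timestamp-driven arithmetic: map elapsed time through an easing curve to a scroll offset. A minimal sketch, with illustrative function names; the easing choice (ease-out cubic) is just one common option:

```javascript
// Ease-out cubic: fast start, gentle stop. t runs from 0 to 1.
function easeOutCubic(t) {
  const u = 1 - t;
  return 1 - u * u * u;
}

// Scroll offset at time `now`, animating from `from` to `to` over
// `duration` ms starting at `startTime`. Clamping t keeps the position
// pinned at the endpoints before the start and after the end.
function scrollPositionAt(from, to, startTime, duration, now) {
  const t = Math.min(1, Math.max(0, (now - startTime) / duration));
  return from + (to - from) * easeOutCubic(t);
}

console.log(scrollPositionAt(0, 300, 0, 200, 0));   // 0   (start)
console.log(scrollPositionAt(0, 300, 0, 200, 200)); // 300 (end)
```

In a page, a rAF loop would call `scrollPositionAt(...)` each frame and assign the result to `element.scrollTop`; the point of roc's proposal is to run exactly this kind of per-frame function off the main thread so jank there can't stall scrolling.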
Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
On 12/02/15 03:45, Marc Fawzi wrote:

this backward compatibility stuff is making me think that the web is built upon the axiom that we will never start over and we must keep piling up new features and principles on top of the old ones this has worked so far, miraculously and not without overhead, but I can only assume that it's at the cost of growing complexity in the browser codebase. I'm sure you have to manage a ton of code that has to do with old features and old ideas... how long can this be sustained? forever? what is the point in time where the business of retaining backward compatibility becomes a huge nightmare?

As a side-note, the original thread is a good illustration of what happens whenever browser vendors attempt to deprecate features that are clearly broken by design.

Cheers,
David

--
David Rajchenbach-Teller, PhD
Performance Team, Mozilla
Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
On Thu, Feb 12, 2015 at 4:45 AM, Marc Fawzi marc.fa...@gmail.com wrote: how long can this be sustained? forever? what is the point in time where the business of retaining backward compatibility becomes a huge nightmare? It already is, but there's no way out. This is true everywhere in computing. Look closely at almost any protocol, API, language, etc. that dates back 20 years or more and has evolved a lot since then, and you'll see tons of cruft that just causes headaches but can't be eliminated. Like the fact that Internet traffic is largely in 1500-byte packets because that's the maximum size you could have on ancient shared cables without ambiguity in the case of collision. Or that e-mail is mostly sent in plaintext, with no authentication of authorship, because that's what made sense in the 80s (or whatever). Or how almost all web traffic winds up going over TCP, which performs horribly on all kinds of modern usage patterns. For that matter, I'm typing this with a keyboard layout that was designed well over a century ago to meet the needs of mechanical typewriters, but it became standard, so now everyone uses it due to inertia. This is all horrible, but that's life.
Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
Aryeh Gregor wrote: This is all horrible, but that's life. Indeed, nature is nasty. Search for sacculina carcini life cycle for but one example. The Web and the Internet are evolving systems with some parallels and analogies to biological evolution. See http://www.cc.gatech.edu/~dovrolis/ for more on this, if you are interested. Nothing like _Sacculina_ yet, luckily! /be
Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
Brendan Eich wrote: over-the-top work Apologies if this overloaded trope is confusing without more context -- it could mean wildly excessive, or doing what soldiers in trenches did in WWI when the whistle blew (see https://www.youtube.com/watch?v=fssPqRWx9U0 :-/), but I meant build on top of JS and the DOM (even WebGL requires a bit of DOM for the canvas bit). /be
Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
Legacy problems

Across the computing industry, we spend enormous amounts of money and effort on keeping older, legacy systems running. The examples range from huge and costly to small and merely annoying: planes circle around in holding patterns burning precious fuel because air traffic control can't keep up on systems that are less powerful than a smartphone; WiFi networks don't reach their top speeds because an original 802.11 (no letter), 2Mbps system *could* show up—you never know. So when engineers dream, we dream of leaving all of yesterday's technology behind and starting from scratch. But such clean breaks are rarely possible.

For instance, the original 10 megabit Ethernet specification allows for 1500-byte packets. Filling up 10Mbps takes about 830 of those 1500-byte packets. Then Fast Ethernet came along, which was 100Mbps, but the packet size remained the same so that 100Mbps ethernet gear could be hooked up to 10Mbps ethernet equipment without compatibility issues. Fast Ethernet needs 8300 packets per second to fill up the pipe. Gigabit Ethernet needs 83,000 and 10 Gigabit Ethernet needs *almost a million packets per second* (well, 830,000). For each faster Ethernet standard, the switch vendors need to pull out even more stops to process an increasingly outrageous number of packets per second, running the CAMs that store the forwarding tables at insane speeds that demand huge amounts of power. The need to connect antique NE2000 cards meant sticking to 1500 bytes for Fast Ethernet, and then the need to talk to those rusty Fast Ethernet cards meant sticking to 1500 bytes for Gigabit Ethernet, and so on.
At each point, the next step makes sense, but *the entire journey ends up looking irrational*.

Source: http://arstechnica.com/business/2010/09/there-is-no-plan-b-why-the-ipv4-to-ipv6-transition-will-be-ugly/

This guy here is bypassing the DOM and using WebGL for user interfaces https://github.com/onejs/onejs He even has a demo, with no event handling other than arrow keys at this point, and as the author admits ugly graphics, but with projects like React-Canvas (forget the React part, focus on Canvas UIs) and attempts like these it looks like the way of the future is to relegate the DOM to old boring business apps and throw more creative energy at things like WebGL UIToolKit (the idea that guy is pursuing)

On Thu, Feb 12, 2015 at 3:46 AM, Aryeh Gregor <a...@aryeh.name> wrote:

On Thu, Feb 12, 2015 at 4:45 AM, Marc Fawzi <marc.fa...@gmail.com> wrote: how long can this be sustained? forever? what is the point in time where the business of retaining backward compatibility becomes a huge nightmare?

It already is, but there's no way out. This is true everywhere in computing. Look closely at almost any protocol, API, language, etc. that dates back 20 years or more and has evolved a lot since then, and you'll see tons of cruft that just causes headaches but can't be eliminated. Like the fact that Internet traffic is largely in 1500-byte packets because that's the maximum size you could have on ancient shared cables without ambiguity in the case of collision. Or that e-mail is mostly sent in plaintext, with no authentication of authorship, because that's what made sense in the 80s (or whatever). Or how almost all web traffic winds up going over TCP, which performs horribly on all kinds of modern usage patterns. For that matter, I'm typing this with a keyboard layout that was designed well over a century ago to meet the needs of mechanical typewriters, but it became standard, so now everyone uses it due to inertia. This is all horrible, but that's life.
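The Ars Technica figures above are simple arithmetic: with the legacy 1500-byte payload fixed, each tenfold jump in link speed means a tenfold jump in packets per second a switch must process. A quick check of the quoted numbers (ignoring preamble and inter-frame gap, so these are approximations; the `packetsPerSecond` name is just illustrative):

```javascript
// 1500 bytes per full-size packet, 8 bits per byte.
const BITS_PER_PACKET = 1500 * 8; // 12,000 bits

function packetsPerSecond(bitsPerSecond) {
  return Math.round(bitsPerSecond / BITS_PER_PACKET);
}

// Reproduces the quote's "about 830 / 8300 / 83,000 / 830,000" figures.
for (const [name, rate] of [
  ['10 Mbps Ethernet', 10e6],
  ['Fast Ethernet (100 Mbps)', 100e6],
  ['Gigabit Ethernet', 1e9],
  ['10 Gigabit Ethernet', 10e9],
]) {
  console.log(`${name}: ~${packetsPerSecond(rate)} packets/sec`);
}
```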
Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
Marc Fawzi wrote:

even if the DOM must remain a single-threaded and truly lock/barrier/fence-free data structure, what you are reaching for is doable now, with some help from standards bodies. ***But not by vague blather***

Sorry, I was too grumpy -- my apologies. I don't see much ground for progress in this whole thread or the sub-thread you started. If we're talking about sync XHR, I gave my informed opinion that deprecating it is empty talk if actually obsoleting by whichever browser takes the first hit inevitably leads to market share loss or (before that) developers screaming enough to get the CEO's attention. We will simply waste a lot of time and energy (we already are) arguing and hollering for and against deprecation, without any definite hope of obsolescence.

If you want multi-threaded DOM access, then again based on all that I know about the three open source browser engines in the field, I do not see any implementor taking the huge bug-risk and opportunity-cost and (mainly) performance-regression hit of adding barriers and other synchronization devices all over their DOM code. Only the Servo project, which is all about safety with maximal hardware parallelism, might get to the promised land you seek (even that's not clear yet).

Doing over the top JS libraries/toolchains such as React is excellent, I support it. But it does not share mutable or immutable state across threads. JS is still single-threaded, event loop concurrency with mutable state, in its execution model. This execution model was born and co-evolved with the DOM 20 years ago (I'm to blame). It can't be changed backward-compatibly and no one will break the Web.

We should add lighter-weight workers and immutable data structures and other such things, and these are on the Harmony agenda with the JS standards body. We might even find race-confined ways to run asm.js code on multiple workers with shared memory in an ArrayBuffer -- that's an area of active research.
But none of these things is anything near to what you described in the forked subject line: Thread-Safe DOM. I'll leave it at this. I invite others from Mozilla, Google, Apple, and MS to speak up if they disagree.

/be
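The "shared memory in an ArrayBuffer" research Brendan mentions is the direction that later surfaced as the SharedArrayBuffer + Atomics proposal. The core idea, sketched below in one thread for clarity: a buffer of raw numeric cells (never DOM or JS objects) can be shared between workers, and Atomics supplies the race-free read-modify-write operations. In a real program the buffer would be `postMessage()`d to several workers, each calling `Atomics.add` concurrently.

```javascript
// Sketch of the shared-memory primitives (assumes a runtime where
// SharedArrayBuffer and Atomics are available).
const sab = new SharedArrayBuffer(4);  // one 32-bit cell
const counter = new Int32Array(sab);   // typed-array view over shared memory

Atomics.store(counter, 0, 0); // initialize
Atomics.add(counter, 0, 5);   // atomic counter += 5; safe even if another
Atomics.add(counter, 0, 5);   // worker were incrementing at the same time

console.log(Atomics.load(counter, 0)); // 10
```

Note how this stays "race-confined" in Brendan's sense: the races that can occur are on integer cells with well-defined atomic semantics, not on object dictionaries or the DOM.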
Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
On 2/11/15 3:04 PM, Brendan Eich wrote:

If you want multi-threaded DOM access, then again based on all that I know about the three open source browser engines in the field, I do not see any implementor taking the huge bug-risk and opportunity-cost and (mainly) performance-regression hit of adding barriers and other synchronization devices all over their DOM code. Only the Servo project, which is all about safety with maximal hardware parallelism, might get to the promised land you seek (even that's not clear yet).

A good start is defining terms. What do we mean by multi-threaded DOM access? If we mean concurrent access to the same DOM objects from both a window and a worker, or multiple workers, then I think that's a no-go in Servo as well, and not worth trying to design for: it would introduce a lot of spec and implementation complexity that I don't think is warranted by the use cases I've seen. If we mean the much more modest have a DOM implementation available in workers then that might be viable.

Even _that_ is pretty hard to do in Gecko, at least, because there is various global state (caches of various sorts) that the DOM uses that would need to either move into TLS or become threadsafe in some form or something... Again, various specs (mostly DOM and HTML) would need to be gone over very carefully to make sure they're not making assumptions about the availability of such global shared state.

We should add lighter-weight workers and immutable data structures

I should note that even some things that could be immutable might involve a shared cache in current implementations (e.g. to speed up sequential indexed access into a child list implemented as a linked list)... Obviously that sort of thing can be changed, but your bigger point that there is a lot of risk to doing that in existing implementations remains.

-Boris
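Boris's linked-list example is worth making concrete. If a node's children are stored as a linked list, `children[i]` is O(i); engines therefore cache the last (index, node) pair so a sequential walk costs O(1) per step. A minimal sketch of that pattern (the `ChildList` class and its fields are illustrative, not Gecko's actual code): the cache is harmless with one thread, but shared across threads it is exactly the kind of mutable global state that makes "DOM in workers" hard.

```javascript
// A child list stored as a singly linked list, with a cached cursor
// remembering where the last indexed lookup stopped.
class ChildList {
  constructor() { this.first = null; this.last = null; this.cache = null; }

  append(value) {
    const node = { value, next: null };
    if (this.last) this.last.next = node; else this.first = node;
    this.last = node;
    this.cache = null; // any mutation invalidates the cached cursor
  }

  // item(i): resume from the cached cursor when it lies at or before i,
  // so item(0), item(1), item(2), ... touches each link only once.
  item(i) {
    let node = this.first, k = 0;
    if (this.cache && this.cache.index <= i) { ({ index: k, node } = this.cache); }
    while (k < i && node) { node = node.next; k++; }
    if (node) this.cache = { index: i, node };
    return node ? node.value : null;
  }
}
```

With two threads calling `item()` concurrently, reads and writes of `this.cache` would race; even this "read-only" API mutates shared state, which is Boris's point.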
Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
On 2/11/15 9:45 PM, Marc Fawzi wrote:

this backward compatibility stuff is making me think that the web is built upon the axiom that we will never start over and we must keep piling up new features and principles on top of the old ones

Pretty much, yep.

this has worked so far, miraculously and not without overhead, but I can only assume that it's at the cost of growing complexity in the browser codebase.

To some extent, yes. Browsers have obviously been doing refactoring and simplification as they go, but I think it's pretty clear that a minimal viable browser today is a lot more complicated than one 10 years ago.

I'm sure you have to manage a ton of code that has to do with old features and old ideas...

Sometimes. Sometimes it can just be expressed easily in terms of the newer stuff. And sometimes we do manage to remove things -- I'm seeing it happen right now with plugins, which definitely fall in the bucket of a ton of annoying-to-maintain code.

how long can this be sustained? forever?

If we're lucky, yes.

what is the point in time where the business of retaining backward compatibility becomes a huge nightmare?

About 10-15 years ago, really. We've just gotten pretty good at facing the nightmare. ;)

-Boris
Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
this backward compatibility stuff is making me think that the web is built upon the axiom that we will never start over and we must keep piling up new features and principles on top of the old ones this has worked so far, miraculously and not without overhead, but I can only assume that it's at the cost of growing complexity in the browser codebase. I'm sure you have to manage a ton of code that has to do with old features and old ideas... how long can this be sustained? forever? what is the point in time where the business of retaining backward compatibility becomes a huge nightmare? On Wed, Feb 11, 2015 at 12:33 PM, Boris Zbarsky bzbar...@mit.edu wrote: On 2/11/15 3:04 PM, Brendan Eich wrote: If you want multi-threaded DOM access, then again based on all that I know about the three open source browser engines in the field, I do not see any implementor taking the huge bug-risk and opportunity-cost and (mainly) performance-regression hit of adding barriers and other synchronization devices all over their DOM code. Only the Servo project, which is all about safety with maximal hardware parallelism, might get to the promised land you seek (even that's not clear yet). A good start is defining terms. What do we mean by multi-threaded DOM access? If we mean concurrent access to the same DOM objects from both a window and a worker, or multiple workers, then I think that's a no-go in Servo as well, and not worth trying to design for: it would introduce a lot of spec and implementation complexity that I don't think is warranted by the use cases I've seen. If we mean the much more modest have a DOM implementation available in workers then that might be viable. Even _that_ is pretty hard to do in Gecko, at least, because there is various global state (caches of various sorts) that the DOM uses that would need to either move into TLS or become threadsafe in some form or something... 
Again, various specs (mostly DOM and HTML) would need to be gone over very carefully to make sure they're not making assumptions about the availability of such global shared state.

We should add lighter-weight workers and immutable data structures

I should note that even some things that could be immutable might involve a shared cache in current implementations (e.g. to speed up sequential indexed access into a child list implemented as a linked list)... Obviously that sort of thing can be changed, but your bigger point that there is a lot of risk to doing that in existing implementations remains.

-Boris
Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
On Thu, Feb 12, 2015 at 1:45 PM, Marc Fawzi <marc.fa...@gmail.com> wrote:

this backward compatibility stuff is making me think that the web is built upon the axiom that we will never start over and we must keep piling up new features and principles on top of the old ones

Yup.

this has worked so far, miraculously and not without overhead, but I can only assume that it's at the cost of growing complexity in the browser codebase. I'm sure you have to manage a ton of code that has to do with old features and old ideas... how long can this be sustained? forever? what is the point in time where the business of retaining backward compatibility becomes a huge nightmare?

When someone comes up with something sufficiently better to be worth abandoning trillions of existing pages for (or duplicating the "read trillions of old pages" engine alongside the new one).

~TJ