On 2/9/11 12:27 AM, Kyle Simpson wrote:
> I can't speak definitively as to how the JavaScript engine is
> implemented (and if the code is significantly different between mobile
> and desktop).

In Gecko's case, it's identical (modulo the different JIT backends for ARM and x86 and x86-64, of course).

> But I can say that even if their code is substantially the same, I
> could still see it quite plausible that the device itself locks up
> (not the browser) if there's simply too much going on, taxing its
> limited CPU power.

Yes, but that could just happen due to background tabs, workers, etc. That's not something a page author can sanely control....

> I can also see it quite plausible that mobile OSes are not as capable
> of taking advantage of multi-threading

That's not the case for the OSes Gecko is targeting on mobile, at least. It's not the case on iOS, last I checked. I can't speak to WP7 or Symbian; I don't have any experience with them.

> Regardless, considering such things is way outside the scope of
> anything that's going to be useful for web developers in the near term
> dealing with these use-cases.

Yes, but so is the proposal here, no?

> Even if you're right and the fault really lies with the implementor of
> the JavaScript engine (or the OS), that's still a fruitless path for
> this discussion to go down. No matter how good the mobile JavaScript
> engine gets, I promise you sites will find a way to stuff too much
> JavaScript down the pipe at the beginning of page-load in such a way
> as to overload the device. That is a virtual certainty.

Yes, but what makes you think that those very same sites will make good use of the functionality we're proposing here?

>> Now you may be right that authors who really want to screw up like
>> that will just do browser-sniffing hacks of various sorts and still
>> screw up. But it's not clear to me that we need to make the barrier
>> to shooting yourself in the foot lower as a result....

> That sounds more like a question of degree (how much we should expose
> to the developer, and how) than the principle (should we expose it).

Yes, of course.  Degree is my only concern here.

> In any case, I don't see much evidence that suggests that allowing an
> author to opt in to pausing the script processing between load and
> execute is going to lead to authors killing their page's performance.
> At worst, if the browser did defer parsing all the way until
> instructed to execute, the browser simply would have missed out on a
> potential opportunity to use some idle background time, yes, and the
> user would have to suffer a little bit. That's not going to cause the
> world to come crashing down, though.

Neither will the browser eagerly parsing.  ;)
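(For concreteness, here is roughly the load/execute split under
discussion, approximated with what's available today rather than any
proposed API. This is a minimal sketch: the URL and the runIt() helper
are made up, and the XHR approach is same-origin only.)

    // Load now: fetch the script source, but don't evaluate it yet.
    var source = null;
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/js/big-library.js", true);  // hypothetical URL
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        source = xhr.responseText;  // text is in memory; nothing has run
      }
    };
    xhr.send();

    // Execute later: whenever the page decides it's ready.
    function runIt() {
      var script = document.createElement("script");
      script.text = source;  // inline script: parses and runs on insertion
      document.getElementsByTagName("head")[0].appendChild(script);
    }

Note that with this pattern the parse cost necessarily lands at
appendChild() time, which is exactly the tradeoff being argued about.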

> What's VERY important to note: (perhaps) the most critical part of
> user-experience satisfaction in web page interaction is the *initial*
> page-load experience.

Really? The pages I hate the most are the ones that make every single damn action slow. I have had no real pageload issues (admittedly, on "desktop") in a good long while, but pages that freeze up for a while doing sync XHR or computing digits of pi or whatever when you just try to use their menus are all over the place....

> So if it's a tradeoff where I can get my page-load to go much quicker
> on a mobile device (and get some useful content in front of users
> quickly) in exchange for some lag later in the lifetime of the page,
> that's a choice I (and many other devs) are likely to want to make.

See, as a user that seems like the wrong tradeoff to me if done to the degree I see people doing it.

> Regardless of wanting freedom of implementation, no browser/engine
> implementation should fight against/resist the efforts of a web author
> to streamline initial page-load performance.

Perhaps they just have different goals? For example, completely hypothetically, the browser may have a goal of never taking more than 50ms to respond to a user action. This is clearly a non-goal for web authors, right? Should a browser be prohibited from pursuing that goal even if it makes some particular sites somewhat slower to load initially (which is a big "if", btw)?

> Presumably, if an author is taking the extraordinary steps to wire up
> advanced functionality like deferred execution (especially negotiating
> that with several scripts), they are doing so intentionally to improve
> performance

My point is that they may or may not be improving performance in the process, depending on implementation details. Unless they tested in the exact browser that's running right now, it's hard to tell.

I see this all the time in JS: people time some script in one browser, tweak it to be faster, and in the process often make it slower in other browsers that have different JIT heuristics or different bottlenecks.

I think you're assuming a uniformity to browser implementations that's simply not there.

> and so if they ended up actually doing the reverse, and killing their
> performance to an unacceptable level, they'd see that quickly, and
> back-track.

You're assuming authors will test in multiple browsers. In my experience, bad assumption...

> It'd be silly and unlikely to think they'd go to the extra trouble to
> actually worsen their performance compared to before.

You're assuming malice; I'm just assuming insufficient testing (which is pretty understandable, by the way; I wouldn't want to invest a few thousand dollars in all the various mobile devices needed to run all the different mobile browsers to test on my site) and overtuning for the things you test. The latter is pretty likely.

> Really, let's not always assume the worst about web authors.

I'm not. Really, I'm not. Above I'm even assuming way better than is warranted based on my experience actually interacting with websites and attempts by site authors to fix issues. I'm still led to the conclusion that if we let authors overconstrain the UA here that's a bad idea.

> If they do it wrongly and their users suffer, bad on them, not on the
> rest of us.

If they do it wrongly _my_ users suffer. I'm in the business of creating a user-agent here. Something that represents the user and is supposed to provide the user with the best experience I can manage. While I agree that authors who really want to screw up can always do it, I think we should be careful to make screwing up _hard_, all else being equal. Of course sometimes not all else is equal.

>> In other words, forbid the browser to start parsing the script,
>> right? How would you tell whether a browser implemented this as
>> specified?

> I could tell if the browser let me control execution pretty easily.

I'm talking about parsing here, not execution.

> As for telling if the browser were deferring parsing to a useful
> degree, I'm sure that the only way to actually determine that would be
> to test a site with a particularly long script file (length being a
> rough approximation for parsing time), and see just when (between load
> and execution) the big parsing delay (as observed previously) was
> happening.

Assuming the browser does the parsing on the main thread, yes? What if it doesn't?
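(To make that experiment concrete: using the XHR-and-inject pattern
from the earlier sketch as a stand-in for deferred execution, you can
time the synchronous cost of the execute step. The helper name is made
up, and the measurement is only meaningful under the assumption being
questioned here, namely that parsing happens synchronously on the main
thread.)

    // Assumes "source" already holds the text of a deliberately long
    // script, fetched earlier via XHR as in the sketch above.
    function timeExecute(source) {
      var script = document.createElement("script");
      script.text = source;
      var before = Date.now();
      // For an inline script, insertion parses and runs it synchronously
      // on the main thread, so this measures parse + execute time.
      document.getElementsByTagName("head")[0].appendChild(script);
      return Date.now() - before;
    }

With the stand-in pattern the parse always happens at insertion; against
a native load/execute split you'd time the same way around whatever the
"execute now" call turns out to be, and a browser that parses off the
main thread would make the number tell you very little.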

> OK. I definitely agree that better user control over when script
> _executes_ is a good idea. Sounds like we're on the same page there.

> I think that what should happen is, if a web author chooses to defer
> the execution of a script (easily detectable if it's not ready for
> execution when loading finishes), the browser should get the strong
> HINT that maybe the parsing should be put off until later (and perhaps
> until execution time).

I can live with that.
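(That hint can already be expressed, after a fashion, with a detached
script element: set src without inserting the element into the
document. Some browsers, notably IE at the time, start the download
immediately, and nothing executes until insertion. A script that has
finished downloading but hasn't been inserted is exactly the signal
described above, and the browser remains free to parse eagerly or
lazily. Minimal sketch, URL made up:)

    // In browsers that fetch on src-assignment (historically IE), the
    // download starts here; others don't fetch until insertion, which is
    // part of what's being proposed.
    var script = document.createElement("script");
    script.src = "/js/big-library.js";

    // ...later, when the page actually wants the script's effects.
    // Insertion is the "execute now" signal; everything before it says
    // "load (and maybe parse), but don't run yet".
    function executeWhenReady() {
      document.getElementsByTagName("head")[0].appendChild(script);
    }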

> Do you see how what I'm asking for is not direct forcible control over
> parsing but some way to strongly encourage the browser to defer both
> parsing and execution until later, when *I* say I want it to?

That wasn't what the initial proposal sounded like, but ok. If that's what you're looking for, that sounds fine.

-Boris
