I'm not quite sure why you're running into these issues. It may be
worthwhile hopping onto the Firefox mailing lists and asking there, though...
In the interim, have a read of
http://www.softwareishard.com/blog/firebug/script-execution-analysis-in-firefox-4/
-- I have a feeling that it may be relevant to your problem.
I can't help but think that a lot of your problems, though, could be
solved by decoupling your code more. Perhaps some sort of message sink
system. We use a bespoke one that I wrote here at work, but I believe
that Mark Obcena has some good stuff in this area. Check out
http://keetology.com/blog/2010/10/01/modules-and-callbacks-going-hollywood-with-mootools
and http://keetology.com/blog/2011/03/15/sondheim-meets-mootools .
- Barryvan
On 19/05/11 02:49, Ger Hobbelt wrote:
Let me try to do something more constructive than 'mekkeren' (Dutch for
griping) over a semicolon and exchanging world views. Heck, two more
rounds and we'll get Lord Semicolon caricatured in ASCII art in here,
and instead of talk of a design contest we'll have the Leftists (a.k.a.
the Shift Ones) banging heads with the Right Wing, a.k.a. the Semicolon
Guards. :-P
So... lazyloading. Trrrrouble.
Q1: am I the only one who's seeing this?
Q2: can you poke at rotting spots in my reasoning?
Much obliged!
For TL;DR: see TOC at bottom.
# The story
The usual thing to do when loading a lot of javascript files is
something like this:
-----------
<!--[if IE]>
<script type="text/javascript" src="../Demo/scripts/excanvas.js"></script>
<![endif]-->
<script type="text/javascript"
src="../Demo/scripts/mootools-core.js"></script>
<script type="text/javascript"
src="../Demo/scripts/mootools-more.js"></script>
<!--MOCHAUI-->
<script type="text/javascript" src="../Source/Core/core.js"></script>
<script type="text/javascript" src="../Source/Core/create.js"></script>
<script type="text/javascript" src="../Source/Core/require.js"></script>
<script type="text/javascript" src="../Source/Core/canvas.js"></script>
<script type="text/javascript" src="../Source/Core/content.js"></script>
<!-- etc.etc. ad nauseam -->
-----------------
which is plonking a whole lot of <script> in them there HTML page(s)
and letting it rip.
## Observed symptom:
Particularly when debugging in various browsers, old and modern, you
experience some pretty 'weird' 'unknown object' type errors. The worst
I've seen was with mochaUI, mootools-more and -core, loaded in the style
above, on a /severely loaded/ server-cum-test-client box: *-more
requires a few things from *-core to exist, i.e. *-core must have been
parsed by the JS interpreter by the time it gets to parsing the *-more
code, and oddly enough the *-core code hadn't been parsed yet. And it
wasn't a typo in there crashing/aborting the parser, because at other
times, with no change whatsoever, the same combo runs and debugs just fine!
## Observed particularly in:
several of the latest FF3.6.1x revs and FF4, all on Win7/64. Very few
times with IE9. Haven't really tested that much with the others;
nowadays I do most of my dev work in Safari5/Win7 because FireBug
is... another story.
Point is: I do know FF3/4 had (has) trouble, can't say about the others.
Please bear with me, and keep those two Q-uestions in mind.
Encountered the same? Maybe with other combinations of JS script
files? Stay tuned.
This was sometime last winter, when I was still getting a hold of my
JS fu, so I blamed myself. But I /think/ my fu has improved over time,
while this type of issue keeps popping up at the oddest of times.
Generally, you get more of it on loaded boxes and 'old hardware', i.e.
hardware that's having a bit of a bother keeping up with all the
JS/Web 2.0 greatness.
## Preliminary Analysis
I had a look and what I saw happening was that several JS files get
loaded in parallel (which is cool; finally some multithreaded loading
in them browser thingies; took them bloody long enough), but the
graphs in FireBug and Safari were a little vague on the _exact_
order of execution of each JS file: it looked like they were running in
parallel. WTF? JS is always listed as a single-threaded thing; which
is ludicrous to a systems engineer like me, but then JS doesn't offer
jack in terms of signals, critical sections and all that Dijkstra
goodness that was already around in the 70's, so it had better /be/
singlethreaded or you're /toast/.
Well, so far, I blamed it on the graphing system, tabled the issue as
'a pain in the neck I need to fix asap when I have gathered enough grok'.
# The plot thickens
I went on, had several pains in the neck but managed to migrate to a
100% lazyload approach in an attempt to have my pages load/render that
one microsecond faster.
Got my grubby paws on an excellent lazyloader from a certain Mr. Ryan
Grove ( https://github.com/rgrove/lazyload ) and because I'm allegedly
a 'tactile person' I just had to cop a feel and tweak a few knobs.
Which was a very satisfactory experience, except the aforementioned
'pain in the neck' had a field day. Meanwhile my google fu was having
a breather as everybody I read was harping on having a perfectly good
2.0 experience with this here lazyload concept, so I felt I was
apparently 'the only one with trouble'. And what does 'git blame' say
when you're the only one with an issue? 'look at mirror; culprit
identified'.
## Further analysis; some trial and success
What I found was that the communication graphs were as helpful as
before, FireBug in particular showing completely ridiculous 'spent on
DNS name resolve' time chunks, so I fired up me old trusty network
sniffer (Wireshark) and had a look at the UDP/TCP traffic. Firebug was
talking pure fertilizer. The local DNS server (a special BIND9 thing I
installed way back when so I can serve the .lan. root domain on my
Intranet; _very_ handy when you have multiple Indians doing the
virtual hosted website thing) was faster than the eye could blink
while FF3/FB reported times up to 500ms for that particular part of
the comms. Trustworthiness of graph is hence rated 'works for Lehman
Brothers' and I went on to check with Wireshark. (Too bad I don't have
access to the dedicated hardware sniffing equipment any more. :'-(
Wireshark is cool, but that one was off the scale in terms of cool. Sigh.)
Parallel loading is fine, compliments on that one (Mr. Grove did an
excellent job!), but given the lack of MT support in JS, you sure want
the browser/JS engine to /execute/ the loaded JS in order of
appearance, /one/ file at a time.
Given the googled blogs and such, there might be a concern there in
FF4, but nobody mentioned FF3.6 so I was at a loss.
## Hypothesis
Given that the erratic behaviour is clearly correlated with random
processes and/or the alignment of the Moon and planets (heck, maybe even
sunspot activity), it gives off the same fragrant smell as those good
old multithreaded server applications everybody was always struggling
with: each and every time it was a 'timing issue', a.k.a. somebody not
taking care of their thread synchronization/locking in there. A darn
tough roach to find and crack, but you've got to, if you can't afford
failure in a realtime environment.
The working hypothesis hence is that the JS gets executed in parallel
somehow, or maybe not in the <script> load order. (The lazyloaders
all just add <script> tags, so it's basically the same thing
as having all those <script> tags in there done by hand; the BIG
difference is timing, as a lazyloader will (slowly) add one <script>
tag after another, so it /promises/ you a guaranteed
load order, as each <script> is only added once the previous one has
fired its 'load' event. Exhaustive code analysis and tests show me
that /that/ guarantee can be assumed to be upheld, so /load/ /order/
can be said to be guaranteed. ...And, BTW, yep, somebody reported a
bug regarding that in lazyload.js lately, but that fault scenario
didn't apply to me.)
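For the uninitiated: what such a lazyloader does under the hood boils down to something like the sketch below. The names (loadInOrder, injectScriptTag) are mine, purely illustrative, and not the actual lazyload.js API:

```javascript
// Minimal sketch of sequential lazyloading: queue the URLs and only
// append the next <script> tag once the previous one fires its 'load'
// event. NOTE: illustrative names, not the actual lazyload.js API.
function loadInOrder(urls, inject, done) {
  var queue = urls.slice(); // copy; don't mutate the caller's array
  (function next() {
    if (!queue.length) {
      if (done) done();
      return;
    }
    // 'inject' must invoke its callback once the file reports 'load'
    inject(queue.shift(), next);
  })();
}

// In a browser, the injector would add a real <script> tag:
function injectScriptTag(url, onload) {
  var el = document.createElement('script');
  el.type = 'text/javascript';
  el.src = url;
  el.onload = onload; // fires when the file is *loaded* -- not necessarily parsed!
  document.getElementsByTagName('head')[0].appendChild(el);
}
```

The real lazyload.js is of course more elaborate (CSS support, charset handling, browser quirks), but the 'only append after load' promise comes down to this queue.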
Hence the refinement of the working hypothesis -- and please shoot
holes in this when I'm wrong! -- is this:
The browser fires the 'load' event when the JS script is indeed...
loaded. That doesn't mean it's been /parsed/ already! Meanwhile, another
loading thread can go and fetch the next JS file (as the browser
recognizes the next <script> tag in there which hasn't been processed
yet), and /somehow/ there's a way for the previous JS to still be being
parsed while the next one is loaded but /does not wait/ and hence gets
parsed in parallel, at least for a bit (this has me dumbfounded, but I
cannot explain the observed behaviour any other way).
## Observed behaviour
Please remember the observed behaviour: two (or more) JS files are
loaded, where B comes after A and depends on A having been
/completely/ /parsed/. As this somehow doesn't happen all the time,
you get 'odd behaviour' in the form of JS errors stating things like 'Fx
object unknown', and in the debugger it shows up as... 'undefined'.
Holy (beep)! Meanwhile, you've got console.log happily reporting that
mootools-core did arrive (load event fired!) and you're now looking at
-more. (Assuming those two are the A and B here.)
Run again, and the problem may or may not recur. Throw dice, maybe you're
lucky. Hitting F5 fast enough 'seems' to remedy this, but after doing that
for a day or so, emotional statistics indicate no difference at all. (I
didn't go total whitecoat, taking notes, registering times, etc. If
you want that, you'll have to pull my dad out of retirement. He's
the logbook scientist research/analysis type.)
S*... I mean: /fertilizer/!
There IS a difference though: kill some heavy number-crunching activity
running in the background, eating about 3.5 of your 4 cores, try the web
page again in the debugger and there's... /less/ trouble. It is /not/
/gone/. But it's a whole lot less.
Which is why I came up with the hypothesis above about load and parse
overlapping into the next parse block, read: browser has locking
issues when it comes to executing JS as if it were truly
singlethreaded. Somebody is not being smart enough. Me, them, or both.
# The solution that made it all go away
Riding on the last hypothesis (chic verbiage for 'assumption', a.k.a.
'guesswork') I tried two things, the first because I thought I saw
'something else' (nope, not UFOs, but we're close).
The first thing I did was write a 'combiner' so I could do the above as:
<!--[if IE]>
<script type="text/javascript" src="../Demo/scripts/excanvas.js"></script>
<![endif]-->
<script type="text/javascript"
src="../Demo/scripts/mootools-core.js,mootools-more.js"></script>
<!--MOCHAUI-->
<script type="text/javascript"
src="../Source/Core/core.js,create.js,require.js,canvas.js,content.js,etc.etc.
ad nauseam"></script>
which means I have a 'real time packager' sort of thing server-side
and the goal being: serving fewer separate JS files, preferably only
one(1)!
Because the 'object not found' trouble is definitely coming from the
transition from one JS file to the next, that's for sure.
Anyway, I observed a /significant/ decrease in trouble once I did
this, so I was on to something. I went ahead, added a real-time minifier
and the works, so I can have development source code (with comments)
and production-crunched output at the flip of a switch, allowing me my
desired global behaviour: the ability to debug in a production
environment without drawbacks like not having the actual source
code for the debugger to walk through. (I've been at enough places where
deadlines were met by short-circuiting the testing department; you
never forget /that/ sort of lesson once you've been treated to the
aftermath of some pointy-haired decision maker doing his thing to save
his bonus.)
As I didn't (yet) merge all the JS into a single file, the
'object not known' trouble still appeared once in a while.
Also, my thoughts went like this: you can't always do a 'merge everything
into one'. Or, better put: I couldn't stand it that some bloody browser
was outsmarting me. I hate that. Machines are supposed to be dumber
than people.
So there's the second thing I tried...
# The Final Solution
At one point I ran into Google Translate. I don't know if they do it
for this reason, but I saw something and had an idea.
Their code looked a bit like this:
<script type="text/javascript"
src="../Demo/scripts/mootools-core.js?cb=do_my_thang"></script>
and the idea is:
- I am in control of the backend
- they clearly are too, and they use that 'cb' parameter to make their
backend produce code which invokes 'do_my_thang()' once X.js is loaded
... and given that JS can only invoke other JS code once it has been
/parsed/, they got something here that's surely only 'firing' once you
got it loaded+parsed!
- YES! I can do the same!
So I went and did just that. That's one of the knobs tweaked in Mr. Grove's
code --> [email protected]:GerHobbelt/lazyload.git (which I should update
to Mr. Grove's latest, but anyway: the last 4 lines in lazyload.js are
the ones that matter here; they show the hand-rolled implementation of the
idea.)
The lazyloader works as before, ensuring /load/ order as before; only
now the tweaks allow one to write code which also ensures /execution/
a.k.a. /parsing/ order, by being able to fire off the next <script>
load action once the previous one has loaded /and/ parsed. Because at
the end of the parse, thanks to the custom backend I now use, each JS
file sent to any client gets a little bit of custom code attached which
invokes a frontend-specified function - just like the Google Translate
example above.
Only once that function is invoked do we trigger the lazyloader into
fetching the /next/ JS file, giving us an iron-clad /guarantee/ that
all JS loaded this way /executes/ in order of appearance, in single
file - like momma duck and her ducklings.
Which is exactly what you need.
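The handshake can be sketched like this, assuming (as above) a backend that appends a call to the function named in the 'cb' query parameter. The names (serveWithCallback, chainLoad) are mine and illustrative; this is not the actual tweaked lazyload.js code:

```javascript
// Sketch of the load+parse handshake. A function call appended at the
// very end of a file can only run once everything before it has been
// parsed and executed, so its firing proves the whole file is done.
// NOTE: illustrative names only, not the actual lazyload.js API.

// Server side: append the callback invocation to the served file.
function serveWithCallback(jsSource, cbName) {
  // the leading ';' guards against sources that end mid-expression
  return jsSource + '\n;' + cbName + '();\n';
}

// Client side: advance to the next file only when the previous file's
// appended callback fires, i.e. once it is loaded *and* parsed.
function chainLoad(urls, inject) {
  var i = -1;
  // globalThis is 'window' in a browser
  globalThis.__scriptParsed = function () {
    i += 1;
    if (i < urls.length) inject(urls[i] + '?cb=__scriptParsed');
  };
  globalThis.__scriptParsed(); // kick off the first load
}
```

Note that the appended call only fires after the engine has parsed and executed everything before it, which is exactly the 'loaded /and/ parsed' signal the plain 'load' event fails to give you.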
Of course, that means I'll have to go in and nuke every Asset.js() to
Kingdom Come -- or coax them into going through my lazyloader front +
custom augmenting backend -- but for now I can happily say that
mocha and several others, including mootools itself, /never/ report
the 'object not defined' and similar errors due to some previously
loaded class name still being 'undefined' contrary to expectations.
# The Wondering Why
-------------------------
So far, my solution works, and I trust it to work, because it's done
in such a way that it's completely... 'Dijkstra savvy'... even if my
hypothesis is off. Which is quite possible, as it's got some implicit
flaws: JS is singlethreaded, /they say/, yet the hypothesis says it
isn't (as there's overlap), which would mean some browsers out there can
run JS in multiple threads in parallel while /seeming to share/ the
variable space. (Note that FF3/4 keep running JS code in other tabs
while you've stopped the run in the current tab in the FireBug debugger;
the funny thing is that in Safari5/Win7, JS apparently really is done as
a singlethreaded thing, because stopping tab 1 in the JS debugger also
halts all the JS in the other tabs.)
So my reasoning must be faulty, flaky, or otherwise incorrect or, at
best, imprecise, and I'd like to know where I went wrong. That's Q1!
Secondly (Q2), I'm /really/ curious if others have experienced similar
'WTF? this should have been defined by now! It bloody well loaded
before already!' behaviour in their FF (Chrome? Opera? IE? Safari?
...?) browsers. If you've got another solution approach that's
provably correct for the stated goal (yeah, I dig that sort of
thing, sorry ;-) ) then I'm /all/ /ears/.
TL;DR:
-------
- Observed: intermittent errors about objects not being defined
(debugger says: 'undefined') while they /should/ have been loaded as
part of previous JS files.
- You already know those files loaded, which makes this error rate a
'WTF?'
- Trial and error shows that the error rate decreases on fast, lightly
loaded machines. Old and busy hardware is prone to failure.
- Failure rates range between 'irritating, not good, but we've got
bigger fish to fry' and 'this product of yours may work on _your_
machines, but on ours it's pure fertilizer, so you fix it before we
remember to pay your check, comprendo?'
- Failure rate decreased consistently across the board when you merge
multiple JS files into a few, or even one. With one JS file, and
everything moved into it (so not even a <script> in the HTML page itself),
no trouble!
- Failure rate decreased consistently across the board when you minify
your JS files, comments and everything stripped (probable cause: less
parsing effort @ client-side).
- lazyloaders, home-grown, Assets, or otherwise, do not have a
positive effect; they /probably/ have a negative effect on the error rate.
- solution consists of a provably correct approach to loading and
parsing multiple JS files in a predetermined order, by using a custom
backend to augment the code output, which instructs the frontend when to
load the next JS file. /May/ be a re-invention of the 'Google Translate'
code approach.
- Q3: this may be 'original work', but certainly can't be a 'first',
so where are the others? URLs?
--
Met vriendelijke groeten / Best regards,
Ger Hobbelt
--------------------------------------------------
web: http://www.hobbelt.com/
http://www.hebbut.net/
mail: [email protected]
mobile: +31-6-11 120 978
--------------------------------------------------
--
Barry van Oudtshoorn
www.barryvan.com.au
Not sent from my Apple iPhone.