It's true that on many of the most modern browsers, merely caching the
array length for a single for-loop won't show much difference. That's
because repeated `.length` lookups used to be such a performance
problem that the engines added specific optimizations to address it.
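For anyone who hasn't seen the pattern being discussed, here's a minimal sketch of the two loop styles (the `items` array is just an illustration):

```javascript
var items = ["a", "b", "c", "d"];

// Uncached: `items.length` is read on every iteration.
var uncached = 0;
for (var i = 0; i < items.length; i++) {
  uncached++; // ...work with items[i]...
}

// Cached: the length is read once, before the loop starts.
var cached = 0;
for (var j = 0, len = items.length; j < len; j++) {
  cached++; // ...work with items[j]...
}
```

On modern engines both forms tend to perform about the same; the difference only shows up in older engines.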

As has been pointed out in previous messages, older browsers expose
these problems in glaring detail. Some people feel you should always
write code for the current generation of browsers, others feel you
should write for what's coming "next", and still others are forced to
write code that is aware of, and as tolerant as possible of, older
generations of browsers.

I'd say that's the real key when deciding on these types of "micro"
optimizations: which is the least-capable engine you care about your
code running well in? If that's bleeding-edge Chrome 11, and you don't
care about the rest, then by all means, write all the un-optimized,
abstracted, layered, object-oriented code you can possibly imagine. If
you care about IE6, you might want to consider backing off some of
that idealism and making small sacrifices in readability to get huge
gains in performance.

The other thing people should realize, with respect to array.length
caching, is that sometimes the bottleneck isn't the property lookup
itself, but the fact that the array variable you're iterating over is
not in the current scope. If the engine has to make several hops up an
object's prototype chain before it finds the array, or look up several
levels of closure scope, or traverse a deeply nested object property
tree, those lookups can be a lot more costly (sometimes in invisible
ways) than reading the .length property itself. Declaring a local
alias to the array before the loop and using that can sometimes net
you 8-15% better performance on the loop. Sure, that's 15% of 3ms
(aka, really small), but it's still a gain nonetheless. It's up to you
whether that tiny gain is useful. And yes, I know that some newer
(not-yet-released) browsers are getting even better about caching
these lookups.
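To make the local-alias idea concrete, here's a hedged sketch; the `app.data.records` nesting is hypothetical, invented purely to illustrate repeated deep lookups:

```javascript
var app = {
  data: {
    records: [10, 20, 30] // hypothetical nested array
  }
};

// Without an alias: each iteration re-resolves app -> data -> records
// (twice per pass: once for .length, once for the element read).
var total1 = 0;
for (var i = 0; i < app.data.records.length; i++) {
  total1 += app.data.records[i];
}

// With a local alias: the nested lookup is resolved exactly once.
var total2 = 0;
var records = app.data.records;
for (var j = 0, len = records.length; j < len; j++) {
  total2 += records[j];
}
```

The same trick applies when the array lives several closure scopes up: copy it into a variable local to the function doing the looping.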

--------
Lastly, with respect to both my Script Junkie article and my follow-up
blog post (linked above), I was intentionally trying *not* to discuss
micro-optimizations like operators and .length lookups. Not that this
isn't a valid subject to be aware of, but it doesn't yield much
real-world improvement... it's mostly theoretical stuff, like how in
CS classes we got happy when we found an algorithm that was
O(n*log n) instead of O(n^2).

My articles are trying to focus on patterns of degraded performance.
They're called patterns for a reason... because they tend to repeat
themselves across your entire code base. If you are in the habit of
being sloppy about function-call overhead in your API design, or scope
lookups on nested namespaces, or things like that, this pattern will
tend to slow down most if not all of your code.
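As one sketch of that kind of repeated pattern, consider scope lookups on nested namespaces (the `MyApp.utils.format` namespace here is hypothetical, not from the articles):

```javascript
var MyApp = {
  utils: {
    format: function (n) { return "#" + n; }
  }
};

// Sloppy habit: every call re-walks MyApp -> utils -> format.
function labelAllSlow(nums) {
  var out = [];
  for (var i = 0; i < nums.length; i++) {
    out.push(MyApp.utils.format(nums[i]));
  }
  return out;
}

// Aliased once per function: the nested lookup happens a single time.
function labelAllFast(nums) {
  var format = MyApp.utils.format;
  var out = [];
  for (var i = 0, len = nums.length; i < len; i++) {
    out.push(format(nums[i]));
  }
  return out;
}
```

One call site barely matters; the point is that if the first habit is spread across an entire API, the cost repeats everywhere.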

So while someone can claim that looking at a single loop's performance
is just micro/premature optimization, and that a 20% savings on 3ms is
nothing to care about, if that same type of mistake is made across ALL
your code, and your entire program takes, say, 500ms to run, then a
20% savings (100ms) across the whole program is something to care
about.

I think the debate over performance should shift from focusing on a
single loop or mistake to looking at patterns of bad performance that
permeate our entire code base. That's where the percentages will
actually start to pay off.

--Kyle

-- 
To view archived discussions from the original JSMentors Mailman list: 
http://www.mail-archive.com/[email protected]/

To unsubscribe from this group, send email to
[email protected]