On Apr 17, 2010, at 3:03 PM, David Herman wrote:
There are multiple levels of opt-in versioning:
(1) versioning of the language itself
(2) language support for versioning of libraries
I agree with what you're saying wrt (1), but wrt (2), feature
detection is feasible, and I'd think more tractable than version
detection.
Yes, I agree.
TC39 has discussed a "frame" (meaning DOM window, ideally web-app
wide) version-selection mechanism for the built-in libraries (plural:
JS, DOM, and more -- and all libraries, too, not particularly
distinguished by being native or primordial in their specs).
No one has made a proposal to TC39 yet.
The closest thing to (2) being fielded today may be what modern IE
versions [A], and now Google Chrome Frame [B], do with the
X-UA-Compatible HTTP header. David Baron of Mozilla has written
cogently about this header [C].
I'm not in favor of inventing something in Ecma that adds opt-in
versioning of the object model (2), for the reason I gave in reply to
Peter van der Zee: complete opt-in versioning, including visibility of
new APIs, is too brittle over time -- it is likely to lead to
over-versioned, under-tested, ultimately non-working (except for one
of N browsers) code. Object detection, and feature detection in
general, is more resilient and less likely to suffer
version-scope creep.
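To make the contrast concrete, here is a minimal sketch of object
detection with fallback. JSON.stringify is a real ES5 built-in; the
fallback is a deliberately tiny stand-in for the complete shims real
libraries ship, not a serious implementation:

```javascript
// Detect the API itself rather than sniffing a version number.
function toJSON(value) {
  if (typeof JSON === "object" && typeof JSON.stringify === "function") {
    return JSON.stringify(value);   // native path on ES5+ engines
  }
  return fallbackStringify(value);  // hand-rolled path for older engines
}

// A deliberately tiny fallback, illustrative only.
function fallbackStringify(value) {
  if (typeof value === "number" || typeof value === "boolean") {
    return String(value);
  }
  if (typeof value === "string") {
    return '"' + value.replace(/"/g, '\\"') + '"';
  }
  throw new Error("fallbackStringify: unsupported type " + typeof value);
}
```

Because the check keys on the API's presence rather than on any version
string, it keeps working in whichever of N browsers (or future
versions) happens to run it.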
/be
[A] http://msdn.microsoft.com/library/cc817574.aspx
[B]
http://www.chromium.org/developers/how-tos/chrome-frame-getting-started#TOC-Making-Your-Pages-Work-With-Google-
[C] http://dbaron.org/log/2008-01
Dave
On Apr 17, 2010, at 9:38 AM, Brendan Eich wrote:
On Apr 16, 2010, at 2:31 PM, David Herman wrote:
PS Still, I have my doubts about using any such mechanisms for
versioning. Incidentally, ROC was just talking about versioning
and metadata on the web:
http://weblogs.mozillazine.org/roc/images/APIDesignForTheMasses.pdf
Rob's blog post:
http://weblogs.mozillazine.org/roc/archives/2010/04/api_design_for.html
He wasn't talking about JS API design, but some of the lessons
still apply.
Old WHATWG co-conspirators like me obviously agree on the
principles roc presents, but they do not work so well in JS
compared to HTML or even CSS. Consider HTML markup:
<video ...>
<object ...></object>
</video>
A new HTML5 video tag with an object tag as fallback, to use a
plugin to present the video for pre-HTML5 browsers. There are
text-y examples that work too, even if the degradation is not as
graceful as you might get with a plugin (plugins can lack grace
too :-/).
CSS has standardized error correction from day one, although as
noted in comments on roc's blog it lacks "feature detection". But
graceful degradation seems to work as well with CSS as with HTML,
if not better.
With JS, new syntax is going to make old browsers throw
SyntaxErrors. There's no SGML-ish container-tag/point-tag model on
which to build fallback handling. One could use big strings and
eval, or XHR or generated scripts to source different versions of
the JS content -- but who wants to write multiple versions of JS
content in the first place.
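The eval route can at least be used to *detect* new syntax without
killing the whole script: a parse error raised inside eval is confined
to a try/catch, where a top-level SyntaxError is not. A minimal sketch,
using ES5 getter syntax purely as a stand-in for whatever new syntax is
at issue:

```javascript
// Probe for syntax support by eval-ing a snippet inside try/catch.
// An engine that lacks the syntax throws SyntaxError at parse time,
// but eval confines that error to this one call.
function supportsSyntax(probe) {
  try {
    eval(probe);               // parse (and run) the probe snippet
    return true;
  } catch (e) {
    return false;              // SyntaxError, or any runtime error
  }
}

// ES5 getter syntax as an example probe:
var hasGetters = supportsSyntax("({ get x() { return 1; } })");
```

Note the probe still has to live in a string, which is exactly the
"big strings and eval" awkwardness above -- this dodges the fatal
parse error, it doesn't make the pattern pleasant.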
The "find the closing brace" error correction idea founders on the
need to fully lex, which is (a) costly and (b) future-hostile.
Allowing new syntax in the main grammar only, not in the lexical
grammar, seems too restrictive even if we never extend the lexical
grammar -- we might fix important bugs or adjust the spec to match
de-facto lexical standards, as we did for IE's tolerance of the
/[/]/ regexp literal.
So API object detection with fallback written in JS random logic
works (for some very practical if not theoretically pretty
definitions of "works") for the non-syntactic extensions coming in
Harmony, assuming we can dodge the name collision bullets. But for
new Harmony syntax, some kind of opt-in versioning seems required.
We survived this in the old days moving from JS1.0 to JS1.2 and
then ES3. One could argue that the web was smaller then (it was
still damn big), or that Microsoft's monopolizing helped
consolidate around ES3 more quickly (it did -- IE started ignoring
version suffixes on <script language=> as I noted recently).
Roc's point about fast feedback from prototype implementations to
draft standards is the principle to uphold here, not "no versioning".
Obviously we could avoid new syntax in order to avoid opt-in
versioning, but this is a bad trade-off. JS is not done evolving,
syntax is user interface, JS needs new syntax to fix usability
bugs. I'm a broken record on this point.
Secondarily, new syntax can help implementations too, both for
correctness and optimizations.
So I should stop being a broken record here, and let others talk
about opt-in versioning. It seems inevitable. We have it already in
a backward-compatible but semantically meaningful (including
runtime semantic changes!) sense in ES5 with "use strict".
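The runtime-semantic part of "use strict" is easy to miss, so one
concrete (and well-known) difference among several: in a plain
function call, a non-strict function has its undefined `this` replaced
by the global object, while a strict function sees undefined. A small
sketch:

```javascript
// "use strict" changes runtime semantics, not just what parses.
function sloppyThis() { return this; }
function strictThis() { "use strict"; return this; }

var sloppy = sloppyThis();   // the global object (when this file is non-strict code)
var strict = strictThis();   // undefined, regardless of the caller
```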
Opt-in versioning is not a free ride, but it is going to a
destination we need to reach: new syntax where appropriate and
needed for usability and productivity wins.
/be
_______________________________________________
es-discuss mailing list
[email protected]
https://mail.mozilla.org/listinfo/es-discuss