On May 27, 2008, at 11:00 AM, Brendan Eich wrote:

What's at issue is whether and why unqualified import matters in any object, even if only the global object, since the NAS proposal did not allow unqualified import even at global level, and the use-case for unqualified import was dismissed as not compelling.

There are really four separable issues:

1) Namespacing of names at global scope (via lexically scoped reference).
2) Unqualified import of names into global scope.
3) Namespacing of arbitrary object properties.
4) Unqualified import of namespaces for arbitrary object properties.

I would claim 1 and 2 are essential, 3 can be done by convention in the absence of 4 (a la the NAS proposal), and 4 is unnecessary and harmful to performance.
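For concreteness, here is a rough ES4-style sketch of the four cases. (The syntax below is illustrative pseudocode based on the proposal drafts, not a claim about the final grammar.)

```
namespace ns;
ns var x = 1;         // a global name defined in namespace ns
ns::x;                // (1) qualified, lexically scoped reference
use namespace ns;     // (2) unqualified import into the lexical scope
x;                    // now resolves to ns::x

class C { ns var y = 2; }  // (3) a namespaced property on an object
var c = new C;
c.ns::y;              // qualified property access
c.y;                  // (4) unqualified: depends on ns being open at the use site
```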

That's an interesting idea, although we use namespace qualification along the prototype chain all over the place in ES4, and for what seem like good reasons.

Other languages with successful namespacing features don't have such a mechanism, so I am dubious of the goodness of these ideas. I am concerned that the namespace lookup algorithm for object property access is too complicated.

Agreed, this is the big issue. I share your concern, but the conservative approach (esp. with reference to C++) of throwing out non-global open namespaces looks like an overreaction, and it may not save much complexity.

It could save a lot of complexity, by not requiring any first-class support for namespace lookup on arbitrary objects.

It makes object property lookup depend on the set of open namespaces, which means obj.property may compile to entirely different code depending on the context,

Lexical context, no dynamic open-namespaces scope.

Note I said "compile to" so I think this was clear.


and it seems likely it will slow down property lookup when multiple namespaces are open but static type info is missing.

It certainly could, although we think not in implementations under way. Opening multiple namespaces without static type information is not free in a dynamic language.

Is the name lookup algorithm much simpler if namespaces are top-level only? Since obj.prop could end up with obj referring to the (or I should write "a") global object, I don't see it. Unless you're proposing outlawing such object references using the namespaces open at top-level when obj is the global object.

I would propose that unqualified import only affects lexically scoped lookups, not object property access, even if the object in question is the global object. In particular, if you are willing to say "global.property" instead of "property", it is not such a hardship to say "global.ns::property".
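As an analogy in today's JS (a sketch, not ES4 syntax, with "ns::" as a made-up key convention): a namespaced property can live under a conventionally qualified key, and "unqualified import" can be modeled as a purely lexical binding that never changes how property access on the object itself behaves:

```javascript
// Model a namespace qualifier with a conventional "ns::" key prefix.
const globalObj = { "ns::property": 42 };

// "Unqualified import" as a purely lexical binding: it affects bare-name
// references only, never the lookup rules for globalObj.<something>.
const property = globalObj["ns::property"];

console.log(property);               // unqualified, resolved lexically
console.log(globalObj["ns::property"]); // object property access stays fully qualified
```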


If the only real justification is that it's a nice generalization, then I do not think it is worth the performance hit.

The nice generalization followed from particular use-cases, it did not precede them. I cited those cases (briefly). How about being responsive to them?

I think many (perhaps all) of those cases either use namespaces gratuitously or work fine without unqualified import (and so could use namespaces by convention). For example it doesn't seem important to allow unqualified import of the meta namespace.



ES (any version) has objects as scopes, as well as prototypes. It's hard to keep the gander -- objects below the top level, or on the prototype chain -- from wanting the same sauce that the goose -- the global object -- is trying to hog all to itself.

Is it really? Is there any other language where namespacing of the global namespace has led to namespacing at the sub-object level? C++, Java and Python all get by fine without namespacing of individual object properties.

C++ and Java are not the right paradigms for JS/ES. Python is better, but Python *does* allow import in local scope.

Python allows import from inside a local namespace, but does it allow import from outside a local namespace to affect lookup into that namespace? I am not aware of such a feature but I'm not a Python expert.


The reason namespacing at top level is essential to programming in the large is that the global namespace is a shared resource and must be partitioned in controlled ways to avoid collision in a large system. But I do not see how this argument applies to classes or objects.

See Mark's big post, which discusses (in item (b)) extending objects, including standard ones.

Saying the global object is a shared resource that must be partitioned, etc., but no others reachable from it, particularly class objects, are shared resources, is begging the question: what makes any object a shared resource? That the global is the only necessarily shared object does not reduce the benefit, or make the cost prohibitive, of sharing other objects reachable from it.

The benefit is less, because you can use separate objects in different namespaces instead of a shared object with namespaces inside it. The cost is greater because it makes all property lookup more expensive.
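To illustrate the first point (a sketch in ordinary JS; the vendor names are hypothetical): two libraries can simply live on two separate objects, so each property access is a single plain lookup with no open-namespace set to consult:

```javascript
// Separate objects instead of one shared object with namespaced properties.
const vendorA = { render() { return "A"; } };
const vendorB = { render() { return "B"; } };

// Each call site names the object it means; lookup is one ordinary
// hashtable probe, independent of any set of open namespaces.
console.log(vendorA.render()); // "A"
console.log(vendorB.render()); // "B"
```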

The fact that the generalization seems to be unique to ES4 makes me dubious of its actual usefulness. If it has a performance cost, then the generalization seems like a dubious choice.

First, performance is not king or JS would not have prototypes and dynamic typing. It's not a trump card.

JS does have many features that are not good for performance. But that does not mean we should add more.


Second (after all the warm-up sparring), you didn't respond to the particulars that led to the generalization:

* internal namespace per compilation unit for information hiding -- hardcode as a special case?

I'm not sure how this applies to unqualified import of namespaces on arbitrary object properties.

* iterator and meta hooks in objects. Ugly __get__, etc., names instead?

Unqualified import is not necessary for iterator or meta hooks. Namespaces by convention (or __decoratedNames__) would be enough.
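A sketch of what "namespaces by convention" could look like for hooks, assuming a reserved key prefix rather than any language-level import (the "meta::" prefix is a made-up convention for illustration):

```javascript
// A conventional "meta::" prefix stands in for a real namespace.
// Callers always qualify, so property lookup needs no open-namespace machinery.
const obj = {
  "meta::get"(name) {
    return "got " + name;
  }
};

console.log(obj["meta::get"]("x")); // "got x"
```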

* helper_foo() and informative_bar() in the RI?

I don't think any language feature should exist solely for the convenience of the RI (the reference implementation).


The use of __ brackets is a problem for rewriting systems like Caja. Whitelisting standard __hook__ names could work, but is this really the best we can do? I doubt it.


Is the cost too high? I think that depends on how the name lookup algorithm works on real-world code. AS3 developers have data to share. Let's get into that.

I'd love to hear the data. AS3 developers, can unqualified lookup of object properties on untyped references in the presence of property namespaces be as fast as when there aren't namespaces at all? If so, how? The most obvious way to do property lookup when there is no static type info is a hashtable lookup on each prototype chain entry, but I do not see an obvious way that a single hashtable lookup can look in multiple namespaces.
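To make the cost concrete, here is a toy model (a sketch, not any real engine's algorithm) of untyped unqualified lookup: with k open namespaces and a prototype chain of depth d, a miss can take up to k*d hashtable probes instead of d:

```javascript
// Toy model: namespaced properties stored under "ns::name" string keys.
function lookup(obj, name, openNamespaces) {
  for (let o = obj; o !== null; o = Object.getPrototypeOf(o)) {
    for (const ns of openNamespaces) {
      const key = ns + "::" + name; // one hashtable probe per (object, namespace) pair
      if (Object.prototype.hasOwnProperty.call(o, key)) return o[key];
    }
  }
  return undefined;
}

const proto = { "meta::iterate": "hook" };
const obj = Object.create(proto);
obj["public::x"] = 42;

console.log(lookup(obj, "x", ["public", "meta"]));       // 42
console.log(lookup(obj, "iterate", ["public", "meta"])); // "hook", found on the prototype
```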

Competitively optimizing JS, excluding namespaces, increasingly requires non-obvious implementation techniques.

For that very reason, it is a very bad idea to add features that intrinsically require non-obvious implementation techniques to get decent performance. If implementation effort must be spent on the language's necessary complexity just to preserve current levels of performance, that takes resources away from the optional complexity that could push performance beyond current levels.

In general, keeping the language simpler is good for performance. Other things being equal, simpler implementations are easier to change, and have more room to add complexity for optimization without becoming too complex for human understanding.

I will agree that some added language features are essential but I think minor improvements in expressiveness that have large complexity cost are a poor tradeoff.

This is a good trade-off if it can be done in reasonable footprint, since there is a huge installed base, and a pretty-big knowledge base. We're not, for example, going to remove the ability to replace String.prototype.charAt, in any compatible ESn version.

Preserving compatibility with existing performance-harming features is not the same thing as introducing new ones.

Optimization ease is not and should not be the sole consideration. Exotic techniques should not be mandated by the spec, on the other hand. But without 'use namespace N' pragmas, programmers will not run into ambiguities at compile time that result in run-time cost.

Programmers can buy by the yard here, as with other parts of the language that trade performance (or alternatively: that fuel demand for more optimized runtimes) in exchange for expressiveness. And for a lot of JS code, core language performance does not matter even if you go nuts with eval and 'with', compared to other costs dominating the critical paths.

Programmers have found ways to live with today's performance, and it is true that for some applications core language performance is not the bottleneck. But that doesn't mean we can feel free to ignore performance considerations in the design of new language features.



I suspect the answer to this in AS3 is that if you want performance, you have to use type declarations.

That may be the case for AS3 code using Flex, but even such a result would be informative -- don't open multiple namespaces if you're using untyped objects and targeting an unoptimized implementation.

But really, why is any of the several feature combinations that could hurt performance (with, eval, deep scope chains and prototype chains in most implementations) a reason to cripple the language? Performance is not king, and JS ain't C++.

Is removing unqualified import of namespaces at non-global scope (the only aspect of namespaces that seems prima facie harmful to performance) really "crippling the language"?

Regards,
Maciej


_______________________________________________
Es4-discuss mailing list
[email protected]
https://mail.mozilla.org/listinfo/es4-discuss
