On 11.04.2011 22:37, Mark S. Miller wrote:
On Sun, Apr 10, 2011 at 5:27 PM, Dmitry A. Soshnikov
<[email protected] <mailto:[email protected]>> wrote:
As I see it, you address the "issue" of unstratified
meta-programming (note, I put "issue" in quotes, since
there's no single consensus on whether the unstratified
meta-level is so bad).
It depends on how you look at the issue. On one hand, if a user
changes the behavior of `hasOwnProperty` on _his_ object in
_his_ program, why should the program be considered incorrect?
The user knows how to handle `hasOwnProperty` for that particular
object, and the ability to override it is the ability to control
the unstratified meta-level by simply reassigning meta-hooks.
Hi Dmitry, your response makes clear two issues I failed to explain well:
By "program", I do not mean a whole program, i.e., the totality of all
code run within a given JS environment. David's original
getDefiningObject function and variations are clearly not whole
programs. Rather, they are reusable program fragments that could be
packaged in reusable libraries, meant to be linked into and used from
within larger programs that the author of getDefiningObject should not
need to know about.
Since we're not worried here about programming under mutual suspicion,
it could very well be valid for getDefiningObject to assume that
various methods are overridden in a contract preserving manner. For
example, a Java hashtable that calls a key's "hashCode" method could
be correct. The key's hashCode method might be incorrect, in which
case the hashtable would function incorrectly, but we'd say that this
incorrect behavior is the key's fault rather than the hashtable's fault.
We would like a notion of correctness that allows us to reason in a
modular manner: A correct composition of individually correct
components should yield a correct composite, where the correctness of
the composition depends only on the contracts of the components to be
composed -- not on their implementation. The Java composite above is
incorrect because the key is incorrect. This does not contradict the
notion that the hashtable abstraction by itself is correct even though
in this case it behaves incorrectly.
Yes, thanks for the clarification, Mark; I understand this and generally
agree. Though, the issue of combining two "correct" chunks of code
which may then cause "incorrect" behavior from the viewpoint of one of
them (or both) is generic. I'd even say it's a problem of "shared
state". One chunk thinks that it uses this value from the shared state,
meanwhile another chunk has already changed it. And it may appear
anywhere we combine sources which can differ in semantics.
E.g. the example I used when explaining "hoisting" with function
declarations. Suppose we have two files, `foo.js` and `bar.js`. The
code of foo.js is:
function foo() {
  alert(1);
}

foo(); // 2
Why does it alert 2? We tested the code and saw that it alerted 1. But
that was before we combined foo.js with bar.js (a normal practice on the
web is to combine several sources into one and obfuscate/minify it),
which contains:
function foo() {
  alert(2);
}
This program has now become incorrect, while both chunks were correct.
And this is because we got a "shared state collision". It's similar with
`hasOwnProperty` -- a function which analyzes whether an object has a
property _shares_ the whole prototype chain with respect to where
`hasOwnProperty` is found: either the original
`Object.prototype.hasOwnProperty` or the injected own one.
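A hedged sketch of that collision: any own property named
`hasOwnProperty` shadows the inherited method, so a generic consumer that
calls `obj.hasOwnProperty(...)` gets the injected value instead of the
original meta-operation:

```javascript
// An object whose own "hasOwnProperty" key shadows Object.prototype's method.
const record = {
  hasOwnProperty: 'oops', // e.g. data that happened to use this key
  foo: 42
};

// A naive consumer breaks: record.hasOwnProperty is a string, not a function.
// record.hasOwnProperty('foo');  // TypeError

// The robust pattern reaches the original method explicitly:
const hasOwn = Object.prototype.hasOwnProperty.call(record, 'foo'); // true
```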
The problem here is specifically with the methods on Object.prototype
vs the pervasive use of objects as string->value maps in
JavaScript. What are we to do about
getDefiningObject(JSON.parse(str), 'foo')
? Even without mutual suspicion within the program code, we would like
to reason about the correctness of this as quantified over all
strings, for example, even if the string is received from an external
untrusted or unreliable source.
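David's original getDefiningObject is not quoted in this thread; a
plausible reconstruction walks the prototype chain and returns the first
object that owns the named property. Written against the original
`Object.prototype.hasOwnProperty`, it stays correct for all strings,
which is exactly the quantification asked for here:

```javascript
// A plausible reconstruction (not David's original code): walk the
// prototype chain and return the first object that *owns* `name`.
function getDefiningObject(obj, name) {
  while (obj !== null && !Object.prototype.hasOwnProperty.call(obj, name)) {
    obj = Object.getPrototypeOf(obj);
  }
  return obj; // null if no object in the chain defines `name`
}

// Works for any string key, even on data parsed from an untrusted source:
const data = JSON.parse('{"hasOwnProperty": 1, "foo": 2}');
getDefiningObject(data, 'foo');      // the parsed object itself
getDefiningObject(data, 'toString'); // Object.prototype
getDefiningObject(data, 'missing');  // null
```

Had the helper called `obj.hasOwnProperty(name)` instead, the
`"hasOwnProperty": 1` key in the parsed data would have broken it.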
Yeah, but unfortunately to support "nearly correct" behavior we should:
- avoid inheritance altogether;
- make objects simple hash-tables without prototypes (a consequence
of the first point);
- move all meta- and system-level operations into a separate space.
We can ask: why avoid inheritance? Because then we can say that
"foo.js" expected some method called on an instance to behave in the
way we expect, but the object defined in "bar.js" has already
overridden it and uses its own implementation. Is the program still
correct or incorrect? Which chunk made it so -- foo.js or bar.js? Or
their combined version? If it's incorrect, then there should be no
inheritance with the ability to shadow -- but this is casual code reuse today.
Or we can remove only meta-operations from inheritance (this is where
I started) -- stratified vs. unstratified meta-levels. Again, objects
are just simple hash-tables without any meta-operations (by the way,
is e.g. `toString` a meta-operation?) but they do have prototypes.
So in general this issue of "mutable shared state", applied to this
particular case, isn't solved directly -- only with big limitations
which reduce convenience of use. I remember I argued that:
Object
  .create(...)
  .defineProperty(...)
  .freeze()
is much more elegant than the following, with its indentation hazard
and `Object.` repeated over and over again:
Object.freeze(
  Object.defineProperty(
    Object.create(
      ...
    )
  )
)
But this is one way to fix the "unstratified meta-level with possible
injection or shared sources" issue.
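For the record, the nested ES5 form does run as written: Object.create,
Object.defineProperty, and Object.freeze each return the object they
operated on, so the calls compose. A concrete sketch (the property name
and value are illustrative):

```javascript
// The nested ES5 form made concrete. Each static returns the object it
// operated on, so the calls compose:
const point = Object.freeze(
  Object.defineProperty(
    Object.create(null),           // a prototype-less "simple hash-table"
    'x',
    { value: 10, enumerable: true }
  )
);

point.x;                // 10
Object.isFrozen(point); // true
```

The fluent `Object.create(...).defineProperty(...)` chain, by contrast,
is not valid ES5 (those are statics on `Object`, not instance methods);
it illustrates the API shape being argued for.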
Dmitry.
On the other hand, yes, since we have the ability to modify any JS
code via the console or even via the browser address bar (using
the javascript: pseudo-protocol), this can be viewed as an issue
and as a security problem.
But the remedy, by and large, should not be sought in making a
dynamic language completely frozen and static, but in
disallowing code modification (code injection, code poisoning
if you will) via such simple tools as the console or address bar.
Having the ability to inject the needed code via the console just
makes e.g. browser scripting an _addition_ to the server code.
This is the case when we have a combined client-server
application. It's obvious that if we have some validation tool,
then validation on the client side should be done only as a
_convenient addition_. The same validation will be done on the
server, since we understand that the client code can easily be
injected (the simplest example: `utils.validate = function () {
return true; }`, which will always return a positive result).
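A sketch of that injection, with an assumed (illustrative) validation
helper around the one-liner from the message:

```javascript
// Hypothetical validation helper shipped to the browser; the shape of
// `utils.validate` is illustrative, only the override line is from the thread.
const utils = {
  validate(form) {
    return typeof form.email === 'string' && form.email.includes('@');
  }
};

utils.validate({ email: 'nobody' }); // false: fails the check

// One line typed into the console defeats it, which is why the server
// must re-run the same checks:
utils.validate = function () { return true; };
utils.validate({ email: 'nobody' }); // true: validation is now bypassed
```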
If we have only a client-side application, then a user who breaks
some code via injection just breaks his own page and thereby
refuses the convenience we wanted to give him. He doesn't
break anything besides his own page.
So the fact that the language is dynamic and allows changing objects
at runtime is, in my view, not about the "correctness" of a program. A
programmer, having control of his code, knows how and why he wants
to augment and modify his current runtime objects and code.
From this viewpoint I don't think we need to disallow modifying
`hasOwnProperty` and the like. Better to reuse the ES5 approach
and separate this meta-operation out as Object.hasOwnProperty(foo).
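No such static exists in ES5 (the expression Object.hasOwnProperty
resolves to the inherited instance method), so a sketch of the
stratified shape proposed here has to use an assumed name, `hasOwn`,
implemented by capturing the original prototype method once:

```javascript
// Sketch of a stratified meta-operation under an assumed name, hasOwn:
// capture Object.prototype.hasOwnProperty once, detached from any instance.
const hasOwn = Function.prototype.call.bind(Object.prototype.hasOwnProperty);

const obj = { hasOwnProperty: 'shadowed', foo: 1 };
hasOwn(obj, 'foo');      // true: own property, despite the shadowing key
hasOwn(obj, 'toString'); // false: inherited only, not own
```

Because `hasOwn` never looks the method up through the instance, later
shadowing or reassignment on the object cannot affect it.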
The fact that a language is static perhaps doesn't cancel the fact
that code can still be injected, etc. So again, IMO a "correct
program" is not about "let's freeze everything here". And a
"predefined and predictable contract" also relates mostly to
static languages. In "duck typing" exactly this contract can be
achieved at runtime -- one line earlier an object cannot pass the
"duck test"; on the next line (after some mutations) it already
can, it satisfies the contract -- and this is achieved without
static checks in the language.
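The duck-test point above can be sketched directly: a contract checked
at runtime can fail on one line and pass on the next, after a mutation
(`isDuck` and `quack` are illustrative names, not from the thread):

```javascript
// A runtime "duck test": does the value satisfy the contract right now?
function isDuck(x) {
  return typeof x.quack === 'function';
}

const bird = {};
isDuck(bird); // false: the contract is not satisfied yet

bird.quack = function () { return 'quack'; };
isDuck(bird); // true: after the mutation, the same object passes
```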
Dmitry.
_______________________________________________
es-discuss mailing list
[email protected]
https://mail.mozilla.org/listinfo/es-discuss