Re: Direct proxies update

2011-11-30 Thread David Bruant
On 30/11/2011 06:56, Allen Wirfs-Brock wrote:
 On Nov 30, 2011, at 10:24 AM, David Bruant wrote:
 On 29/11/2011 23:07, Allen Wirfs-Brock wrote:
 ...
 Objects serve as one of our primary abstraction mechanisms (the other is 
 functions, and function closures have similar allocation issues). Anytime 
 you tell programmers not to allocate, you take away their ability to use 
 abstraction to deal with complexity.
 I agree with you, with some restrictions.
 - For a native API, the cost of a function closure is zero (since the function 
 does not need a scope to capture variables).
 - Objects are an interesting abstraction as long as they have state.
 For the specific example of a Reflection API, the stateless API that Tom 
 started seems to prove that a reflection API does not need state. In that 
 case, why bother allocating objects?
 The state is explicitly passed as arguments.  Most important is the first 
 argument, which identifies the object.  The client must keep track of this 
 state and explicitly associate it with each call.
Indeed. I realized after posting that what I said was stupid.

  Clients have been known to make mistakes and pass the wrong object to such 
 methods.
Was this a motivation for the creation of object-oriented languages?

This is an interesting argument. I think a particular case where such an
error happens is when you have methods like appendChild(a, b). It may
indeed be confusing, while a.appendChild(b) makes it clearer that
(hopefully) b is appended to a.

Back to the design of a Reflection API, I think I agree that it may be
clearer to have 'Mirror.on(a).hasPrototype(b)' than
'Reflect.hasPrototype(a, b)', if that's what you're advocating for.

 One of the things that an object based API does is make the association 
 between that state and the functions implicit by encapsulating the state and 
 the functions together as an object and automatically associating them during 
 method calls.  This makes it easy for clients to do things that are hard 
 given the other approach.  For example, it allows a client to be written 
 that is capable of transparently dealing with different implementations of a 
 common API.  In an earlier message I described the example of an inspector 
 client that is able to display information about objects without knowing 
 where or how the object is implemented.  A different reason for using objects 
 in a reflection API is so you can easily attenuate authority.  For example, 
 for many clients it may be sufficient to provide them with non-mutating 
 mirrors that only allow inspection.  They do this by excluding from the 
 mirror objects all mutation methods.
I think what I am missing is an understanding of how this is better than
creating your own abstraction and whitelisting the methods you want to
expose from a functional API.
Also, it is just as easy to attenuate a functional Reflection API by
excluding the methods you do not want.
In each case, the person who wants to attenuate authority over the
reflection API has to take some action, and it is not clear that the
object-oriented API makes this task easier.
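
A rough sketch of what I mean, in plain ES5-ish code (the Reflect.* names
follow the proposed @reflect module; the mirror factory below is purely
hypothetical and only there for comparison):
-
// (a) Attenuating the functional API: whitelist the non-mutating functions.
var ReadOnlyReflect = {
  has: Reflect.has,
  get: Reflect.get
  // set, deleteProperty, defineProperty, ... deliberately excluded
};

// (b) Attenuating the mirror API: a factory that omits mutation methods.
function inspectionMirrorOn(target) {
  return {
    has: function (name) { return Reflect.has(target, name); },
    get: function (name) { return Reflect.get(target, name); }
    // no mutating methods on this mirror
  };
}
-
Either way, someone has to write roughly the same amount of code to express
the attenuation.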


 A good GC should (and can) make allocation and reclamation of highly 
 ephemeral objects so cheap that developers simply shouldn't worry about it.
 I agree on the reclamation part, but I don't understand what a GC can do 
 about allocation of ephemeral (or not) objects.
 A good bump allocator
I thought it was an expression, not a sort of allocator...

 simply has a linear memory area where objects are all allocated simply by 
 bumping the pointer to the next available slot.  If you need to allocate a 
 three slot object you just increment the allocation pointer by (3+h)*slotSize, 
 fill in the object slots, and finally compare against an upper bound.  This is 
 actually quite similar to how local variables are allocated on the stack.  h 
 is the number of overhead slots needed to form an object header so the slots 
 can be processed as an object.  Header size is dependent upon trade-offs in 
 the overall design.  2 is a pretty good value, 1 is possible, 3 or more 
 suggests that there may be room to tighten up the design.  For JS, you have 
 to assume that you are on a code path that is hot enough that the 
 implementation has actually been able to assign a shape to the object (in 
 this case, knows that it has 3 slots, etc.) that is being allocated.  (If you 
 aren't on such a hot path, why do you care?)

 This is not to say that there are no situations where excessive allocations 
 may cause performance issues but such situations should be outliers that 
 only need to be dealt with when they are actually identified as being a 
 bottleneck.  To over simplify: a good bump allocation makes object creation 
 nearly as efficient as assigning to local variables and a good 
 multi-generation ephemeral collector has a GC cost that is proportional to 
 the number of retained objects not the number of allocated objects. Objects 
 that are created and 

Re: Direct proxies update

2011-11-29 Thread Mark S. Miller
On Tue, Nov 29, 2011 at 11:03 AM, David Bruant bruan...@gmail.com wrote:

  On 29/11/2011 19:05, Mark S. Miller wrote:

 On Tue, Nov 29, 2011 at 10:01 AM, David Bruant bruan...@gmail.com wrote:

  On 29/11/2011 18:40, Tom Van Cutsem wrote:

 [...]

   The general rule here is: if your code needs to handle both local and
 remote values, deal with the remote/async case only. The local case should
 be a subset of the remote case.

  Oh ok, interesting.
 ... but does that mean that as soon as we bring concurrency (and
 asynchrony) to ECMAScript, every API manipulating objects (or
 potentially any remote value)

  should be designed in the async style (additional callback argument
 instead of return value)

   ?


  Hi David, could you complete your question? Thanks.

 sorry.

 I think that the answer to my question is to keep designing APIs as has been
 done, but to return a promise in the asynchronous case; the API client will
 then use the pattern Tom showed ('Q(a).when(function(val){})').


Yes. Or 'Q(a).get(foo)' or 'Q(a).send(foo, b, c)' or their respective
sugared forms 'a ! foo' or 'a ! foo(b, c)', depending on what you want to do
with 'a'. Note that if 'a' designates a remote object, in
'Q(a).when(function(val){...})', 'val' will still be bound to a far
reference, which is itself a form of promise whose '.' accesses the promise
API rather than the API of the remote target object. If you invoke the
designated object's API simply with !, that works whether 'a' is a
non-promise, a promise for a local object, or a promise for a remote
object. In all cases, the value of the infix ! expression is reliably a
promise.
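
To make the pattern concrete, here is a tiny illustrative shim of the
Q(a).when / .get / .send shape discussed above. It is not Kris Kowal's actual
Q; the method names follow this thread, and the built-in Promise is used
only for brevity:
-
function Q(a) {
  var p = Promise.resolve(a);   // 'a' may be a plain value or a promise
  return {
    when: function (f) { return p.then(f); },
    get:  function (name) {
      return p.then(function (o) { return o[name]; });
    },
    send: function (name) {
      var args = Array.prototype.slice.call(arguments, 1);
      return p.then(function (o) { return o[name].apply(o, args); });
    }
  };
}

// Usage, whether 'a' is a plain object or a promise for one:
var a = { foo: function (x, y) { return x + y; } };
Q(a).when(function (val) { console.log('resolved:', val); });
Q(a).get('foo');           // promise for a.foo
Q(a).send('foo', 1, 2);    // promise for a.foo(1, 2)
// The proposed infix ! sugar would read:  a ! foo   and   a ! foo(1, 2)
-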



 The Reflection API could do that (that's actually what Tom suggested at
 some point) and a proxy reflecting a remote object could also return
 promises.


I don't understand.



 Promises and the unifying Q(a).when seem to be what saves us from
 designing two APIs. Looking forward to seeing this in ECMAScript.


Me too! Except for the infix ! sugar, all this can be accomplished today
by using a Q library, such as Kris Kowal's.




 Very much like what Tom said about Mirror.on(obj).has, maybe, for the local
 case, instantiating a promise for a local value could be avoided.
 What about 'Q.when(a, function(val){});' or 'When(a, function(val){})', in
 which a is either a promise or a local value and this acts like we'd expect
 'Q(a).when(function(val){})' to?


Are you just concerned with avoiding an extra allocation, or am I missing
some other issue here?


-- 
Cheers,
--MarkM


Re: Direct proxies update

2011-11-29 Thread David Bruant
On 29/11/2011 21:24, Mark S. Miller wrote:
 On Tue, Nov 29, 2011 at 11:03 AM, David Bruant bruan...@gmail.com wrote:

 On 29/11/2011 19:05, Mark S. Miller wrote:
 On Tue, Nov 29, 2011 at 10:01 AM, David Bruant
 bruan...@gmail.com wrote:

 On 29/11/2011 18:40, Tom Van Cutsem wrote:

 [...] 

 The general rule here is: if your code needs to handle both
 local and remote values, deal with the remote/async case
 only. The local case should be a subset of the remote case.
 Oh ok, interesting.
 ... but does that mean that as soon as we bring concurrency
 (and asynchrony) to ECMAScript, every API manipulating
 objects (or potentially any remote value)

 should be designed in the async style (additional callback argument
 instead of return value)

 ?


 Hi David, could you complete your question? Thanks.
 sorry.

 I think that the answer to my question is to keep designing APIs
 as has been done, but to return a promise in the asynchronous case;
 the API client will then use the pattern Tom showed
 ('Q(a).when(function(val){})').


 Yes. Or 'Q(a).get(foo)' or 'Q(a).send(foo, b, c)' or their
 respective sugared forms 'a ! foo' or 'a ! foo(b, c)', depending on
 what you want to do with 'a'. Note that if 'a' designates a remote
 object, in 'Q(a).when(function(val){...})', 'val' will still be bound
 to a far reference, which is itself a form of promise whose '.' accesses
 the promise API rather than the API of the remote target object.
 If you invoke the designated object's API simply with !, that works
 whether 'a' is a non-promise, a promise for a local object, or a
 promise for a remote object. In all cases, the value of the infix !
 expression is reliably a promise.

  

 The Reflection API could do that (that's actually what Tom
 suggested at some point) and a proxy reflecting a remote object
 could also return promises.


 I don't understand.
In order to support reflection of both local and remote objects, the
Reflection API could return promises: Reflect.has(o, 'a') would return a
boolean if o is local, or a promise for a boolean if o is remote.

For the second part, I was saying that
http://wiki.ecmascript.org/doku.php?id=harmony:proxies#an_eventual_reference_proxy
could be reimplemented to return promises instead of using setTimeout(0).
But I'm a bit confused by this example, because some things are async
(defineProperty, delete, etc.), but some others are synchronous
(getOwnPropertyNames, has, etc.).
Shouldn't everything return promises?
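
To illustrate the idea (the remote side is faked with a hypothetical marker
and a setTimeout; only the shape of the API matters here):
-
function isRemote(obj) {
  // Hypothetical marker standing in for a real remote-object layer.
  return obj != null && obj.__isRemoteStub__ === true;
}

function reflectHas(target, name) {
  if (isRemote(target)) {
    // Remote: the answer only arrives asynchronously -> promise for a boolean.
    return new Promise(function (resolve) {
      setTimeout(function () { resolve(Reflect.has(target, name)); }, 0);
    });
  }
  // Local: a plain boolean, answered synchronously.
  return Reflect.has(target, name);
}
-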

  


 Very much like what Tom said about Mirror.on(obj).has, maybe, for
 the local case, instantiating a promise for a local value could be
 avoided.
 What about 'Q.when(a, function(val){});' or 'When(a,
 function(val){})', in which a is either a promise or a local value
 and this acts like we'd expect 'Q(a).when(function(val){})' to?


 Are you just concerned with avoiding an extra allocation, or am I
 missing some other issue here?
Avoiding an extra allocation is the only worry for this last point, very
much like the mirror-allocation worry Tom expressed at
https://mail.mozilla.org/pipermail/es-discuss/2011-November/018734.html

Digression about memory in JS implementations:
I've been following the MemShrink effort in Firefox. Data structures
have been shrunk and fragmentation has been reduced, making better use
of memory, but I have seen much less work toward reducing the number of
allocations. This is certainly because working out whether an allocation
is actually required is usually complicated.
I don't know what the exact status of implementations is, but what
happens in current JS engines when the expression '[].forEach.call' is
evaluated? Is the allocation of an array actually performed? Hopefully
not, though I would not be surprised if it were.

Back to promises, it seems that Q(p).when(f) may become a common
programming pattern to express "if p is a local value, call f at the next
turn with p as argument; if p is a promise, call f with its resolution once
resolved". If it does, it means that Q(p) will generate a promise only to
throw it away in the local-value case.
As usual in JavaScript, static analysis cannot avoid the allocation,
because Q(p) could return anything (since Q could be overridden or come
from who-knows-where).
On the other hand, with a functional API like 'when(p, f)', we avoid the
allocation by design and can express the exact same thing (see the sketch
below).
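
A hand-rolled sketch of such a 'when' (not any existing library's API, just
to show that the local case needs no promise allocation at all):
-
function when(p, f) {
  if (p && typeof p.when === 'function') {
    // Already a promise: defer to its own 'when'.
    p.when(f);
  } else {
    // Local value: call f in a later turn; no promise is allocated for p.
    setTimeout(function () { f(p); }, 0);
  }
}

when(42, function (v) { console.log(v); });   // local value, no extra object
-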


Taken from a different perspective, if we start designing APIs which
return either a local value or a promise for a value, maybe the promise
API should work with both (instead of forcing everything to be turned
into a promise before using the API, as is the case now).
p.when is the only part of the API that would be affected, I think.


Looking through Promise methods

Re: Direct proxies update

2011-11-29 Thread David Bruant
On 29/11/2011 23:07, Allen Wirfs-Brock wrote:

 On Nov 30, 2011, at 8:15 AM, David Bruant wrote:
 Avoiding an extra allocation is the only worry for this last point,
 very much like the mirror-allocation worry Tom expressed at
 https://mail.mozilla.org/pipermail/es-discuss/2011-November/018734.html

 Digression about memory in JS implementations:
 I've been following the MemShrink effort in Firefox. Data structures
 have been shrunk and fragmentation has been reduced, making better
 use of memory, but I have seen much less work toward reducing the
 number of allocations. This is certainly because working out whether
 an allocation is actually required is usually complicated.

 A sign that your garbage collector isn't good enough:  People are
 writing style guides that tell developers that they should avoid
 allocating objects.

 Objects serve as one of our primary abstraction mechanisms (the other
 is functions, and function closures have similar allocation issues).
 Anytime you tell programmers not to allocate, you take away their
 ability to use abstraction to deal with complexity.
I agree with you, with some restrictions.
- For a native API, the cost of a function closure is zero (since the
function does not need a scope to capture variables).
- Objects are an interesting abstraction as long as they have state.
For the specific example of a Reflection API, the stateless API that Tom
started seems to prove that a reflection API does not need state. In
that case, why bother allocating objects?
That's the same reason why math functions are properties of the Math
object and not math objects.
However, having an object-oriented DOM makes a lot of sense to me since
objects have state (children, node type, etc.). I'm not sure we could
easily and conveniently turn the DOM into a set of stateless functions.

 A good GC should (and can) make allocation and reclamation of highly
 ephemeral objects so cheap that developers simply shouldn't worry
 about it.
I agree on the reclamation part, but I don't understand what a GC can do
about allocation of ephemeral (or not) objects.

 This is not to say that there are no situations where excessive
 allocations may cause performance issues but such situations should be
 outliers that only need to be dealt with when they are actually
 identified as being a bottleneck.  To over simplify: a good bump
 allocation makes object creation nearly as efficient as assigning to
 local variables and a good multi-generation ephemeral collector has a
 GC cost that is proportional to the number of retained objects not the
 number of allocated objects. Objects that are created and discarded
 within the span of a single ephemeral collection cycle should have a
 very low cost.  This has all been demonstrated in high perf memory
 managers for Smalltalk and Lisp.
If a garbage collection is triggered when a generation is full, then your
total GC cost remains proportional to your number of allocations: more
allocations mean more collection cycles, even if each cycle only scans the
retained objects.

If garbage collection is triggered at constant intervals instead, then it
probably runs too often and collects nothing (or too little).

 I don't know what the exact status of implementations is, but what
 happens in current JS engines when the expression '[].forEach.call'
 is evaluated? Is the allocation of an array actually performed? Hopefully
 not, though I would not be surprised if it were.

 I suspect they don't optimize this although arguably they should.
 However, if you buy my argument then it really doesn't make much
 difference.  Implementations should put the effort into building
 better GCs.
For this particular case, where the object is not merely ephemeral but
completely useless, a GC will still cost you something (even if very
little), while static analysis can tell the engine not to allocate at all.
I'm not talking about a smaller allocation+discard cost, but about
nullifying it with a constant (and small) amount of static analysis.
-
var a = [1];
function f(e, i) { a[i] = Math.random(); }

while (true) {
  // The empty array literal is only needed to reach Array.prototype.forEach;
  // the array itself is never used.
  [].forEach.call(a, f);
}
-
Without static analysis, the empty array literal is allocated on each
iteration and this will eventually run the GC. With static analysis, the GC
has no reason to run: the empty array does not need to be allocated, since
its reference is never used after the retrieval of forEach (which is looked
up directly on Array.prototype if the implementation is conformant to ES5.1).
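
For what it's worth, the useless literal can already be avoided by hand
today, whatever the engine does:
-
var a = [1];
function f(e, i) { a[i] = Math.random(); }

// Reach forEach directly on Array.prototype (cached once), so no throwaway
// array literal is allocated on each iteration.
var forEach = Array.prototype.forEach;

while (true) {
  forEach.call(a, f);
}
-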


To take actual garbage as a metaphor: I am pro recycling (garbage
collection), but rather than recycling, I prefer to avoid buying things
with excessive packaging in the first place. This way I produce less
garbage (fewer allocations). Maybe we should apply the basics of ecology to
memory management? ;-)

I agree with you that abstractions are a good thing and I won't compromise
on them when they are necessary. But that should not be an excuse to
allocate for no reason, even if it's cheap. And while garbage collection
should be improved, if we can find cheap ways to allocate less (at the
engine or the programmer level), we should apply them.

 ...
 Looking through Promise methods
 

Re: Direct proxies update

2011-11-29 Thread Allen Wirfs-Brock

On Nov 30, 2011, at 10:24 AM, David Bruant wrote:

 On 29/11/2011 23:07, Allen Wirfs-Brock wrote:
 
 ...
 Objects serve as one of our primary abstraction mechanisms (the other is 
 functions, and function closures have similar allocation issues). Anytime you 
 tell programmers not to allocate, you take away their ability to use 
 abstraction to deal with complexity.
 I agree with you, with some restrictions.
 - For a native API, the cost of a function closure is zero (since the function 
 does not need a scope to capture variables).
 - Objects are an interesting abstraction as long as they have state.
 For the specific example of a Reflection API, the stateless API that Tom 
 started seems to prove that a reflection API does not need state. In that 
 case, why bother allocating objects?

The state is explicitly passed as arguments.  Most important is the first 
argument, which identifies the object.  The client must keep track of this state 
and explicitly associate it with each call.  Clients have been known to make 
mistakes and pass the wrong object to such methods. One of the things that an 
object based API does is make the association between that state and the 
functions implicit, by encapsulating the state and the functions together as an 
object and automatically associating them during method calls.  This makes it 
easy for clients to do things that are hard given the other approach.  For 
example, it allows a client to be written that is capable of transparently 
dealing with different implementations of a common API.  In an earlier message 
I described the example of an inspector client that is able to display 
information about objects without knowing where or how the object is 
implemented.  A different reason for using objects in a reflection API is so 
you can easily attenuate authority.  For example, for many clients it may be 
sufficient to provide them with non-mutating mirrors that only allow 
inspection.  They do this by excluding from the mirror objects all mutation 
methods.

 That's the same reason why math functions are properties of the Math object 
 and not math objects.

Which works fine as long as you only have one kind of number.  But if you add 
multiple numeric data types then you are either going to have to have 
additional Math objects (ArbitraryPrecisionMath, DecimalFloatMath, etc), have 
generic functions (a dual of objects), or turn them into methods.

 However, having an object-oriented DOM makes a lot of sense to me since 
 objects have state (children, node type, etc.). I'm not sure we could 
 easily and conveniently turn the DOM into a set of stateless functions.

The same way you do it in C or Pascal or assembly languages.  You have state 
(often structs) and functions, and you try to make sure you always call the 
appropriate functions with the right kind of state. That's what objects do for 
you.  They automate the necessary housekeeping.
 
 A good GC should (and can) make allocation and reclamation of highly 
 ephemeral objects so cheap that developers simply shouldn't worry about it.
 I agree on the reclamation part, but I don't understand what a GC can do 
 about allocation of ephemeral (or not) objects.

A good bump allocator simply has a linear memory area where objects are all 
allocated simply by bumping the pointer to the next available slot.  If you 
need to allocate a three slot object you just increment the allocation pointer 
by (3+h)*slotSize, fill in the object slots, and finally compare against an 
upper bound.  This is actually quite similar to how local variables are 
allocated on the stack.  h is the number of overhead slots needed to form an 
object header so the slots can be processed as an object.  Header size is 
dependent upon trade-offs in the overall design.  2 is a pretty good value, 1 
is possible, 3 or more suggests that there may be room to tighten up the 
design.  For JS, you have to assume that you are on a code path that is hot 
enough that the implementation has actually been able to assign a shape to the 
object (in this case, knows that it has 3 slots, etc.) that is being allocated.  
(If you aren't on such a hot path, why do you care?)
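
Here is a toy version of that fast path in JavaScript over a flat array, just
to illustrate the bookkeeping involved (real collectors do this over raw
memory in native code; the header size and layout here are illustrative):
-
var NURSERY_SLOTS = 1 << 16;
var nursery = new Float64Array(NURSERY_SLOTS);   // the "linear memory area"
var allocPtr = 0;                                // next free slot
var H = 2;                                       // header slots (illustrative)

function bumpAlloc(slotCount) {
  var base = allocPtr;
  var next = base + H + slotCount;
  if (next > NURSERY_SLOTS) {
    // Nursery full: this is where a minor (ephemeral) collection would run.
    throw new Error('nursery exhausted - trigger minor GC');
  }
  allocPtr = next;
  nursery[base] = slotCount;    // toy header: just record the object size
  return base + H;              // "address" of the object's first slot
}

// Allocating a three slot object is just a bump and a bounds check:
var obj = bumpAlloc(3);
nursery[obj] = 1; nursery[obj + 1] = 2; nursery[obj + 2] = 3;
-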

 
 This is not to say that there are no situations where excessive allocations 
 may cause performance issues but such situations should be outliers that 
 only need to be dealt with when they are actually identified as being a 
 bottleneck.  To over simplify: a good bump allocation makes object creation 
 nearly as efficient as assigning to local variables and a good 
 multi-generation ephemeral collector has a GC cost that is proportional to 
 the number of retained objects not the number of allocated objects. Objects 
 that are created and discarded within the span of a single ephemeral 
 collection cycle should have a very low cost.  This has all been 
 demonstrated in high perf memory managers for Smalltalk and Lisp.
 If a garbage collection is triggered when a generation is full, then, your 

Re: Direct proxies update

2011-11-28 Thread Tom Van Cutsem
2011/11/28 Allen Wirfs-Brock al...@wirfs-brock.com

 too many ways to do the same thing is desirable.  We already have a number
 of reflection  functions hung off of Object.   Your proposal replicates
 most of those and adds others as functions in the @reflect module.  Such
 duplication is  probably unavoidable if we want to transition from the
 Object based APIs.  But if we also added a mirrors based API that also
 duplicates some of the same functionality we will have three different ways
 to do some things.  One old way and two new ways.  That seems like too
 many.


The duplication of existing Object.* reflection methods is unfortunate, but
a direct consequence of evolutionary growth. I don't have any solutions for
avoiding it.

 In the common case of forwarding an intercepted operation to a target
 object, a mirror API requires the allocation of a mirror on the target,
 just to be able to invoke the proper method, only for that mirror to be
 discarded right away.


 Yes, I thought about this.  One way to avoid the per call allocation is
 for a proxy to keep as part of its state an appropriate mirror instance on
 the target object. Proxies that need to do mirror based reflection would
 create the mirror when the target is set.  Proxies that don't reflect don't
 need to capture such a mirror.


That would work, although how does the proxy know which mirror factory to
use? (if it uses the default one, there's no polymorphism and you might
as well use the Reflect.* API)

I guess one could pass the proxy a mirror to the target, rather than a
direct reference to the target itself. It still isn't as 'lean' though:
Proxy(target, handler) vs. Proxy(Mirror.on(target), handler)
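
Sketched out, the two handler shapes look like this (Mirror.on stands in for
a hypothetical mirror factory; the trap set shown is deliberately minimal):
-
// (a) Functional forwarding: no per-call allocation, no extra handler state.
var forwardingHandler = {
  has: function (target, name) { return Reflect.has(target, name); },
  get: function (target, name) { return Reflect.get(target, name); }
};

// (b) Mirror-based forwarding: capture one mirror when the handler is made,
// so no mirror is allocated per intercepted operation.
function makeMirrorHandler(target, makeMirror) {
  var mirror = makeMirror(target);   // allocated once, up front
  return {
    has: function (t, name) { return mirror.has(name); },
    get: function (t, name) { return mirror.get(name); }
  };
}

// Proxy(target, forwardingHandler)
// vs.
// Proxy(target, makeMirrorHandler(target, Mirror.on))
-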

Cheers,
Tom


Re: Direct proxies update

2011-11-28 Thread David Bruant
On 28/11/2011 01:07, Allen Wirfs-Brock wrote:

 On Nov 26, 2011, at 11:52 AM, David Bruant wrote:

 On 24/11/2011 22:29, Tom Van Cutsem wrote:
 2011/11/24 Allen Wirfs-Brock al...@wirfs-brock.com


 If we are going to have a @reflection module that is of broader
 applicability than just writing proxy handlers, I'd like us to
 consider a Mirrors style API.  Otherwise I'm concerned we will
 continue to have a proliferation of reflection APIs as we move
 beyond Proxies into other use cases.


 I'm not sure I understand. Additional reflection functionality can
 easily be added to the @reflect module. It need not be exclusive to
 Proxies.
  

 At https://github.com/allenwb/jsmirrors is a first cut of a
 mirrors API that I threw together earlier this year for
 JavaScript.  I don't hold it up as a finished product but it
 could be a starting point for this sort of design.


 At the core is a root question whether we want to expose a
 functional or object-oriented API for reflection functionality.
  These are two different styles each of which is probably
 favored by a different subset of our user community.  I suspect
 that everyone knows which sub-community I align with. The main
 argument for the OO style is that it allows creation of client
 code that can be oblivious to the underlying implementation of
 the API.  This allows for more flexible client code that has
 greater potential for reuse.


 I'm sympathetic to mirror-based APIs myself. However, note that a
 mirror-based API would require an extra allocation as opposed to the
 proposed API:

 // Proposed API:
 Reflect.has(object, name)

 // Mirror-style API:
 Mirror.on(object).has(name)
 I have been thinking about this a lot and I don't find any advantage
 to Mirror.on(object).*(...rest) over Reflect.*(object, ...rest)
 ... for local objects.
 After reading http://bracha.org/mirrors.pdf , I have realized that
 the mirror API aims at more than providing reflection: it offers a uniform
 API for other sorts of objects including, for instance, remote objects.

 Unfortunately, I am not sure I can go further, because I haven't
 found a definition of what a remote object is and don't really know
 how reflecting on them differs from reflecting on local objects.
 Among the questions:
 * What is a remote object?
 * How does it differ from a local object?
 * Do you need a local object to emulate a remote object?
 * Does reflecting on remote objects impose synchrony (waiting for
 the remote object to respond before telling what the answer is)?

 Did you look at my blog posts and the jsmirrors code?  It includes an
 example of using a common mirror API to access both local objects and
 a serialized external object representation. Such a representation
 can easily be used to access live remote objects.
I agree, but I don't understand how you can use the same API for both
local and remote objects. Or maybe you don't (you retrieve the representation
asynchronously and create the mirror once the representation has arrived)?

 In fact, on my to-do list is to extend jsmirrors to do so for accessing
 objects in web workers.
I'm looking forward to seeing your implementation.

David


Re: Direct proxies update

2011-11-27 Thread Allen Wirfs-Brock

On Nov 26, 2011, at 11:52 AM, David Bruant wrote:

 On 24/11/2011 22:29, Tom Van Cutsem wrote:
 
 2011/11/24 Allen Wirfs-Brock al...@wirfs-brock.com
 
 If we are going to have a @reflection module that is of broader 
 applicability than just writing proxy handlers, I'd like us to consider a 
 Mirrors style API.  Otherwise I'm concerned we will continue to have a 
 proliferation of reflection APIs as we move beyond Proxies into other use 
 cases.
 
 I'm not sure I understand. Additional reflection functionality can easily be 
 added to the @reflect module. It need not be exclusive to Proxies.
  
 At https://github.com/allenwb/jsmirrors is a first cut of a mirrors API that 
 I threw together earlier this year for JavaScript.  I don't hold it up as a 
 finished product but it could be a starting point for this sort of design.
 
 At the core is a root question whether we want to expose a functional or 
 object-oriented API for reflection functionality.  These are two different 
 styles each of which is probably favored by a different subset of our user 
 community.  I suspect that everyone knows which sub-community I align with. 
 The main argument for the OO style is that it allows creation of client code 
 that can be oblivious to the underlying implementation of the API.  This 
 allows for more flexible client code that has greater potential for reuse.
 
 I'm sympathetic to mirror-based APIs myself. However, note that a 
 mirror-based API would require an extra allocation as opposed to the 
 proposed API:
 
 // Proposed API:
 Reflect.has(object, name)
 
 // Mirror-style API:
 Mirror.on(object).has(name)
 I have been thinking about this a lot and I don't find any advantage to 
 Mirror.on(object).*(...rest) over Reflect.*(object, ...rest) ... for 
 local objects.
 After reading http://bracha.org/mirrors.pdf , I have realized that the mirror 
 API aims at more than providing reflection: it offers a uniform API for other 
 sorts of objects including, for instance, remote objects.
 
 Unfortunately, I am not sure I can go further, because I haven't found a 
 definition of what a remote object is and don't really know how reflecting on 
 them differs from reflecting on local objects.
 Among the questions:
 * What is a remote object?
 * How does it differ from a local object?
 * Do you need a local object to emulate a remote object?
 * Does reflecting on remote objects impose synchrony (waiting for the 
 remote object to respond before telling what the answer is)?

Did you look at my blog posts and the jsmirrors code?  It includes an example 
of using a common mirror API to access both local objects and a serialized 
external object representation.  Such a representation can easily be used to 
access live remote objects.  In fact, on my to-do list is to extend jsmirrors to 
do so for accessing objects in web workers.

Allen


Re: Direct proxies update

2011-11-27 Thread Brendan Eich
On Nov 26, 2011, at 3:55 AM, David Bruant wrote:

 On 26/11/2011 01:52, David Bruant wrote:
 
 On 24/11/2011 22:29, Tom Van Cutsem wrote:
 
 2011/11/24 Allen Wirfs-Brock al...@wirfs-brock.com
 At the core is a root question whether we want to expose a functional or 
 object-oriented API for reflection functionality.
  (...)
 I realized what that sentence meant yesterday, very late. And I also realized 
 that everything Tom said was legitimate. A Mirror style API (an 
 object-oriented API) can be built on top of the Reflect API (a functional 
 API, as I understand it). The opposite is also true, but comes with an 
 overhead: maybe in the future it will be possible to optimize expressions 
 like Mirror.on(object).has('bla') (used to implement Reflect.has(object, 
 'bla') if the Mirror style API is the one provided), but it will always 
 require some additional analysis. The opposite direction (mirrors on top of 
 Reflect) does not.
 
 Consequently, regarding the built-in implementation, I would favor a 
 functional API as well, unless the mirror API has advantages I am oblivious 
 to.

I'm with you. JS has first-class functions *and* objects; it is not an OOP-only 
or OOP-first language. The (dead? nearly) hand of Java weighed heavily on some 
parts, and methods make sense in many cases, but the cost of temporary objects 
shouldn't be imposed if a functional API at the lowest level suffices.

/be


Re: Direct proxies update

2011-11-24 Thread David Herman
On Nov 24, 2011, at 7:37 AM, David Bruant wrote:

 On 24/11/2011 16:04, Sam Tobin-Hochstadt wrote:
 You can't do the following:
 
 import {new, delete} from @reflect;
 
 because you can't bind `new' and `delete'.  Even if this were allowed,
 then `new(...)' would still be a syntax error.
 Oh ok... It actually is more an issue of destructuring than modules
 themselves.

Sort of. It's not even really technically a problem with destructuring; we 
could allow that, but it would be useless, because you'd never be able to refer 
to them.

 Interestingly, it means that as soon as we have the module syntax out
 there, there will be pretty much no way to add a new reserved keyword
 (ever?), because someone may be using the identifier and adding the
 reserved keyword would break the module import.

This has nothing to do with modules. Adding a reserved word is *always* 
backwards-incompatible because someone could already be using it as a variable. 
Modules don't change this situation at all.

 import Reflect from @reflect

Almost.

module Reflect from @reflect;

You only use import to pull out exports from inside a module. (We've been 
experimenting with alternative syntaxes, btw. I'll report back on that soon.)

Dave



Re: Direct proxies update

2011-11-24 Thread Tom Van Cutsem
2011/11/24 Allen Wirfs-Brock al...@wirfs-brock.com


 If we are going to have a @reflection module that is of broader
 applicability than just writing proxy handlers, I'd like us to consider a
 Mirrors style API.  Otherwise I'm concerned we will continue to have a
 proliferation of reflection APIs as we move beyond Proxies into other use
 cases.


I'm not sure I understand. Additional reflection functionality can easily
be added to the @reflect module. It need not be exclusive to Proxies.


 At https://github.com/allenwb/jsmirrors is a first cut of a mirrors API
 that I threw together earlier this year for JavaScript.  I don't hold it up
 as a finished product but it could be a starting point for this sort of
 design.


 At the core is a root question whether we want to expose a functional or
 object-oriented API for reflection functionality.  These are two different
 styles each of which is probably favored by a different subset of our user
 community.  I suspect that everyone knows which sub-community I align with.
 The main argument for the OO style is that it allows creation of client
 code that can be oblivious to the underlying implementation of the API.
  This allows for more flexible client code that has greater potential for
 reuse.


I'm sympathetic to mirror-based APIs myself. However, note that a
mirror-based API would require an extra allocation as opposed to the
proposed API:

// Proposed API:
Reflect.has(object, name)

// Mirror-style API:
Mirror.on(object).has(name)

In the common case of forwarding an intercepted operation to a target
object, a mirror API requires the allocation of a mirror on the target,
just to be able to invoke the proper method, only for that mirror to be
discarded right away.

I don't see mirrors as being in conflict with this API though. Mirrors can
be perfectly layered on top.


 I haven't pushed for adopting mirrors into ES.next because I thought we
 already had too much on the table.  However, if we are going to create new
 reflection APIs then I think we should carefully consider the pros and cons
 of the mirrors style.


I don't understand why you think of the @reflect module as a new
reflection API: all of the functionality in it (save for the
VirtualHandler) was already present in the original Proxy proposal, where
most of the Reflect.* methods were methods on the default
ForwardingHandler. Putting them in a separate @reflect module seems the
right thing to do now that we have a module system.

I'm sympathetic to mirrors, but I don't think it's an either/or story. A
mirror-based API can be layered on top of the standard @reflect module. I'm
not sure it needs to be standardized now though: the current API provides
the minimum required functionality with minimum overhead.
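
For instance, a minimal mirror layer over the functional API could look like
this (Mirror.on follows the naming used earlier in the thread; the method set
is illustrative, not a full design):
-
var Mirror = {
  on: function (target) {
    return {
      has: function (name) { return Reflect.has(target, name); },
      get: function (name) { return Reflect.get(target, name); }
    };
  }
};

// Mirror.on({a: 1}).has('a');   // true, via Reflect.has underneath
-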

Cheers,
Tom