On 01/11/2013 23:17, Jason Orendorff wrote:
On 11/1/13 1:52 PM, David Bruant wrote:
In any case, I've stopped being against weakrefs after a message by
Mark Miller[...]
I am now going to try to convince you that you shouldn't have been
convinced by this use case. :)
You can try :-p

To keep the object granularity across machines, in E, they've created
a protocol (CapTP) that, in essence, stretches object references across
machines. Cross-machine wrappers, if you will.
When designing this protocol, at some point comes the problem of GC.
Distributed GC...
First, read Terrence's first response in this thread. This is exactly
the kind of use case he is talking about, where GC is used as a general
resource-management workhorse.
I'm not convinced acyclic distributed GC is a good thing to support.
Just to clarify, I don't care about ADGC in isolation, but more about the idea of manipulating remote objects. That is something it will be hard to convince me isn't worth supporting. And some form of GC is necessary for remote objects; ADGC seems like one practical solution. I'm all ears for other solutions.

The idea is that by the magic of proxies, remote objects can be made to
look exactly like local objects. But then, you can't freely pass a local
object as an argument to a remote method. That would create a back edge.
The system must either block that with a runtime error or risk a cycle—a
silent memory leak on both sides of the boundary. So the boundary is not
transparent after all.
Not necessarily. If one side drops all references to the remote object, it can tell the vat the object comes from; the cycle is broken and GC can happen. This happens whenever the argument is not used beyond the method call. The cycle leak occurs only when remote references to some objects remain while all local refs have been dropped. The description of ADGC in the CapTP protocol [1] is nothing but a cross-machine refcounting algorithm, with the same issues that are inherent to that algorithm (cross-machine cycles), but temporary cycles (as you seem to describe) are not a problem.
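To make the refcounting idea concrete, here is a toy sketch of an export table with wire counts, in the spirit of CapTP's ADGC [1]. All names and message shapes here (ExportTable, reExport, gcExport) are illustrative assumptions, not the real CapTP wire protocol:

```javascript
// Toy sketch of acyclic distributed GC via cross-machine refcounting.
// Each exported object gets an entry with a wire count; the importing
// side reports back when its local references drop, and the entry is
// freed (making the object locally collectable) when the count hits zero.
class ExportTable {
  constructor() { this.entries = new Map(); this.nextId = 1; }
  export(obj) {                      // hand out a fresh wire reference
    const id = this.nextId++;
    this.entries.set(id, { obj, wireCount: 1 });
    return id;
  }
  reExport(id) {                     // same object sent over the wire again
    this.entries.get(id).wireCount++;
  }
  gcExport(id, count) {              // remote side dropped `count` refs
    const entry = this.entries.get(id);
    entry.wireCount -= count;
    if (entry.wireCount === 0) this.entries.delete(id);
  }
}

const table = new ExportTable();
const id = table.export({ hello: "world" });
table.reExport(id);        // exported twice
table.gcExport(id, 2);     // importer reports both refs dropped
console.log(table.entries.has(id)); // false (entry freed, local GC can run)
```

A cross-machine cycle is exactly the case this scheme cannot free: each side's wire count stays above zero because the other side still holds a reference.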

The transparency doesn’t extend to performance or ease of use anyway.
I disagree. I doubt promise pipelining [2] (which dramatically reduces the impact of network latency in applications) can be made easier to achieve without this sort of transparency.
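As a rough illustration of why pipelining wants this sort of transparency: a far reference can queue method invocations on a promise for a remote object before that object has arrived, so dependent calls can share a round trip instead of paying latency for each. This is a toy sketch with made-up names (farRef, invoke), not E's or Q's actual machinery; a full implementation would also return far references from invoke so whole chains batch into one message:

```javascript
// Minimal sketch of the pipelining idea: messages sent to a promise
// for a remote object are queued immediately rather than waiting for
// the object to resolve first.
function farRef(onReady) {
  let queue = [];
  let target = null;
  onReady(obj => {               // called when the remote object "arrives"
    target = obj;
    queue.forEach(([method, args, resolve]) => resolve(target[method](...args)));
    queue = [];
  });
  return {
    invoke(method, ...args) {    // pipelined method call
      return new Promise(resolve => {
        if (target !== null) resolve(target[method](...args));
        else queue.push([method, args, resolve]);
      });
    }
  };
}

let deliver;
const remote = farRef(fn => { deliver = fn; });
const result = remote.invoke("toUpperCase"); // queued before the object exists
deliver("hello");                            // the remote object arrives
result.then(v => console.log(v));            // logs "HELLO"
```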

Sidebar: I think general distributed GC (with cycles) is considered
Hard. It's been studied. Mark would know better than me. But I think it
requires, at a minimum, a lot more cooperation from the GC, including
abstraction-busting APIs that we cannot safely expose (like those
Gecko's cycle collector uses to trace through the JS heap).
That, or an implementation of (or improvement upon) the idea of distributed mark-and-sweep. I agree it's Hard. That doesn't make it impractical. From [1]: "The advantage of taking this low road [ADGC] is that garbage collection can proceed by separate uncoordinated local interactions, with no global analysis. Of course, the disadvantage is that we leak live distributed cycles. At such a time as this proves to be a significant problem, we will revisit the issue of implementing a full Distributed Garbage Collector."

So far, ADGC has proven to be enough in practice. It will be time to revisit when that changes.

E is an incredible body of work. I really can't speak too highly of it;
it's amazing. But replicating E is not something I think people want to
use JS for.
The Q library (and, I guess, the promise community) is moving more and more in this direction. See promise.invoke, for instance:
https://github.com/kriskowal/q/wiki/API-Reference#promiseinvokemethodname-args

Another piece of anecdotal evidence is the API exposed by Google Apps Script (the server side is in JS):
https://developers.google.com/apps-script/guides/html/communication?hl=fr
You can define a server-side function f and call it on the client side with:
    google.script.run.f(...args)
It works as if a promise for the result was returned.
The args are serialized as data, so that's where it all stops, but the idea of transparently playing with objects defined on another machine is around. I said "anecdotal": I don't think the user base is massive, but some people use it.
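Since google.script.run is callback-based (withSuccessHandler/withFailureHandler), the promise-like feel described above can be approximated with a small wrapper. In the sketch below, the google global is stubbed so it runs outside Apps Script, and serverCall plus the echo function f are made up for illustration:

```javascript
// Stub of google.script.run so this sketch runs outside Apps Script.
// The real client API chains withSuccessHandler/withFailureHandler
// before calling the server function by name.
const google = {
  script: {
    run: {
      withSuccessHandler(onSuccess) {
        return {
          withFailureHandler(onFailure) {
            return {
              // fake server function: echoes its argument uppercased
              f(arg) { setTimeout(() => onSuccess(arg.toUpperCase()), 0); }
            };
          }
        };
      }
    }
  }
};

// Wrap a server function name into a promise-returning function.
function serverCall(name) {
  return (...args) =>
    new Promise((resolve, reject) =>
      google.script.run
        .withSuccessHandler(resolve)
        .withFailureHandler(reject)[name](...args));
}

const f = serverCall("f");
f("hello").then(result => console.log(result)); // logs "HELLO"
```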

Q and Google Apps Script are not the entire JS community, but I feel a growing appetite for transparent remote objects, especially now that JS can be used on both the client and the server side. A couple of years ago, I had a server-to-server communication use case. I designed an HTTP protocol and reinvented the wheel here and there. A remote object protocol might have accelerated things had I had it handy. I'll need to rewrite it from scratch at some point; I'll try that then.

One use case of cross-vat communication is the remote debugger
protocol implemented in Firefox/OS.
This is a really good point. I asked Jim Blandy about this. In the
debugger protocol, when you hit a breakpoint or debugger-statement, that
creates a "pause actor" in the protocol. Actors form a tree. Just about
everything you encounter while inspecting a paused debuggee is parented
by the pause actor. When you tell the debuggee to continue, the pause
actor and all its descendants are released (with a single protocol request).

The protocol lets you explicitly promote a reference to some debuggee
object from "pause lifetime" to "thread lifetime", but we don't
currently have any code that uses that feature.

So as far as I know it is *impossible* for the debugger to be leaking
stuff beyond the next time you hit the Continue button!
Interesting. I should start reading about the debugger protocol in detail.
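For concreteness, the pause-actor lifetime Jason describes could be sketched like this (hypothetical names, not the actual Firefox debugger protocol): actors form a tree, and releasing the pause actor releases the whole subtree in one step.

```javascript
// Sketch of pause-lifetime actors: everything created while paused is
// parented under the pause actor, so one release request tears down
// the whole subtree when the debuggee continues.
class Actor {
  constructor(name, parent = null) {
    this.name = name;
    this.children = [];
    this.released = false;
    if (parent) parent.children.push(this);
  }
  release() {                      // one request releases the subtree
    this.released = true;
    this.children.forEach(child => child.release());
  }
}

const pause = new Actor("pause1");
const frame = new Actor("frame1", pause);
const objGrip = new Actor("obj1", frame);

pause.release();                   // "continue": the pause subtree goes away
console.log(objGrip.released);     // true
```

Promoting a grip to "thread lifetime" would then just mean reparenting it outside the pause actor's subtree before the release.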

David

[1] http://erights.org/elib/distrib/captp/dagc.html
[2] http://erights.org/elib/distrib/pipeline.html
_______________________________________________
dev-tech-js-engine-internals mailing list
dev-tech-js-engine-internals@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-js-engine-internals
