(factoring out this part of the conversation because it seems like a
bit of a sidetrack)

Mark and me:
>> > Why do you believe manual deallocation decisions will be easier in
>> > distributed systems than they are locally?
>>
>> I don't. That's why I cited several examples of systems that require
>> neither fine-grained manual deallocation nor distributed GC.
>
> I didn't say "fine-grained". Erlang requires manual deallocation of
> processes.

I don't think it does in practice, any more than UNIX does. How does a
UNIX admin (or the kernel) decide when to kill a process?

> Also, you did specify "untrusted". Distributed Erlang does not qualify,
> exactly because pids are forgeable.

That's fair, but let me clarify what I mean.

It's true that Erlang doesn't use an ocap model for security. An
Erlang process that doesn't check auth on incoming messages must be
protected by some sort of wrapper that does, typically some front-end
code plus a firewall.

This particular factoring of responsibilities is not Erlang-specific.
The Web is chock full of JS code (client and server) doing remote API
calls. These systems generally don't use object capabilities. They
don't use any distributed GC equivalent either, except perhaps in the
sense of "leases": server-side objects often live in sessions that
expire if unused for some period of time.
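To make the lease idea concrete, here's a toy sketch in Python (all
names mine, not taken from any real system) of a session table whose
entries are reclaimed if not touched within a time-to-live; each access
renews the lease:

```python
import time

class SessionStore:
    """Toy lease-based session table: entries expire if unused."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.sessions = {}  # session_id -> (object, last_touched)

    def put(self, session_id, obj, now=None):
        now = time.monotonic() if now is None else now
        self.sessions[session_id] = (obj, now)

    def get(self, session_id, now=None):
        """Return the object and renew its lease, or None if expired/absent."""
        now = time.monotonic() if now is None else now
        entry = self.sessions.get(session_id)
        if entry is None:
            return None
        obj, last_touched = entry
        if now - last_touched > self.ttl:
            del self.sessions[session_id]  # lease expired: reclaim the object
            return None
        self.sessions[session_id] = (obj, now)  # a touch renews the lease
        return obj

store = SessionStore(ttl_seconds=30)
store.put("s1", {"cart": []}, now=0.0)
assert store.get("s1", now=10.0) is not None  # within the lease, renewed
assert store.get("s1", now=100.0) is None     # lease expired, entry reclaimed
```

No distributed GC protocol, no reference counting across machines: the
server unilaterally reclaims anything the client stops touching.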

(Offhand, I would expect leases/sessions to continue being good enough
even if many systems did migrate to distributed objects and
capabilities. I don't think distributed GC was anyone's favorite
feature of RMI.)

> What do you mean then by "strong reference"? If Erlang pids are not strong
> references, then I don't understand what you are saying.

I just meant "strong reference" in the usual sense:
https://en.wikipedia.org/wiki/Strong_reference

A pid doesn't keep the referred-to process alive. A pid has no effect
at all on the process it addresses, local or remote.
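For contrast, here's the distinction in Python (not Erlang): a strong
reference keeps its target alive, while a weak reference, like a pid,
doesn't. (The analogy is loose. A pid is closer to a plain address than
even a weak reference, but the "doesn't keep it alive" part is the same.)

```python
import weakref

class Process:
    """Stand-in for some resource-holding object."""
    pass

p = Process()
strong = p             # strong reference: keeps the object alive
weak = weakref.ref(p)  # weak reference: does not

assert weak() is p     # target still alive while a strong ref exists
del p, strong          # drop all strong references
assert weak() is None  # target collected; the weak ref had no say
```

(The final assertion relies on CPython's reference counting collecting
the object as soon as the last strong reference is dropped.)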

-j
_______________________________________________
es-discuss mailing list
[email protected]
https://mail.mozilla.org/listinfo/es-discuss
