On 20 July 2013 07:51, Jonathan S. Shapiro <[email protected]> wrote:
> On Thu, Jul 18, 2013 at 8:09 PM, William ML Leslie
> <[email protected]> wrote:
>>
>> However, if disposables cannot escape a task anyway, then disposables
>> are task-local, so you already know (or, already can track) what needs
>> to be disposed of at task exit.
>
>
> Except, as I noted, that disposables can cross task boundaries and become
> shared in real systems.

Yes, capability theory has taught us to pass around file handles
rather than file names and URLs, and naive implementations of this
mean that resources are not task local.  But I was actually trying to
introduce a strawman, because even if they were, refcounting doesn't
solve the problem of prompt finalisation significantly better than any
other mechanism.  Refcounting would tell you about the objects that
would like finalisation just like finalising the task-local heap
would.

>> In any case, for a large number of programs, there needs to be a way
>> to specify finaliser order.
>
>
> You do not and cannot get that guarantee out of any semantically sane
> semantics of finalization. Finalizers, of necessity, execute in unspecified
> order. If you need an order of disposition, you need to implement that in
> the application. This is why the  .NET Dispose() pattern is so important.

Sure they do.  In C++ you know that your destructor runs before
those of your fields, so you can usually be sure that most fields of
the object you're destructing are still present (they may not be live,
but they are usually still operational).  This means that object
composition becomes a means of specifying finaliser order: finalising
my application-level transaction means closing the database session,
which means closing any outstanding server resources it is holding.
People typically seem to do this by using the fields of their object,
and things like keepAlive() methods.

I'm not at all saying that this is a good idea - explicit handling of
resources is the only sane thing to do.  I'd probably prefer that
languages didn't even have support for finalisers.  But for the
execution of a finaliser to make any sense, the objects it operates on
need to be usable, and they may not be if they have already been
finalised.

> What needs to be achieved in the end is a set of object reference
> propagation constraints that (transitively) guarantee release. The
> specification of that condition needs to be recursive, in much the way that
> confinement is recursive. If we were to specify this with reference
> counting, the first part of the problem is to guarantee that every
> reference-counted pointer is reachable by the stack, the second part is to
> guarantee that they don't escape (so passing to borrowed pointers OK,
> because those cannot be captured). The third part is the tricky one, which
> is to guarantee that reference-counted pointers never form a cycle.  If we
> could achieve those three conditions in the type system - and the first two
> are straightforward - we're done.

Yes.  There needs to be a way to be explicit about the DAG you've
built in order to get sensible finalisation semantics.  And deep down
I suspect Rob Meijer would be comfortable with that.

>>  RAII is one way, weakref callbacks
>> another,
>
> RAII works fine. If you mean what I suspect by "weakref callbacks", that's
> not fine - the result is a system that is not GC-safe.

Probably not, then!

The GC keeps a weakly-keyed map[0], the value of which is a closure
that will be called once the object is collected.  Because the object
is already collected when the callback is invoked, the callback can't
revive it.  Because the reference to the callback is not weak, it may
reference any other resource that requires finalisation.
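As a concrete sketch of that mechanism, here is roughly what it looks
like in CPython using the stdlib's weakref.finalize (the Session class
and the log list are hypothetical, just to make the effect visible):

```python
import weakref

log = []

class Session:
    """Hypothetical object owning an external resource."""
    pass

s = Session()

# Register a callback that runs only after `s` has been collected.
# weakref.finalize keeps the callback (not the object) strongly
# referenced, and the callback receives no reference to `s`, so it
# cannot revive the dead object.
weakref.finalize(s, log.append, "session finalised")

del s          # in CPython, refcounting collects `s` immediately
print(log)     # -> ['session finalised']
```

Note that the callback's own references are strong, which is exactly
what lets it hold on to other resources that still need finalising.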

You can see how you can use it to ensure resource finalisation order:
an object can't be eligible for finalisation if it is still referred
to by a weakref callback; in particular, a weakref callback can't
maintain a reference to the object it's supposed to be finalising,
because that would be a nearly trivial cycle that prevents disposal.
It requires more thought on behalf of the programmer, which I think is
a good thing[1].  But then, if finalisation of your object requires
you to send the appropriate commands across some socket before you
close it, you just access that socket in the callback; the callback
keeps the socket alive long enough for the finaliser to run.
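The socket case above can be sketched the same way (FakeSocket and
Transaction are hypothetical stand-ins; a real socket would work the
same):

```python
import weakref

class FakeSocket:
    """Hypothetical stand-in for a real socket."""
    def __init__(self):
        self.sent, self.closed = [], False
    def send(self, msg):
        assert not self.closed
        self.sent.append(msg)
    def close(self):
        self.closed = True

class Transaction:
    """Hypothetical object whose finalisation needs the socket."""
    pass

sock = FakeSocket()
tx = Transaction()

# The callback closes over `sock`, so the socket cannot be collected
# before the transaction's finaliser has run: the finalisation
# ordering is expressed purely as a reference.
def on_collect(s=sock):
    s.send("ROLLBACK")   # the commands that must precede close()
    s.close()

weakref.finalize(tx, on_collect)

del tx   # tx collected; callback runs, and only then is sock free
print(sock.sent, sock.closed)   # -> ['ROLLBACK'] True
```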

Of course it's no substitute for resource management using dynamic
contexts, but it is /A/ mechanism for describing resource finalisation
relationships.

[0] well, the implementation in CPython is a field on the object
itself.  Smarter schemes that work at the type level, and hence have
smaller storage requirements, are reasonable too.  I guess.

[1] it's not silently magic.

The need to be explicit is also better for concurrency, and it is
relevant to discuss this when talking about finalisers because
*finalisers introduce concurrency*, even in a single-threaded system.

--
William Leslie

Notice:
Likely much of this email is, by the nature of copyright, covered
under copyright law.  You absolutely may reproduce any part of it in
accordance with the copyright law of the nation you are reading this
in.  Any attempt to deny you those rights would be illegal without
prior contractual agreement.
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev