Let me see if I understand what you're proposing.

1. You have certain data structures which would be declared to "borrow
pointers", for example ComplexStructureB.

2. Other data structures simply own the objects they point to, like ListA.

3. However, ComplexStructureB would also probably own objects (which
could be borrowed by yet other data structures, potentially involving
cycles), but we're mainly interested in how to deal with the borrowed
pointers.

4. So for a borrows-pointers data type, the compiler will try to
generate, and prove correct, a releaseObject operation, which takes a
pointer "foo" and mutates the structure in place so that it no longer
references foo. (There's a rough sketch of what I'm picturing after
this list.)

(How does it know how this should be done? Is it always just nulling
the places where the pointer occurs? That sounds kind of like weak
references.)

(Or maybe the programmer writes releaseObject. But in that case,
automatically proving it correct is probably too hard.)

5. Any borrowed pointer appears on a stack frame (as in Rust) or (now)
in some ComplexStructure rooted right under a stack frame. So when we
deallocate something like ListA, we can easily find all the
ComplexStructures on which to call releaseObject.

(Would you call releaseObject for each borrowed object, or just once
for the whole data structure, with other objects it owns being
implicitly released too? Either way, it looks like, in general, you're
doing a nested traversal over both data structures, which would be
rather slow.)

6. There could be a subset language where (4) always works.
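
To make sure I'm picturing (4) and (5) correctly, here is a rough
Rust-flavored sketch. The names (BorrowingIndex, release_object,
drop_owner) are mine, not yours, and I'm assuming the "remove every
occurrence" reading of release:

// Hypothetical "borrows-pointers" container: it keeps raw pointers to
// objects owned elsewhere and promises to drop every occurrence of a
// pointer when asked.
struct BorrowingIndex<T> {
    entries: Vec<*const T>, // borrowed, never owned
}

impl<T> BorrowingIndex<T> {
    // The operation from (4): after this returns, `self` holds no
    // reference to `foo`.
    fn release_object(&mut self, foo: *const T) {
        self.entries.retain(|&p| p != foo);
    }
}

// The deallocation side from (5): before freeing what an owner like
// ListA holds, walk every borrower reachable from the stack and
// release each owned object -- the nested traversal I worried about
// above.
fn drop_owner<T>(owned: Vec<Box<T>>, borrowers: &mut [BorrowingIndex<T>]) {
    for obj in &owned {
        let p: *const T = &**obj;
        for b in borrowers.iter_mut() {
            b.release_object(p);
        }
    }
    drop(owned); // safe now: no borrower still points into these objects
}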

On Sat, Aug 2, 2014 at 1:07 PM, David Jeske <[email protected]> wrote:
> This can be thought of as a mechanism to extend the lifetime semantics of
> borrowed pointers. Rust borrowed pointers are only allowed while in a
> nested-stack-frame. We can extend the lifetime of this borrowing if the
> container which wishes to borrow them agrees to honor a "release on request"
> event api releaseObject(foo) -- and that we can prove that releaseObject()
> code is implemented correctly.

Remind me how Rust borrowed pointers work; you can't deallocate the
structures you borrowed from until the nested stack frames are popped?
If so, how would this interact with your proposal, where the borrowed
pointers are still there when the stuff they borrowed goes away?
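
If I'm remembering right, the rejected shape is something like this
(deliberately does not compile):

fn main() {
    let list = vec![1, 2, 3];
    let first = &list[0];  // borrow into `list`
    drop(list);            // error: cannot move out of `list`
                           // because it is borrowed
    println!("{}", first); // borrow still live here
}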

> A) IF we can *prove* that after the release-event is called,
> ComplexStructureB no longer holds any references to the object, then I think
> we can have typesafe memorysafe non-tracing precise heap collection for that
> container.

Well, you have to wonder whether it should really count as
non-tracing, if it has to traverse ComplexStructureB completely. But I
do see that there would be cases where the programmer could point out
a smarter way.

This sort of "special-purpose garbage collection" (but using weak
references in your case) is something that separation logic is good
for reasoning about. When combined with the ability to depend on
runtime metadata, separation logic is actually an extremely powerful
way to reason about manual memory management.

> B) However, IF we can prove that it's *unprovable* whether or not a
> release-event removes all references to an object, then I think we've proven
> it's not possible have typesafe memorysafe non-tracing precise heap
> collection for that container

Proving something unprovable requires a counter-model. And you didn't
say which logic it should be unprovable in. Did you just mean "prove
that releaseObject is unimplementable"?

> "A" is certainly possible for some container data-structures. Therefore,
> this becomes an experiment in finding the limits of provable container
> manipulations. Can a language be written which allows operations on any
> container structure to be proven, or are there certain data-structures which
> can't be proven?

Well, you could consider the whole heap to be a container, and
implement releaseObject with a tracing garbage collector. ;)

I'm guessing you want to know which containers tend to have efficient
implementations of releaseObject. I don't know, but it seems like
there ought to be a lot, otherwise manual memory management would be
infeasible more often than it actually is.
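
For example (my own strawman, not something you proposed): if the
borrowing container indexes its entries by the borrowed object's
address, releaseObject is a hash removal instead of a traversal:

use std::collections::HashMap;

// Strawman borrower keyed by address: releasing one object is an
// expected O(1) hash removal, not a walk over the whole structure.
struct AddrIndex<T> {
    by_addr: HashMap<usize, *const T>,
}

impl<T> AddrIndex<T> {
    fn new() -> Self {
        AddrIndex { by_addr: HashMap::new() }
    }

    fn note_borrow(&mut self, borrowed: &T) {
        let p = borrowed as *const T;
        self.by_addr.insert(p as usize, p);
    }

    fn release_object(&mut self, foo: *const T) {
        self.by_addr.remove(&(foo as usize));
    }
}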

> Before I deep end any more on this.... Is there any related research I
> should read? Does this sound interesting? totally bunk? Been tried before?
> Original? Any feedback is appreciated.

I haven't seen anything related implementation-wise. But that's not
saying much, 'cause I don't read about memory management. Again,
separation logic seems like the tool to formalize the proofs. Its
assertion language is a form of bunched implications (BI) logic.
Lots of papers about both.

Briefly, separation logic lets you reason about resource management
more easily because it provides:
- A "separating conjunction" connective which says two things are true
using disjoint sets of resources.
- The related "separating implication" connective which lets you
express sharing, among other things.
- And also all the usual classical (or intuitionistic) connectives,
which don't affect the implicit resource set.
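
For a concrete flavor (my own informal spec, not lifted from any
paper), the contract for releaseObject might be a triple like:

\{ index(B, S \cup \{foo\}) \ast obj(foo) \}
    releaseObject(foo)
\{ index(B, S) \ast obj(foo) \}

(with foo \notin S), where index(B, S) is the container's
representation predicate tracking the set S of addresses it still
references, and \ast is the separating conjunction keeping the
container's cells disjoint from foo's, so freeing foo afterwards
can't invalidate the container.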