On Fri, Jul 26, 2013 at 12:07 PM, Jonathan S. Shapiro <[email protected]> wrote:

> The point I was trying to make, though, is that accurate GC *cannot* be
> implemented without compiler support. There's an old paper by Boehm and
> somebody about this.
>

110% with you here. Compiler and runtime support is necessary for precise
and reliable GC, as well as for the nice modern techniques: compacting,
C4-style hardware-accelerated barriers, and so on.
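
A toy illustration of why (sketched in Rust purely for concreteness; the
code and names are mine, not from any of the papers): once values are on
the stack they are just machine words, so only the compiler, which knows
the static types, can tell a collector which words are really pointers.

    // Toy illustration: at runtime a stack slot is just a machine word, so a
    // collector scanning frames on its own must guess which words are pointers.
    // Only the compiler knows the static types, and only it can emit the stack
    // maps and barriers that precise, compacting, or concurrent collectors
    // depend on.
    fn main() {
        let x: u64 = 42;
        let p: &u64 = &x;                         // a genuine pointer the GC must trace
        let n: usize = p as *const u64 as usize;  // the same bits, but "just an integer"

        // Without type information, `p` and `n` are indistinguishable words on
        // the stack: a conservative scanner has to treat both as possible roots,
        // and a moving collector cannot safely update either one.
        let _ = (p, n);
    }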


>> I see it as a bit of a quantization problem. It's certainly possible to
>> have hybrid systems, it's just that there is so much less human effort
>> expended building an entire ecosystem of libraries as either a non-GC or
>> full-GC system that half-GC is not a stable state for a single language IMO.
>
> If inference can use the right pointer type enough of the time, and we only
> have to hand-annotate a smaller subset, I think that could make a big
> difference.


I don't see how this changes the practical problem. If library authors are
free to perform GC-dependent operations, then they will, and a non-GC user
will not only have to avoid those libraries, they will also have to avoid
every library that transitively depends on them.

Of course we could explicitly create "GC okay" and "no GC" sets of
libraries within a single language. I'm arguing that we have effectively
done that already by making "GC okay" and "no GC" languages -- and with good
reason. The construction effort that goes into a language's libraries so
far outweighs the effort spent on its compiler and runtime that re-using
the language across those two cases barely matters.

One could use C# and the CLR as the implementation language and runtime for
such a system, writing one version of each library that sticks to typesafe
stack allocation and another version that uses unsafe manual memory
management. However, I argue the chaos would crush it: there is no
mechanism for telling the compiler to reject memory modes that a particular
implementation does not accept, nor any mechanism in the CLR for promising
consumers that you never create heap objects.


> There has been good work on region inference. An interesting pair of
> questions is:
>
>   1. Under what conditions should we GC a region?
>   2. How often can a compiler determine [statically] that these conditions
> are not met,
>      and GC is therefore unnecessary on that region.
>

My interpretation of this is that you're trying to apply automatic compiler
magic to allow the same code to be used in both GC and non-GC modes. I
don't see how this is practical, useful, or possible, since the way we
design APIs and implementations is radically different depending on whether
we are avoiding heap state or managing it manually. On my uninformed and
nearly-ignorant view, this is the wall Rust ran into -- the one that finally
caused them to remove GC.
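
For concreteness, my (possibly naive) mental model of a region is the usual
arena: everything allocated in it is reclaimed together when the region
dies, so the only interesting GC question is whether anything inside can
become garbage earlier, or escape. A strawman sketch, in Rust, with the
Region type and its methods entirely my own invention:

    use std::any::Any;

    /// Strawman region/arena: objects are individually allocated but reclaimed
    /// together, in one shot, when the region goes out of scope.
    struct Region {
        objects: Vec<Box<dyn Any>>,
    }

    impl Region {
        fn new() -> Self {
            Region { objects: Vec::new() }
        }

        /// Allocate `value` into the region. The borrow checker ties the
        /// returned reference to the region itself, so nothing can escape it.
        fn alloc<T: 'static>(&mut self, value: T) -> &T {
            self.objects.push(Box::new(value));
            self.objects.last().unwrap().downcast_ref::<T>().unwrap()
        }
    }

    fn main() {
        let mut region = Region::new();
        let xs = region.alloc([1u64, 2, 3]);
        println!("sum = {}", xs.iter().sum::<u64>());
        // `region` drops here: everything is freed at once, with no tracing,
        // because nothing outlived the region.
    }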

How about a different approach... How about a language/runtime which
simply makes the allocation pattern and requirements of a library a
well-known and constrainable part of software construction? This would let
us write libraries which are GC-free, where GC heap allocation is a
compiler error. We must also be able to promise to consumers of our library
that we are GC-allocation-free, since we need that promise to enforce that
any imported library calls are also GC-allocation-free. Of course,
GC-capable libraries could make use of GC-free libraries without trouble.
I also suspect this is the path Rust thinks it is headed down, but like
you, I don't believe satisfactory GC can be implemented as a library.
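
To make the shape of that promise concrete, here is a small sketch in Rust
(purely illustrative; the Window type and its methods are my own
invention). A library marked `#![no_std]` that never imports `alloc` has no
Box/Vec/String in scope at all, so heap allocation inside it is a compile
error rather than a code-review convention. It is only an analogue of the
stronger, transitively checked guarantee described above, but it shows what
"allocation behavior as a constrainable property" could feel like to a
library author:

    // lib.rs of an allocation-free library crate.
    // `#![no_std]` removes the implicit standard-library dependency; since we
    // also never import the `alloc` crate, the heap-allocating types simply do
    // not exist here, and writing `Box::new(..)` fails to compile.
    #![no_std]

    /// Fixed-capacity accumulator the caller can place on the stack; when it
    /// is full we report failure instead of growing, because growing would
    /// have to allocate.
    pub struct Window<const N: usize> {
        buf: [u64; N],
        len: usize,
    }

    impl<const N: usize> Window<N> {
        pub const fn new() -> Self {
            Window { buf: [0; N], len: 0 }
        }

        /// Push a sample, returning false (rather than reallocating) when full.
        pub fn push(&mut self, v: u64) -> bool {
            if self.len == N {
                return false;
            }
            self.buf[self.len] = v;
            self.len += 1;
            true
        }

        pub fn sum(&self) -> u64 {
            self.buf[..self.len].iter().sum()
        }
    }

    // A consumer, GC'd or not, can use it without incurring any allocation:
    //
    //     let mut w = Window::<8>::new();
    //     w.push(40);
    //     w.push(2);
    //     assert_eq!(w.sum(), 42);

As in the proposal above, the constraint only flows one way: a GC-capable
consumer can use such a library freely, while the library itself cannot
smuggle in heap allocation.
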
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
