On Fri, Jul 26, 2013 at 10:01 PM, David Jeske <[email protected]> wrote:

> On Fri, Jul 26, 2013 at 7:56 PM, Jonathan S. Shapiro <[email protected]>wrote:
>
>> ...What I *am* hoping to do is to use compiler magic to reduce the
>> utilization of the GC heap so that GC pressure and/or total RAM
>> requirements are suitably reduced.
>>
>
> ...Understood. Let me rephrase my point. Using this magic is a necessary
> but not sufficient condition to achieve a practical typesafe GC-free
> capability.
>

Agreed. So let me say it one more time: That. Is. Not. A. BitC. Goal.


> IMHO, in order to achieve a non-trivial program in which no heap-based
> allocation can occur, (a) module exports and run-time linking will need to
> make verifiable promises about the absence of heap allocation so the
> compiler/runtime can know this, and (b) the language will need an explicit
> mechanism to disallow statements which cause heap allocation, so
> programmers can have tools to prevent inadvertently breaking the promises
> they intended to keep.
>

This is exactly what the GCAlloc effect accomplishes. A procedure whose
effects do *not* include GCAlloc does not allocate storage on the GC heap.
It wasn't called GCAlloc before, because in the earlier incarnation I
wasn't thinking about first-class regions.
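
To make that concrete, here is a minimal sketch of the idea. It's in
Haskell rather than BitC, and every name in it (Eff, Alloc, allocOnHeap,
andThen) is invented for illustration, not actual BitC surface syntax:
the type of a computation records whether it may allocate on the GC heap,
and sequencing propagates that effect to callers, so a caller whose type
omits GCAlloc verifiably performs no GC-heap allocation.

    {-# LANGUAGE DataKinds, KindSignatures, TypeFamilies #-}
    module GCAllocSketch where

    -- A type-level allocation effect.
    data Alloc = GCAlloc | NoAlloc

    -- A computation tagged (via a phantom type) with its effect.
    newtype Eff (e :: Alloc) a = Eff a

    -- The only way to introduce the GCAlloc effect in this sketch.
    allocOnHeap :: a -> Eff 'GCAlloc a
    allocOnHeap = Eff

    -- Pure computation: carries no allocation effect.
    addPure :: Int -> Int -> Eff 'NoAlloc Int
    addPure x y = Eff (x + y)

    -- Sequencing joins effects: a pipeline containing any GCAlloc
    -- step is itself GCAlloc, so the effect propagates to callers.
    type family Join (a :: Alloc) (b :: Alloc) :: Alloc where
      Join 'NoAlloc 'NoAlloc = 'NoAlloc
      Join a        b        = 'GCAlloc

    andThen :: Eff e1 a -> (a -> Eff e2 b) -> Eff (Join e1 e2) b
    andThen (Eff x) f = let Eff y = f x in Eff y

The point is only that the "may allocate" bit rides along in the type,
which is what lets a caller check the promise at compile time.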

It's fairly easy to restrict a library design so that it allocates only
in named regions, but doing so is surprisingly useless without GC. In most
code, all you will accomplish is to create regions full of garbage whose
collection is deferred. That may be okay, in the sense that your overall
GC effort becomes much better focused, but I'm not aware of any good data
on how regions and GC interact in the real world.
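
As a toy illustration of that deferred-collection behavior (again in
Haskell, with invented names: Region, allocIn, freeRegion), here is a
region that can only reclaim wholesale, so values that die early stay
pinned as garbage until the whole region goes away:

    import Data.IORef

    -- A toy region: a flat store of everything allocated into it.
    newtype Region a = Region (IORef [a])

    newRegion :: IO (Region a)
    newRegion = Region <$> newIORef []

    -- Allocation only ever appends. A value the program stops using
    -- is still pinned by the region: deferred garbage.
    allocIn :: Region a -> a -> IO a
    allocIn (Region ref) x = modifyIORef' ref (x :) >> pure x

    -- The sole reclamation point: the whole region at once.
    freeRegion :: Region a -> IO ()
    freeRegion (Region ref) = writeIORef ref []

    main :: IO ()
    main = do
      r <- newRegion
      -- Ten values go in; even if only the last one is still
      -- reachable from the program, all ten stay live until
      -- freeRegion runs.
      mapM_ (allocIn r) [1 .. 10 :: Int]
      freeRegion r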


Jonathan