On Mon, Aug 5, 2013 at 3:03 PM, David Jeske <dav...@gmail.com> wrote:

> In modular systems programming, I can see the validity of escape inference
> for stack/region optimization only if the functions are fully internal or
> this inference is performed as part of the run-time JIT.
>
> However, if we wish no-escape to be a trustable property of a module
> export, it needs to be explicitly committed to by the module author with
> some form of borrowed pointer argument (such as with CLR "ref", or Rust
> "&"). Likewise, a module export has no visibility into the escaping of
> return values, since its callers are unknown when the module is compiled
> and packaged.
>

That's over-constraining. Even when we can't do escape analysis on
everything, we can still do escape analysis on many temporaries. My take
is: do the analysis conservatively. Take advantage of the information you
*do* have, and make conservative assumptions where you don't have
information.
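
For concreteness, a small C# sketch (names and sizes are mine): the
temporary in Sum provably never leaves its frame, so a sufficiently smart
JIT could stack-allocate it, while the temporary in Keep is stored into a
field and a conservative analysis must leave it on the heap.

class EscapeSketch
{
    static int[] retained;

    static int Sum()
    {
        int[] tmp = new int[16];   // never leaves this frame: a stack allocation would be safe
        for (int i = 0; i < tmp.Length; i++) tmp[i] = i;
        int s = 0;
        foreach (int x in tmp) s += x;
        return s;
    }

    static void Keep()
    {
        int[] tmp = new int[16];   // escapes through the static field: must stay heap-allocated
        retained = tmp;
    }
}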

I agree with you that contracts about escape need to be part of the
interfaces when the code of the procedure isn't available to be analyzed.
Escape annotations become part of the arrow (function) type.

Also, this is why we generally want escape annotations to take a negative
form (e.g. NoGC rather than GC). When escape annotations take a negative
form, they can be dropped. That loses information, but not correctness.
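
A sketch of why the negative form is droppable. The [NoGC] attribute below
is invented purely for illustration; it is not a real CLR or BitC
construct, and in BitC the annotation would live in the function type
rather than in an attribute.

using System;

// Invented marker attribute, for illustration only; a real escape annotation
// would be part of the function's type, not a library attribute.
[AttributeUsage(AttributeTargets.Parameter)]
class NoGCAttribute : Attribute { }

class AnnotationSketch
{
    // Annotated: the callee promises the argument does not escape, so callers
    // could pass stack-allocated storage.
    static int Sum([NoGC] int[] xs)
    {
        int s = 0;
        foreach (int x in xs) s += x;
        return s;
    }

    // Annotation dropped: callers must assume the argument may escape and so
    // heap-allocate. The optimization is lost, but nothing becomes unsafe.
    static int SumUnannotated(int[] xs)
    {
        int s = 0;
        foreach (int x in xs) s += x;
        return s;
    }
}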


>
> On Mon, Aug 5, 2013 at 6:40 AM, Jonathan S. Shapiro <s...@eros-os.org> wrote:
>
>> What I think you mean to be saying is that there is no way to
>> *explicitly* stack-allocate an unboxed type *in the CLR*. If it were
>> designed to do so, the JIT engine is certainly free to perform the
>> region analysis that is necessary to do what you are describing.
>>
>
> Clarification... In CLR you can stack-allocate an unboxed scalar or struct
> by declaring it as a stack variable. What you can't do is (safely)
> stack-allocate unboxed arrays or dynamically stack-allocate unboxed types.
> This requires unsafe/stackalloc.
>

Yes. Though the JIT subsystem is entitled to make optimizations that you
cannot prescribe at the source level.
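
Concretely (the Bar struct and method names below are mine): an unboxed
struct local sits in a stack slot in safe code, while a dynamically sized
stack array of unboxed elements needs unsafe/stackalloc.

struct Bar { public int v; }

class StackSketch
{
    static int Scalar()
    {
        Bar b;                         // unboxed struct in a stack slot: fine in safe code
        b.v = 1;
        return b.v;
    }

    unsafe static int Dynamic(int n)
    {
        Bar* buf = stackalloc Bar[n];  // a dynamically sized stack array needs unsafe/stackalloc
        buf[0].v = 1;
        return buf[0].v;
    }
}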


>
>
>> Curiously, doing that wouldn't require *any* changes to the GC (except
>> perhaps removing a sanity check), *even* if no borrowed pointer type is
>> introduced.
>>
>
> Isn't CLR "ref" already a borrowed pointer type?
>

It is not. "ref" is merely "pass by reference". It makes no contract about
escape. Three obvious counterexamples:

static T escape1(ref T arg) { return arg; }               // the referent escapes via the return value
static void escape2(ref X arg, T t) { arg.t = t; }        // t escapes into the referent of arg
static void escape3(ref X arg, ref T t) { arg.t = t; }    // t's referent escapes into the referent of arg

The binding of a by-reference parameter can't change, but that doesn't
really help, because every use-occurrence of the argument is implicitly
dereferenced.


> It seems like they could have abstracted "ref" onto local stack variables
> and allowed the following code safely....
>
> struct Bar {}
> ref Bar[] foo = stackalloc Bar[10];
>

This could indeed have been done, and we did exactly this in BitC. There is
an additional requirement, however: ref types may not appear in function
return types or OUT parameter types.
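
To illustrate the hazard that restriction closes off, here is a sketch
using unsafe C# pointers, since there is no safe way to express a ref
return in the CLR under discussion (names are mine):

class DanglingSketch
{
    // If a ref could appear in a return type, a reference into a stack frame
    // could outlive that frame. Unsafe pointers let us write the analogous bug:
    unsafe static int* Dangle()
    {
        int local = 42;
        return &local;      // compiles only under /unsafe; dangles as soon as Dangle returns
    }

    unsafe static void Use()
    {
        int* p = Dangle();  // p now points into a dead stack frame; *p is undefined
    }
}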


> However, this would quickly push us down a Rust-like path, where we'd want
> authors to decide between "ref" and "non-ref" pointer types intelligently.
> Seems complicated.
>

It could, but it turns out that region analysis can transparently infer
nearly all of the cases where this annotation is safe. The main reason to
have the annotation in the language is to make sure that the result of an
inference pass can be expressed within the language.


shap
_______________________________________________
bitc-dev mailing list
bitc-dev@coyotos.org
http://www.coyotos.org/mailman/listinfo/bitc-dev
