On Tue, Jan 16, 2018 at 4:55 PM, Andy Wingo <wi...@pobox.com> wrote:

> Thinking more globally, there are some more issues -- one is that
> ideally we need call-site specialization.  A GF could be highly
> polymorphic globally but monomorphic for any given call site.  We need
> a way to specialize.
>

Yes, but I imagine that the gain of polymorphic inline caches over plain
GFs will shrink the more type dispatch can be eliminated from compiled
method bodies (in the monomorphic case, the dispatch is replaced by a
direct call to the IM, i.e. the compiled method). Then again, a
polymorphic inline cache can perhaps be regarded as an anonymous GF, in
which case there isn't really much difference.
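
To make the analogy concrete, here is a toy sketch of a call-site cache
represented as an anonymous dispatcher, structurally much like a GF with
specializers. Nothing here reflects Guile internals; `class-of` is from
GOOPS, and `lookup-and-compile` is an invented placeholder for the slow
path:

```scheme
(use-modules (oop goops))

;; Toy sketch: a per-call-site cache as an anonymous "GF".
;; entries maps an argument class to a compiled specialization (IM).
(define (make-call-site-cache)
  (let ((entries '()))
    (lambda (arg . rest)
      (let* ((type (class-of arg))
             (hit  (assq type entries)))
        (if hit
            (apply (cdr hit) arg rest)            ; fast path: cached IM
            (let ((im (lookup-and-compile type))) ; hypothetical slow path
              (set! entries (acons type im entries))
              (apply im arg rest)))))))
```

The monomorphic case then degenerates to a single-entry alist lookup,
i.e. essentially a direct call, which is the point made above.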

Dynamically typed data structures would be the remaining main source of
type dispatch.
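
As a toy illustration of that: pulling values out of a heterogeneous
container reintroduces dispatch that specialization on GF arguments
cannot remove, because the element type is unknown at compile time:

```scheme
(use-modules (oop goops))

;; The type of each element of xs is only known at run time,
;; so any GF applied to (car xs) etc. must still dispatch.
(define xs (list 1 2.5 'three))
(for-each (lambda (x) (display (class-of x)) (newline)) xs)
```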


> Secondly, it would be nice of course to have speculative optimization,
> including speculative inlining and type specialization not only on GF
> arguments but also arguments to regular calls, types of return values,
> and so on.
>

Yes! I think that dispatch on the return values is interesting.

What I'm now going to write is based on almost zero knowledge of compiler
construction, and I will still have to learn the Guile compiler
infrastructure (where is a good place to start?), so please bear with me.
For the same reason, what I write might be completely obvious and
well-known already.

Imagine that we are compiling the body of a method and we arrive at an
integer addition. At some step of compilation, there has been a conversion
to CPS such that we (at some level) can regard the operation as:

 (+ a b k)

where k is the continuation. This means that k will be called with the
result of the addition:

  (k <sum>)

In a framework where essentially all procedures are GFs, k, too, is a GF.
This allows it to dispatch on its argument, so that it can treat inums,
bignums, floats and doubles differently.
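
As a toy illustration using GOOPS classes (the inum/bignum split is
conceptual here, since Scheme-level code only sees <integer>; the real
split would happen below that, in the compiler):

```scheme
(use-modules (oop goops))

;; The continuation k as a GF dispatching on the value it receives.
(define-method (k (sum <integer>))
  (format #t "exact path: ~a\n" sum))
(define-method (k (sum <real>))
  (format #t "flonum path: ~a\n" sum))

;; A trivial CPS-style addition that hands its result to k.
(define (+-cps a b k) (k (+ a b)))

(+-cps 1 2 k)    ; dispatches to the <integer> method
(+-cps 1.5 2 k)  ; dispatches to the <real> method
```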

Note now that in such a framework, a GF might have only one method (M) but
the instantiated/compiled methods (IMs) can still be many. If k above is
called with an inum, there will be an IM of k which is specialized to
inums. This means that the compiler later can choose operations relevant
for inums inside k.

The "exploded" (in a slightly different sense) code for + above might, in
turn, contain a branch which handles the transition to bignums on "inum
overflow". If k now comes to occur in a branch of the +-code, inlined into
an outer function, where the argument of k is guaranteed to be an inum,
then our GF dispatch elimination will replace k with the k-IM for inums.
So, the *only* branch remaining in the code will be the overflow check in
+. (BTW, I wonder if this inlining/specialization into an outer function
could in some sense rely on type dispatch on the continuation k?)
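
A sketch of what that exploded + might look like after dispatch
elimination (pseudocode in Scheme syntax; `inum-add`, `inum-overflow?`,
`inums->bignum`, `k-im/inum` and `k-im/bignum` are all invented names,
not Guile primitives):

```scheme
;; Pseudocode sketch: the "exploded" + in CPS, with both arms of the
;; continuation already specialized.  The overflow test is the only
;; branch left in the code.
(define (+/inum a b k-im/inum k-im/bignum)
  (let ((sum (inum-add a b)))             ; raw machine addition
    (if (inum-overflow? sum)
        (k-im/bignum (inums->bignum a b)) ; slow path: promote to bignum
        (k-im/inum sum))))                ; fast path: stay in inums
```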

>
> Finally I wonder that if we had the latter, if it matters so much about
> optimizing generic functions in any kind of specific way -- instead you
> could just have a generic optimizer.
>

Yes. I guess I'm mostly using my GFs as a "hook" for my thoughts on this.
The reason I do this is that I can imagine a reasonably simple
implementation where (almost) everything is a GF. :) There, the GFs would,
in some sense, work as data structures for the compiler/optimizer.

>
> Of course the speculative optimizer could work on traces instead of
> methods, and in that case we'd get a lot of this stuff for free... but
> that's a question for farther down the road.  See
> http://scheme2016.snow-fort.org/static/scheme16-paper3.pdf.
>

Yes, this is very nice and exciting work. :)

Best regards,
Mikael
