On 09/05/2014 11:41 AM, Ali Ebrahimi wrote:
Hi,


On Fri, Sep 5, 2014 at 1:47 PM, Remi Forax <fo...@univ-mlv.fr> wrote:

    I think that in terms of concepts there is a kind of convergence
    between the pairs Graal/Truffle and c2/java.lang.invoke.

    The strength of Graal is its ability to do partial evaluation
    directed by user code or by annotations. In my view, Hotspot is
    moving in that direction too: it already has special annotations
    like @Stable or @ForceInline, and ultimately the method handle
    implementation will use exactly the same tricks that Truffle uses
    (like read/write access to Hotspot internal data structures or a
    magic deoptimization method [1], for example).
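
    As a rough illustration of that kind of constant folding (a sketch in
    plain user code, not the privileged JDK internals; the class name and
    the example method are invented for the illustration):

        import java.lang.invoke.MethodHandle;
        import java.lang.invoke.MethodHandles;
        import java.lang.invoke.MethodType;

        public class ConstantFoldExample {
          // A method handle held in a static final field is a constant for
          // the JIT, so c2 can inline straight through the invokeExact call
          // below -- the same effect @Stable gives to fields and array
          // elements inside the method handle implementation itself.
          static final MethodHandle CONCAT;
          static {
            try {
              CONCAT = MethodHandles.lookup().findVirtual(
                  String.class, "concat",
                  MethodType.methodType(String.class, String.class));
            } catch (ReflectiveOperationException e) {
              throw new AssertionError(e);
            }
          }

          static String greet(String name) throws Throwable {
            return (String) CONCAT.invokeExact("Hello, ", name);
          }

          public static void main(String[] args) throws Throwable {
            System.out.println(greet("world"));
          }
        }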

    For me, the major difference is that there is no clear way to do
    type specialization in the invokedynamic world: each runtime
    implementation has to come up with its own solution (or not!),
    while Truffle has TruffleSOM (even if I'm not a fan of the
    TruffleSOM approach, it exists). One way to close the gap is to
    use a library on top of ASM that does type specialization (and
    profiling) and delegates the semantics of the language to
    invokedynamic (that was the main idea of the talk I submitted to
    the JVM Summit this year).
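
    A minimal sketch of what such delegation to invokedynamic looks like
    (LanguageBootstrap and the "addInts" stand-in operation are invented
    for the example; a real runtime would resolve its own language-level
    semantics in the bootstrap method):

        import java.lang.invoke.CallSite;
        import java.lang.invoke.ConstantCallSite;
        import java.lang.invoke.MethodHandle;
        import java.lang.invoke.MethodHandles;
        import java.lang.invoke.MethodType;

        public class LanguageBootstrap {
          // Bootstrap method referenced by the invokedynamic instructions
          // that the ASM-based backend emits; it decides, once per call
          // site, which method handle implements the language operation.
          public static CallSite bootstrap(MethodHandles.Lookup lookup,
              String name, MethodType type) throws ReflectiveOperationException {
            // Stand-in semantics: bind every call site to integer addition.
            MethodHandle target = lookup.findStatic(LanguageBootstrap.class,
                "addInts",
                MethodType.methodType(int.class, int.class, int.class));
            return new ConstantCallSite(target.asType(type));
          }

          static int addInts(int a, int b) {
            return a + b;
          }
        }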

    I think it's not a good idea to let people use tools that do
    partial evaluation directly. The problem with partial evaluation
    is that it's very easy to introduce major regressions because one
    thing is out of line with respect to all the others; it's too
    magic. But I believe it's a good way to solve the problem we
    currently have with the implementation of method handles in
    Hotspot.

    I really would like to have an API that relieves runtime
    developers from having to take care of type specialization.
    That's why I think that type specialization should not be done
    outside the VM, as currently proposed, but inside the VM.
    To be crystal clear, runtimes should generate code that, instead
    of using iload, lload, dload, etc., uses only one bytecode op,
    vload (v for virtual, not v for value), and we should provide a
    way to instantiate different methods from a generic bytecode by
    providing type information (the signature of the specialized
    method plus the return type of each call is enough), a kind of
    defineAnonymousClass on steroids.
    Given that instructions like vload are needed to support value
    types anyway, the idea is to extend them to support primitive
    types too and, later, to allow type information to be specified
    alongside the bytecodes.
    This makes it possible to write the code once and create multiple
    specialized versions of it, without either generating a bunch of
    bytecode (like we do now) or relying on partial evaluation + code
    generation (like TruffleSOM does).
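
    To make the shape of that API concrete, here is a purely hypothetical
    sketch (nothing called Specializer exists anywhere; the parameter list
    just mirrors the type information described above):

        import java.lang.invoke.MethodType;

        // Hypothetical "defineAnonymousClass on steroids": takes generic
        // bytecode that uses only vload-style opcodes and materializes one
        // specialized class per set of type arguments.
        interface Specializer {
          // genericClassBytes: bytecode using only the generic vload ops.
          // specializedSignature: signature of the specialized method.
          // callReturnTypes: return type of each call site, in order.
          Class<?> specialize(Class<?> hostClass,
                              byte[] genericClassBytes,
                              MethodType specializedSignature,
                              Class<?>[] callReturnTypes);
        }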

How does this relate to Project Valhalla's Type Specialization effort?

The code pushed in the valhalla workspace uses bytecode generation (with ASM) to do the type specialization. I propose to keep the same bytecode but to do the specialization either in the bytecode parser (in that case the code can run on an unmodified interpreter) or at JIT time (if there is either no interpreter, or a modified interpreter that uses an indirection mechanism or a double stack representation).
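
For contrast, a rough sketch of what the ASM-generation approach looks like today: one class is generated per specialization, with the element type baked into the opcodes (the class below is invented for the example and is not the valhalla specializer itself):

    import org.objectweb.asm.ClassWriter;
    import org.objectweb.asm.MethodVisitor;
    import org.objectweb.asm.Opcodes;
    import org.objectweb.asm.Type;

    public class AsmSpecializer {
      // Generates a tiny class containing
      //   static T identity(T value) { return value; }
      // specialized for the given primitive type: the load and return
      // opcodes are chosen per type, which is exactly the per-type
      // bytecode duplication a single generic vload-style op would avoid.
      public static byte[] generate(String className, Type type) {
        ClassWriter cw = new ClassWriter(
            ClassWriter.COMPUTE_MAXS | ClassWriter.COMPUTE_FRAMES);
        cw.visit(Opcodes.V1_7, Opcodes.ACC_PUBLIC, className, null,
            "java/lang/Object", null);

        MethodVisitor mv = cw.visitMethod(
            Opcodes.ACC_PUBLIC | Opcodes.ACC_STATIC, "identity",
            Type.getMethodDescriptor(type, type), null, null);
        mv.visitCode();
        mv.visitVarInsn(type.getOpcode(Opcodes.ILOAD), 0);  // iload / lload / fload / dload
        mv.visitInsn(type.getOpcode(Opcodes.IRETURN));      // ireturn / lreturn / ...
        mv.visitMaxs(-1, -1);  // recomputed because of COMPUTE_MAXS/FRAMES
        mv.visitEnd();

        cw.visitEnd();
        return cw.toByteArray();
      }
    }

Calling AsmSpecializer.generate("IntIdentity", Type.INT_TYPE) and generate("LongIdentity", Type.LONG_TYPE) produces two distinct class files; with the vload proposal the VM would instead derive both specializations from a single generic blob of bytecode.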


Ali Ebrahimi

Rémi

