On 05.10.2016 21:45, Charles Oliver Nutter wrote:
On Wed, Oct 5, 2016 at 1:36 PM, Jochen Theodorou <blackd...@gmx.org> wrote:

    If I hear Remi saying volatile read... then it does not actually
    sound free to me. In my experience volatile reads still present
    inlining barriers. But if Remi and all of you tell me it is still
    basically free, then I will not look too closely at the volatile ;)

The volatile read is only used in the interpreter.

ah... I see... nice. I get the feeling Remi actually already said this...
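
For the archive, here is a minimal sketch of such a SwitchPoint guard in
plain Java (made-up names, not JRuby's actual code); as I understand it,
the guard is designed to fold away in JIT-compiled code, so the volatile
read only shows up while interpreting:

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.SwitchPoint;

public class SwitchPointSketch {
    static final MethodHandles.Lookup LOOKUP = MethodHandles.lookup();

    public static void main(String[] args) throws Throwable {
        // hypothetical fast path (cached target) and slow path (relink)
        MethodHandle fast = LOOKUP.findStatic(SwitchPointSketch.class,
                "cachedTarget", MethodType.methodType(String.class));
        MethodHandle slow = LOOKUP.findStatic(SwitchPointSketch.class,
                "lookupAgain", MethodType.methodType(String.class));

        SwitchPoint sp = new SwitchPoint();
        // guarded handle: calls 'fast' until the SwitchPoint is
        // invalidated, afterwards it calls 'slow'
        MethodHandle guarded = sp.guardWithTest(fast, slow);

        System.out.println(guarded.invoke());                 // cached
        SwitchPoint.invalidateAll(new SwitchPoint[] { sp });
        System.out.println(guarded.invoke());                 // slow path
    }

    static String cachedTarget() { return "cached"; }
    static String lookupAgain()  { return "slow path"; }
}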

    In Groovy we use SwitchPoint as well, but only one for the whole
    meta class system... that could clearly be improved, it seems.
    Having a SwitchPoint per method is actually a very interesting
    approach I would not have considered before, since it means creating
    a ton of SwitchPoint objects. I am not sure it works in practice for
    me, since it is difficult to make a SwitchPoint for a method that
    does not exist in the super class but may come into existence later
    on - still, it seems I should be considering this.
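
(To make the per-method idea above concrete, a rough sketch of a registry
keyed by class and method name; all names are hypothetical, this is not
how Groovy or JRuby actually organize it. Creating the SwitchPoint lazily
also covers methods that do not exist yet but may be added later.)

import java.lang.invoke.SwitchPoint;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical registry: one SwitchPoint per (class, method name), so
// that changing a single method only invalidates the call sites bound to
// it, instead of one global SwitchPoint for the whole meta class system.
public class MethodSwitchPoints {
    private final Map<String, SwitchPoint> points = new ConcurrentHashMap<>();

    public SwitchPoint forMethod(Class<?> owner, String name) {
        // created lazily, even for a method that does not exist yet
        // but may come into existence later
        return points.computeIfAbsent(owner.getName() + "#" + name,
                key -> new SwitchPoint());
    }

    public void invalidate(Class<?> owner, String name) {
        SwitchPoint old = points.remove(owner.getName() + "#" + name);
        if (old != null) {
            // call sites guarded by this SwitchPoint fall back and relink
            SwitchPoint.invalidateAll(new SwitchPoint[] { old });
        }
    }
}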

I suspect Groovy developers are also less likely to modify classes at
runtime? In Ruby, it's not uncommon to keep creating new classes or
modifying existing ones at runtime, though it is generally discouraged
(all runtimes suffer).

It depends a bit on the coding style how often it is done, but I think the majority barely changes classes at runtime. Compared to Ruby it probably happens a lot less.

We have a construct that dynamically adds methods to multiple classes with limited thread visibility and lifetime (Categories), but those are not actually realized as meta class changes. Creating a new class can happen at any time, but classes tend not to be built up incrementally; they are usually declared with all the methods you want already in them.
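
Not Groovy's actual implementation, just a Java sketch of the
thread-scoped overlay idea behind Categories (all names made up): the
extra methods live in a per-thread stack of maps that is consulted before
the unchanged meta class, which is why entering and leaving a Category is
not a meta class change.

import java.lang.invoke.MethodHandle;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;

// Hypothetical sketch of a thread-local overlay of extra methods.
public class CategoryOverlay {
    private static final ThreadLocal<Deque<Map<String, MethodHandle>>> SCOPES =
            ThreadLocal.withInitial(ArrayDeque::new);

    public static void push(Map<String, MethodHandle> extraMethods) {
        SCOPES.get().push(extraMethods);   // enter a category scope
    }

    public static void pop() {
        SCOPES.get().pop();                // leave it again
    }

    // called by method lookup before falling back to the meta class;
    // innermost (most recently pushed) scope wins
    public static MethodHandle find(String name) {
        for (Map<String, MethodHandle> scope : SCOPES.get()) {
            MethodHandle mh = scope.get(name);
            if (mh != null) return mh;
        }
        return null; // not found: continue with the normal meta class lookup
    }
}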

    cold performance is a consideration for me as well though. The heavy
    creation time of MethodHandles is one of the reasons we do not use
    invokedynamic as much as we could... especially considering that
    creating a new cache entry via runtime class generation and still
    invoking the method via reflection is actually faster than producing
    one of our complex method handles right now.

Creating a new cache entry via class generation? Can you elaborate on
that? JRuby has a non-indy mode, but it doesn't do any code generation
per call site.

Well, the code generation is optional; otherwise we use reflection in that mode. We have used the technique since, I think, 2008. Basically you have an interface with a call(Object[]) method, we produce an implementation of it at runtime and then call that. We use MagicAccessorImpl to avoid bytecode verification... well, if it exists and is accessible; I am not sure that is still the case in JDK 9, though.
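
Roughly the shape of it, written out as Java source for readability
(names are made up; the real generator emits bytecode directly, and the
MagicAccessorImpl superclass is only used where it is present and
accessible):

// Sketch of the runtime-generated call path (hypothetical names).
public class GeneratedCallSketch {

    // the small interface the cache stores and invokes
    interface CachedCall {
        Object call(Object receiver, Object[] args);
    }

    // example target class (stands in for user code)
    static class Foo {
        public String bar(String s, int n) { return s + ":" + n; }
    }

    // roughly what the generated class looks like: cast and forward, so
    // repeated calls avoid java.lang.reflect.Method.invoke; the real
    // generated class may extend sun.reflect.MagicAccessorImpl so it is
    // not verified, where that class is still usable
    static class Foo_bar_Call implements CachedCall {
        public Object call(Object receiver, Object[] args) {
            return ((Foo) receiver).bar((String) args[0], (Integer) args[1]);
        }
    }

    public static void main(String[] args) {
        CachedCall cached = new Foo_bar_Call();
        System.out.println(cached.call(new Foo(), new Object[] { "x", 3 })); // x:3
    }
}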

[...]
Ahh, so when you invalidate, you only invalidate one class, but every
call site would have a SwitchPoint for the target class and all of its
superclasses. That will be more problematic for cold performance than
JRuby's way, but less overhead when invalidating. I'm not sure which
trade-off is better.

I will have to test that out in the future.
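
A sketch of what a call site guarded by one SwitchPoint per class in the
receiver's hierarchy could look like, with made-up names:

import java.lang.invoke.MethodHandle;
import java.lang.invoke.SwitchPoint;
import java.util.List;

// Hypothetical helper: guard a cached target with one SwitchPoint per
// class in the receiver's hierarchy, so invalidating any of those classes
// makes the call site fall back and relink, while unrelated classes stay
// cheap to invalidate.
public class HierarchyGuards {
    public static MethodHandle guardAll(MethodHandle target,
                                        MethodHandle fallback,
                                        List<SwitchPoint> hierarchyPoints) {
        MethodHandle guarded = target;
        for (SwitchPoint sp : hierarchyPoints) {
            // target and fallback must have the same type
            guarded = sp.guardWithTest(guarded, fallback);
        }
        return guarded;
    }
}

Each extra guard is another handle to compose when the site links, which
is presumably where the cold-performance cost mentioned above comes from;
in exchange, invalidating a class only touches the sites that actually
guard on it.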

We also use this invalidation mechanism when calling dynamic methods
from Java (since we also use call site caches there) but those sites are
not (yet) guarded by a SwitchPoint.

Yes, we have a few cases like this as well.

[...]
With recent improvements to MH boot time and cold performance, I've
started to use indy by default in more places, carefully measuring
startup overhead along the way. I'm well on my way toward having fully
invokedynamic-aware jitted code that is basically all invokedynamics.

invokedynamic by default is the way to go ;)
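
For reference, the minimal shape of an indy bootstrap with a relinking
fallback, as a generic sketch (not JRuby's or Groovy's actual bootstrap;
names are made up):

import java.lang.invoke.CallSite;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.MutableCallSite;

// Generic sketch: the compiler emits an invokedynamic instruction
// pointing at bootstrap(); the first call runs the slow lookup, after
// which the call site can be relinked to a faster (possibly
// SwitchPoint-guarded) handle.
public class IndyBootstrapSketch {
    public static CallSite bootstrap(MethodHandles.Lookup lookup,
                                     String name,
                                     MethodType type) throws Exception {
        MutableCallSite site = new MutableCallSite(type);
        MethodHandle slow = lookup.findStatic(IndyBootstrapSketch.class,
                "slowPath",
                MethodType.methodType(Object.class, MutableCallSite.class,
                                      String.class, Object[].class));
        // bind the site and the method name into the fallback,
        // collect the dynamic arguments into an Object[]
        MethodHandle fallback = MethodHandles.insertArguments(slow, 0, site, name)
                .asCollector(Object[].class, type.parameterCount())
                .asType(type);
        site.setTarget(fallback);
        return site;
    }

    static Object slowPath(MutableCallSite site, String name, Object[] args) {
        // a real runtime would do the method lookup here and then
        // site.setTarget(...) with a guarded fast path
        return "looked up " + name;
    }
}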

    It is also good to hear that the old "once invalidated, it will not
    be optimized again - ever" is no longer valid.

And hopefully it will stay that way as long as we keep making noise :-)

indeed ;)

bye Jochen

_______________________________________________
mlvm-dev mailing list
mlvm-dev@openjdk.java.net
http://mail.openjdk.java.net/mailman/listinfo/mlvm-dev
