On Wed, Oct 5, 2016 at 1:36 PM, Jochen Theodorou <blackd...@gmx.org> wrote:

> If I hear Remi saying volatile read... then it does not sound free to me
> actually. In my experience volatile reads still present inlining barriers.
> But if Remi and all of you tell me it is still basically free, then I will
> not look too much at the volatile ;)

The volatile read is only used in the interpreter.

> In Groovy we use SwitchPoint as well, but only one for the whole meta class
> system.... that could clearly be improved, it seems. Having a SwitchPoint per
> method is actually a very interesting approach I would not have considered
> before, since it means creating a ton of SwitchPoint objects. Not sure if
> that works in practice for me, since it is difficult to make a SwitchPoint
> for a method that does not exist in the super class but may come into
> existence later on - still, it seems I should be considering this.
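To make the per-method approach concrete, here is a minimal sketch (not Groovy's or JRuby's actual code; the class and method names are invented for illustration) of how one SwitchPoint can guard one cached method target, with invalidation flipping only that method's call paths to the fallback:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.SwitchPoint;

public class SwitchPointDemo {
    // Hypothetical cached target and its slow-path fallback.
    static int fastPath(int x) { return x * 2; }
    static int slowPath(int x) { return x + x; }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodType type = MethodType.methodType(int.class, int.class);
        MethodHandle fast = lookup.findStatic(SwitchPointDemo.class, "fastPath", type);
        MethodHandle slow = lookup.findStatic(SwitchPointDemo.class, "slowPath", type);

        // One SwitchPoint per method: it guards this method's cached target only.
        SwitchPoint sp = new SwitchPoint();
        MethodHandle guarded = sp.guardWithTest(fast, slow);

        System.out.println(guarded.invoke(21)); // fast path: 42

        // Redefining the method invalidates only this SwitchPoint;
        // every call site sharing it falls back to the slow path.
        SwitchPoint.invalidateAll(new SwitchPoint[] { sp });
        System.out.println(guarded.invoke(21)); // fallback path: 42
    }
}
```

Until invalidated, a SwitchPoint guard is constant-foldable by the JIT, which is why it can be close to free on the fast path.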

I suspect Groovy developers are also less likely to modify classes at
runtime? In Ruby, it's not uncommon to keep creating new classes or
modifying existing ones at runtime, though it is generally discouraged (all
Ruby runtimes suffer for it).

> cold performance is a consideration for me as well though. The heavy
> creation time of MethodHandles is one of the reasons we do not use
> invokedynamic as much as we could... especially considering that creating a
> new cache entry via runtime class generation and still invoking the method
> via reflection is actually faster than producing one of our complex method
> handles right now.

Creating a new cache entry via class generation? Can you elaborate on that?
JRuby has a non-indy mode, but it doesn't do any code generation per call.

> As for Charles' question:
>> Can you elaborate on the structure? JRuby has 6-deep (configurable)
>> polymorphic caching, with each entry being a GWT (to check type) and a SP
>> (to check modification) before hitting the plumbing for the method itself.
> right now we use a 1-deep cache with several GWT (check type and argument
> types) and one SP plus several transformations. My goal is of course also
> the 6-deep polymorphic caching in the end. My motivation for this was just
> not so high before. If I use several SwitchPoints, then of course each of
> them would be there for each cache entry. How many depends on the receiver
> type, but at least one for each super class (and interface).

Ahh, so when you invalidate, you only invalidate one class, but every call
site would have a SwitchPoint for the target class and all of its
superclasses. That will be more problematic for cold performance than
JRuby's way, but with less overhead when invalidating. I'm not sure which
trade-off is better.
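For readers following along, here is a rough sketch of the cache-entry shape described above: a guardWithTest (GWT) checks the receiver type, a SwitchPoint handles invalidation, and a miss on either falls through to the next entry or the slow path. The class and method names are invented; this is the general mechanism, not either runtime's real implementation:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.SwitchPoint;

public class CacheEntrySketch {
    // Hypothetical guard: is the receiver exactly the cached class?
    static boolean classCheck(Class<?> cached, Object receiver) {
        return receiver.getClass() == cached;
    }

    // Hypothetical cached target and slow-path fallback.
    static String cachedTarget(Object receiver) { return "cached: " + receiver; }
    static String fallback(Object receiver) { return "fallback: " + receiver; }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodType callType = MethodType.methodType(String.class, Object.class);

        MethodHandle target = lookup.findStatic(CacheEntrySketch.class, "cachedTarget", callType);
        MethodHandle fall = lookup.findStatic(CacheEntrySketch.class, "fallback", callType);
        MethodHandle check = lookup.findStatic(CacheEntrySketch.class, "classCheck",
                MethodType.methodType(boolean.class, Class.class, Object.class));

        // Bind the cached class into the type guard.
        MethodHandle isString = check.bindTo(String.class);

        // SwitchPoint guards against class-hierarchy modification.
        SwitchPoint sp = new SwitchPoint();
        MethodHandle spGuarded = sp.guardWithTest(target, fall);

        // GWT checks the receiver type; a miss falls through (here: straight
        // to the fallback, but in a PIC it would chain to the next entry).
        MethodHandle entry = MethodHandles.guardWithTest(isString, spGuarded, fall);

        System.out.println(entry.invoke("hi")); // cached: hi
        System.out.println(entry.invoke(42));   // fallback: 42
    }
}
```

In a 6-deep polymorphic cache, six such entries would be chained, each entry's miss path pointing at the next, with the last one bound to the uncached lookup.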

We also use this invalidation mechanism when calling dynamic methods from
Java (since we also use call site caches there) but those sites are not
(yet) guarded by a SwitchPoint.

> To my horror, I just found one piece of code commented with:
> //TODO: remove this method if possible by switchpoint usage

With recent improvements to MethodHandle boot time and cold performance,
I've started to use indy by default in more places, carefully measuring
startup overhead along the way. I'm well on my way toward jitted code that
is basically all invokedynamic.

> It is also good to hear that the old "once invalidated, it will not be
> optimized again - ever" is no longer valid.

And hopefully it will stay that way as long as we keep making noise :-)

- Charlie
mlvm-dev mailing list
