Hi Remi,

Thanks a lot for the clarifications! Makes sense to me now.

Best regards,
Tobias

On 13.09.2018 11:09, Remi Forax wrote:
> Hi Tobias,
> 
> [Switching back to valhalla-spec-experts]
> 
> ----- Original Message -----
>> From: "Tobias Hartmann" <[email protected]>
>> To: "valhalla-dev" <[email protected]>
>> Sent: Thursday, September 13, 2018 09:33:19
>> Subject: Re: Valhalla EG meeting notes Sep 12 2018
> 
>> [Switching from valhalla-spec-experts to valhalla-dev]
>>
>> Just wanted to add my 2 cents to the JIT part of this discussion:
>>
>> On 13.09.2018 00:22, Karen Kinnear wrote:
>>> Frederic: LW1 uses NO boxes today. JIT could not optimize boxes, can we 
>>> consider
>>> a model without boxes?
>>
>> +1
>>
>>> Brian: MVT, L&Q signatures were messy with boxes. With LWorld all those 
>>> problems
>>> go away.
>>> Frederic: Question of the number of types in the vm.
>>> Remi: Two types in the language level and 1 in VM.
>>> Karen: Goal of this exercise is:
>>>    1) user model requirements to support erased generics - requires null and
>>>    null-free references to value types
>>>    2) JIT optimizations in general for value types (not for erased 
>>> generics, but in
>>>    general code and in future in reified generics) - depend on null-free
>>>    guarantees.
>>> Our goal: minimal user model disruption for maximal JIT optimization.
>>
>> As explained below, for the JIT it would be optimal to be able to statically
>> (i.e. at compile time)
>> distinguish between nullable and null-free value types. We could then emit
>> highly optimized code for
>> null-free value types and fall back to java.lang.Object performance (or even
>> better) for nullable
>> value types.
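>>
>> To illustrate (a rough sketch in plain Java; Point is a hypothetical stand-in
>> for a value type, and the "compiled" shapes below are conceptual, not actual
>> compiler output):
>>
>>     // Hypothetical stand-in for a value type: two fields, no identity needed.
>>     final class Point {
>>         final int x, y;
>>         Point(int x, int y) { this.x = x; this.y = y; }
>>     }
>>
>>     final class CallingConvention {
>>         // Source-level method:
>>         //   static int lengthSquared(Point p) { return p.x * p.x + p.y * p.y; }
>>
>>         // Statically known null-free: the fields can be passed directly
>>         // (scalarized, e.g. in registers), no heap access, no null check.
>>         static int lengthSquaredScalarized(int px, int py) {
>>             return px * px + py * py;
>>         }
>>
>>         // Possibly null: fall back to passing a reference, i.e. roughly
>>         // java.lang.Object performance; null only fails on dereference.
>>         static int lengthSquaredByReference(Point p) {
>>             return p.x * p.x + p.y * p.y;   // NPE here if p is null
>>         }
>>     }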
>>
>>> The way I read your email Remi - I thought you had user model disruption, 
>>> but no
>>> information passed to the JIT, so no optimization benefit.
>>> Remi: If inlining, the JIT has enough information.
> 
> Let me clarify, because during that part of the conf call Frederic and I were 
> not on the same planet.
> Here, I was talking about a strategy to implement nullable value types at the 
> language level; this is neither a discussion about implementing non-nullable 
> value types in the language nor a discussion about implementing non-nullable 
> value types in the VM.
> 
> This strategy is named "Always null-free value types" in Dan Smith's latest 
> email; again, it's "null-free value types" from the language POV, not the VM 
> POV.
> 
>>
>> Yes, but even with aggressive inlining, we still need null checks/filtering
>> at the "boundaries" to be able to optimize (scalarize) nullable value types
>> (see the sketch after this list):
>> - Method entry with value type arguments
>> - Calls of methods returning a value type
>> - Array stores/loads
>> - Field stores/loads
>> - Loading a value type with a nullable (non-flattenable) field
>> - Checkcast to a value type
>> - When inlining method handle intrinsics through linkTo and casting Object
>> arguments to value type
>> - OSR entry with a live value type
>> - Every place where we can see constant NULL for a value type in the 
>> bytecodes
>> - Some intrinsics
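>>
>> As a rough sketch of two of those boundaries (plain Java again, reusing the
>> hypothetical Point from the sketch above; the explicit checks stand for the
>> filtering the JIT has to insert before it can keep the value in scalarized
>> form):
>>
>>     final class Boundaries {
>>         // Call returning a value type: filter null before scalarizing.
>>         static int useReturned(java.util.function.Supplier<Point> s) {
>>             Point p = s.get();
>>             if (p == null) {
>>                 throw new NullPointerException("value type expected");
>>             }
>>             return p.x + p.y;   // from here on, x/y can live in registers
>>         }
>>
>>         // Checkcast to a value type: the cast lets null through, so an
>>         // extra null filter is still needed.
>>         static int useCast(Object o) {
>>             Point p = (Point) o;
>>             if (p == null) {
>>                 throw new NullPointerException("value type expected");
>>             }
>>             return p.x;
>>         }
>>     }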
> 
> Yes, for non-nullable value types; for nullable value types the idea is to 
> erase them to Object (or their first super interface).
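> 
> A hand-wavy sketch of that erasure (the "Point?" notation is purely 
> hypothetical, Point again standing in for a value type):
> 
>     final class Erased {
>         // What the user would write (hypothetical nullable-value-type syntax):
>         //   static int xOrZero(Point? p) { return (p == null) ? 0 : p.x; }
>         //
>         // What javac could emit instead, erasing the nullable value type to
>         // Object and re-introducing a cast at the use site:
>         static int xOrZero(Object p) {
>             return (p == null) ? 0 : ((Point) p).x;
>         }
>     }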
> 
>>
>>> Frederic: Actually with field and method signatures it makes a huge 
>>> difference
>>> in potential JIT optimizations.
>>
>> Yes, it would make the above null filtering unnecessary for null-free value 
>> types.
>>
>>> Remi: Erased generics should have ok performance
>>> Frederic: We are talking about performance without generics - we want full
>>> optimization there.
> 
> Here you can see the miscommunication issue in plain sight: Frederic is 
> thinking about the semantics of non-nullable value types in the VM. 
> 
>>> Remi: what happens in LW1 if you send null to a value type generic 
>>> parameter?
>>> Frederic: LW1 supports nullable value types. We want to guarantee/enforce
>>> null-free vs. nullable distinction in the vm.
>>
>>> Remi: there are two kinds of entry points in the JIT’d code
>>> Frederic: 2: i2c which does null checks and if null calls the interpreter, 
>>> c2c -
>>> disallows nulls.
>>> editor’s note: did you get the implication - if we see a null, you are 
>>> stuck in
>>> the interpreter today because we all rely on dynamic checks.
>>
>> Yes, that's an important point.
>>
>>> Remi: For the vm, if you have a real nullable VT JIT can 
>>> optimize/deopt/reopt
>>> Frederic: this is brittle, lots of work, and uncertain
> 
> Yes, I fully agree with Frederic; I can live with non-first-class support of 
> nullable value types in the VM.
> 
>>
>> That's what we currently have with LW1. Although the language exposure is
>> limited by javac, nullable value types are *fully* supported in the VM/JIT,
>> but with a huge performance impact. We deoptimize when encountering NULL but
>> do not attempt to re-compile without scalarization. We could do that, but it
>> would mean that whenever you (accidentally) introduce NULL into your
>> well-written, highly optimized value type code, performance drops
>> significantly and stays at that level.
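>>
>> For example (hypothetical code, Point again standing in for a value type),
>> something as innocent as this would do it:
>>
>>     final class HotLoop {
>>         // Compiled with scalarized Points; a single null element causes a
>>         // deoptimization and, without a non-scalarized recompilation, the
>>         // method stays at interpreter-like speed afterwards.
>>         static long sumX(Point[] points) {
>>             long sum = 0;
>>             for (Point p : points) {
>>                 sum += p.x;
>>             }
>>             return sum;
>>         }
>>     }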
> 
> The fact that LW1 does null checks for value types is, in my opinion, an 
> independent issue: we need these null checks for migration, not necessarily 
> for supporting nullable value types.
> 
>>
>> Of course, there are ways to optimize this even without null-free value
>> types: for example by profiling and speculating on nullness, or by having two
>> compiled versions of the same method, one that supports nullable value types
>> by passing them as pointers (no deoptimization) and one that scalarizes
>> null-free value types to get peak performance.
>>
>> But as Frederic mentioned, these approaches are limited, complex and the 
>> gain is
>> uncertain. It's
>> maybe similar to escape analysis: We might be able to improve performance 
>> under
>> certain conditions
>> but it's very limited. I think the whole point of value types is 
>> performance, so
>> we should try to
>> get that right.
> 
> Technically it's better than "plain" escape analysis because there is no 
> identity so you can re-box only at the edges.
> 
> Anyway, there is a bigger issue. As John said, it means you have to support 
> really weird semantics for method calls, because each parameter whose type is 
> a value type can be null or not. If you want to specialize on that, you get a 
> combinatorial explosion of all the possibilities (2^n, with n the number of 
> value-type parameters). You can try to be lazy about it, but if you think in 
> terms of vtables, a method has to support its own specialization plus all the 
> specializations of the overridden methods.
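> For example, a hypothetical method int m(V1 a, V2 b, V3 c) with three 
> value-type parameters already has 2^3 = 8 possible nullable/null-free 
> combinations, and an overriding method has to be callable through every 
> specialization generated for the methods it overrides.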
> 
>>
>> To summarize: If we just need to support null value types and are fine with 
>> null
>> screwing up performance and having weird side effects, we are basically done 
>> today. If null
>> value types need to perform well (i.e. similar to j.l.Object), optimally we 
>> would need to be able to
>> statically distinguish between nullable and null-free value types.
> 
> As I said above, I'm fine with the current semantics of non-nullable value 
> types in the VM, because if a null appears it's because of separate 
> compilation/migration.
> 
> I like your last sentence, because it's the whole point of the strategy of 
> erasing nullable value types in Java to Object in the classfile: a nullable 
> value type will perform as well as java.lang.Object, so instead of trying to 
> introduce a way to denote nullable value types in the classfile, let's erase 
> nullable value types to Object.
> Obviously, I'm lying here, because if you erase something to Object you need 
> a supplementary cast, and when you erase something you can get method 
> signature clashes, so it's a trade-off. But the advantage of this proposal is 
> that the VM doesn't have to bother to understand nullable value types, and 
> having a simple JVM spec is a HUGE win.
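> (To make the signature-clash point concrete: with the hypothetical "Point?" 
> notation, int f(Point? p) and int f(Object o) would both erase to f(Object) 
> and could no longer coexist in the same class.)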
> 
>>
>>> Remi: VM does not want nullable value types
>>> Frederic: VM wants to be able to distinguish null-free vs. nullable value 
>>> types,
>>> so for null-free we can optimize like qtypes and for nullable, we can get 
>>> back to
>>> Object performance.
>>
>> Yes, exactly.
> 
> Both sentences are true :)
> Those are different strategies.
> 
>>
>> Best regards,
>> Tobias
> 
> regards,
> Rémi
> 
