> On 22 Jan 2020, at 01:43, Ronie Salgado <[email protected]> wrote:
> 
> That performance regression looks more like a language implementation bug 
> than a problem of the language itself.

Probably, but you are looking at it through an optimisation prism. The problem is 
not just the optimisation of #perform:, which should be done 
anyway, orthogonally to this problem. 

Now, what if we want to build static analysers?
We put the burden on the tool maker. 
If I have to write a type inferencer or a call-graph builder, then I will have to 
consider that the argument is either
        - a block, and do block analysis, or
        - a symbol, and apply a special treatment.

This is slightly unrelated, but since any object understands #value…
we just dropped the possibility of using the code representation to gain some 
information about the program. 
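Concretely, the two forms below produce the same result in Pharo, but only the first exposes the send to a static tool; the second relies on Symbol>>value: (implemented roughly as ^ anObject perform: self), so the actual send only exists at run time:

```smalltalk
"Both expressions answer #(1 4 9)."
#(1 2 3) collect: [ :each | each squared ].   "block: the send of #squared is explicit in the AST"
#(1 2 3) collect: #squared.                   "symbol: #squared is only a literal here; the send happens at run time via #perform: inside Symbol>>value:"
```

An analyser looking at the second expression sees a literal symbol, not a message send, so the call graph and any type inference on the receiver of #squared are lost.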

It is exactly for this reason that the classDefinition is a good move: defining a 
class by sending a message to a class
is a bad idea (easy and sweet at the beginning), because then you cannot build nice 
tools, you cannot easily build a remote code browser
or perform dependency analyses (oh yes we can: we built Ring for it, and much 
more). 


> If we assume that #do: and some other selectors (e.g. #select:, #reject:) 
> should always receive a block instead of a symbol, then the compiler could 
> perfectly replace the symbol literal with an already instantiated block 
> literal, transparently from the user's perspective.

Yes, this is the optimisation I mentioned. 
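A sketch of that rewrite, with hypothetical names (people, #name) for illustration: for a closed list of selectors such as #collect:, a literal symbol argument

```smalltalk
"What the user writes:"
people collect: #name.
"What the compiler would emit instead, as if the user had written:"
people collect: [ :each | each name ].
```

would save one #perform: activation per element (the cost hidden in Symbol>>value:), without the user changing anything.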

> If the compiler does that we can save the bytecode for instantiating the 
> block closure, which can save a potential roundtrip to the garbage collector. 
> I guess (I am just speculating) that this performance overhead must be in the 
> implementation of the #perform: primitive, which I guess has to:
> 1) go through the JIT-stack-to-C-stack transition (saving/restoring 
> interpreter/JIT state, additional pressure, and primitive activation overhead);
> 2) the lack of inline caches for #perform: (again, I am just guessing in this 
> case).
> 
> Note that the OpalCompiler is currently inlining some methods such as 
> #ifTrue:ifFalse:, #ifNil:ifNotNil:, #and:, and #or:, and these are not actual 
> message sends, so adding an additional list of selectors where literal symbol 
> arguments are synthesized as blocks is no different from the cheating that the 
> current compiler is doing. If a user wants to disable this inlining, there 
> is currently a pragma for telling the compiler.
> 
> Do you want me to propose an experimental patch for testing this 
> infrastructure?

It would be nice, but it is not exactly my point. 
Afterwards we should also see what the impact is on the decompiler, the debugger, …


Also, there is no such closed list of selectors that accept a block and may now get 
a symbol. 

In the end, what I would like in Pharo is a scientific process, 
        where we 
                propose, 
                measure, 
                evaluate, and 
                decide once we have the data, 

and not just "oh, this is easy, let us make it." 

Pharo will not become a nice language just by adding tricks on top of tricks. 

S. 




