On Wed, Sep 4, 2013 at 3:35 AM, Igor Stasenko <[email protected]> wrote:

> i think you can feel the difference while running interpreter.
> With JIT it makes little sense to have special selectors imo.
>

There are two advantages. Space: a special selector encodes a message send
in 1 byte, instead of 1 byte for the send bytecode plus 4 bytes for the
literal selector. Optimization: the JIT (like the interpreter) inlines
+, -, *, / etc. for SmallInteger, and does even more inlining for
>, <, >=, <=, == when they're followed by a jump.
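
To illustrate the second point, here is a rough C sketch of what "inlining
#+ for SmallInteger" means inside the interpreter; the tagging scheme and all
the names are simplified assumptions for the sketch, not the actual
Interpreter/Cog code:

    /* Rough sketch only -- not the real VM code.  Assumes 1-bit tagging
       where a set low bit marks a SmallInteger, and that SmallIntegers are
       kept within [SMALLINT_MIN, SMALLINT_MAX] so the add below cannot
       overflow intptr_t. */

    #include <stdint.h>

    typedef intptr_t oop;                       /* tagged object pointer */

    #define isSmallInt(x)  (((x) & 1) == 1)
    #define untag(x)       ((x) >> 1)           /* arithmetic shift assumed */
    #define tag(x)         ((oop)((x) * 2 + 1))
    #define SMALLINT_MAX   (INTPTR_MAX >> 2)    /* conservative range for the sketch */
    #define SMALLINT_MIN   (-SMALLINT_MAX - 1)

    extern oop sendMessage(oop rcvr, const char *selector, oop arg);  /* slow path */

    oop bytecodePrimAdd(oop rcvr, oop arg)
    {
        if (isSmallInt(rcvr) && isSmallInt(arg)) {
            intptr_t sum = untag(rcvr) + untag(arg);
            if (sum >= SMALLINT_MIN && sum <= SMALLINT_MAX)
                return tag(sum);                /* no message send at all */
        }
        return sendMessage(rcvr, "+", arg);     /* fall back to a real send */
    }

The compare-and-jump case is the same idea taken one step further: when
>, <, >=, <= or == is immediately followed by a conditional jump, the tags can
be tested and the branch taken directly, without ever materializing a
true/false object.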

As far as space goes, now that the system is 32-bit (and especially if it
becomes 64-bit, where a literal slot costs 8 bytes), there is an opportunity
to add 256 special selectors and use a two-byte special-selector bytecode,
since 2 bytes < 1 byte for the send bytecode + 4 bytes for the literal.
VisualWorks does this. There are problems, though: over time the statically
most frequent selectors change as the library changes (e.g. blockCopy: is no
longer used).
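
For illustration, a hypothetical two-byte special-send bytecode could be
decoded roughly like this in the interpreter loop; the table and function
names below are made up for the sketch and are not an existing
StackInterpreter/Cog API:

    /* Hypothetical sketch of a two-byte "extended special send": the first
       byte selects this bytecode, the second byte indexes a table of up to
       256 well-known selectors, so a send costs 2 bytes of compiled method
       instead of 1 byte (send bytecode) + 4 bytes (32-bit literal slot). */

    typedef intptr_t oop;

    extern int  fetchByte(void);                         /* next bytecode byte (assumed)   */
    extern oop  extendedSpecialSelectors[256];           /* selector oops (assumed table)  */
    extern unsigned char extendedSpecialNumArgs[256];    /* argument counts (assumed)      */
    extern void normalSend(oop selector, int numArgs);   /* generic send path (assumed)    */

    void extendedSpecialSendBytecode(void)
    {
        int index = fetchByte();                 /* second byte of the bytecode */
        normalSend(extendedSpecialSelectors[index],
                   extendedSpecialNumArgs[index]);  /* dispatch as an ordinary send */
    }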


>
> On 4 September 2013 12:31, Nicolas Cellier <
> [email protected]> wrote:
>
>> Agree,
>> My point was to ensure there was no speedup, so it's only a space
>> optimization (1 slot saved from the literals, times 5000 or so senders per
>> selector, plus maybe a byte for the send bytecode?).
>> However, the arithmetic ops, comparisons, bit ops, at:, at:put:, == and
>> class special selectors still have some specific speed-up, especially in
>> Cog, so we cannot get rid of specialSelectors altogether...
>>
>>
>> 2013/9/4 Marcus Denker <[email protected]>
>>
>>>
>>> On Sep 4, 2013, at 12:08 AM, Nicolas Cellier <
>>> [email protected]> wrote:
>>>
>>> > I note that #class was removed from specialSelectors (nilled entry) so
>>> > as not to use the VM hack which fetches the class without sending a message.
>>> > Pharo prefers the regular message send.
>>> > But next to that entry there is #blockCopy:, which was formerly used
>>> > for the blue-book BlockContext.
>>> > BlockContext was removed from Pharo...
>>> > So that makes two available slots for optimizing the most frequently sent
>>> > messages...
>>> > We might choose some candidates and test on some macro benchmark whether
>>> > that really makes a difference.
>>>
>>> I am not sure if optimizations on that level make sense…
>>>
>>>         Marcus
>>>
>>
>>
>
>
> --
> Best regards,
> Igor Stasenko.
>



-- 
best,
Eliot
