On 24 Oct 2014 at 15:51, "Clément Bera" <[email protected]> wrote:
>
> The current x2 speed boost is due only to Spur, not to Sista. Sista will
provide additional performance, but we still have things to do before it is
production-ready.
>
> The performance gains reported are due to (from most important to least
important):
> - the new GC has less overhead. 30% of the execution time used to be
spent in the GC.
> - the new object format speeds up some VM internal caches (especially
inline caches for message sends, thanks to the indirection of object classes
through a class table).
> - the new object format allows some C code to be converted into machine
code routines, including block creation, context creation, and primitive
#at:put:; this is faster because switching from jitted code to C and back
to jitted code generates some overhead.
> - characters are now immediate objects, which speeds up String accessing.
> - the new object format has a larger hash which speeds up big hashed
collections such as big sets and dictionaries.
> - become is faster.
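
The larger-hash point above can be sketched outside Smalltalk. The bit
widths below (12 vs. 22) are illustrative assumptions, not measurements
from Spur; the point is simply that a narrow identity hash caps the number
of distinct hash values, so big hashed collections degenerate into long
collision chains.

```python
import random

# Illustrative sketch (not Spur's actual implementation): with a
# narrow hash, many objects are forced to share the same hash value,
# so a big Set or Dictionary probes long collision chains.
# The bit widths are assumptions for illustration only.
random.seed(1)
n = 100_000
for bits in (12, 22):
    hashes = [random.getrandbits(bits) for _ in range(n)]
    distinct = len(set(hashes))
    print(f"{bits}-bit hash: {distinct} distinct values for {n} objects")
```

With 12 bits there are at most 4096 possible hash values, so 100,000
objects are guaranteed heavy collisions; with 22 bits most objects end up
with a hash of their own.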

Amazing comes to mind...

Looks like a case of 1+1=much more than 2.

Keep up the good work; you guys are setting a high standard for us to
match. It is truly inspiring!

Phil
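
As an aside, the UTF-8 decode micro-benchmark from the IRC log quoted
further down can be approximated outside Pharo. This is a rough Python
sketch, not the original ZnUTF8Encoder bench; the byte array is the one
from the log (it decodes to "Cash, like €, is king" plus a trailing NUL),
and the loop count is arbitrary.

```python
import timeit

# The byte sequence from the IRC bench below; valid UTF-8 for
# "Cash, like €, is king" followed by a NUL byte.
data = bytes([67, 97, 115, 104, 44, 32, 108, 105, 107, 101, 32,
              226, 130, 172, 44, 32, 105, 115, 32, 107, 105, 110, 103, 0])
text = data.decode('utf-8')

# Rough analogue of Smalltalk's [...] bench: run the decode many
# times and report decodes per second.
n = 200_000
seconds = timeit.timeit(lambda: data.decode('utf-8'), number=n)
print(f"{n / seconds:,.0f} decodes per second")

# Sanity check on the figures quoted below: 289,000 vs 167,000
# per second is about a 1.73x improvement, i.e. roughly 73% more
# work done.
print(f"Spur vs Cog speedup: {289_000 / 167_000:.2f}x")
```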
>
>
> 2014-10-24 15:20 GMT+02:00 kilon alios <[email protected]>:
>>
>> Thanks Max, I completely forgot about the ESUG videos; looks like I've found
what to watch during the weekend :D
>>
>> On Fri, Oct 24, 2014 at 4:12 PM, Max Leske <[email protected]> wrote:
>>>
>>>
>>>> On 24.10.2014, at 15:06, kilon alios <[email protected]> wrote:
>>>>
>>>> very nice
>>>>
>>>> so, any more information on this? How exactly does this optimization work,
and which kinds of data will benefit from it?
>>>
>>>
>>> Clément’s byte code set talk at ESUG:
http://www.youtube.com/watch?v=e9J362QHwSA&index=64&list=PLJ5nSnWzQXi_6yyRLsMMBqG8YlwfhvB0X
>>> Clément’s Sista talk at ESUG (2 parts):
>>>
http://www.youtube.com/watch?v=X4E_FoLysJg&list=PLJ5nSnWzQXi_6yyRLsMMBqG8YlwfhvB0X&index=76
>>>
http://www.youtube.com/watch?v=gZOk3qojoVE&list=PLJ5nSnWzQXi_6yyRLsMMBqG8YlwfhvB0X&index=75
>>>
>>> Eliot’s Spur talk at ESUG (3 parts):
>>>
http://www.youtube.com/watch?v=k0nBNS1aHZ4&index=49&list=PLJ5nSnWzQXi_6yyRLsMMBqG8YlwfhvB0X
>>>
http://www.youtube.com/watch?v=sn3irBZE7g4&index=48&list=PLJ5nSnWzQXi_6yyRLsMMBqG8YlwfhvB0X
>>>
http://www.youtube.com/watch?v=1Vg0iFeg_pA&list=PLJ5nSnWzQXi_6yyRLsMMBqG8YlwfhvB0X&index=47
>>>
>>>>
>>>> On Fri, Oct 24, 2014 at 3:47 PM, Sebastian Sastre <
[email protected]> wrote:
>>>>>
>>>>> remarkable!!!
>>>>>
>>>>> congratulations for the impressive results
>>>>>
>>>>> thanks for sharing!
>>>>>
>>>>> sebastian
>>>>>
>>>>> o/
>>>>>
>>>>> > On 23/10/2014, at 17:40, Max Leske <[email protected]> wrote:
>>>>> >
>>>>> > For those of you who missed this on IRC:
>>>>> >
>>>>> > henriksp: estebanlm: Care to run a small bench Cog vs Spur for me?
>>>>> > [3:32pm] henriksp: int := ZnUTF8Encoder new.
>>>>> > [3:32pm] henriksp: [int decodeBytes:#[67 97 115 104 44 32 108 105
107 101 32 226 130 172 44 32 105 115 32 107 105 110 103 0]] bench.
>>>>> > [3:32pm] henriksp: had a 16x speedup with assembly implementation
vs Cog, if it's 8x vs Spur, that's just really impressive
>>>>> > [4:20pm] estebanlm: checking
>>>>> > [4:21pm] estebanlm: Cog: 167,000 per second.
>>>>> > [4:22pm] estebanlm: Cog[Spur]: 289,000 per second.
>>>>> > [4:23pm] estebanlm: henriksp: ping
>>>>> > [4:34pm] henriksp: 70% more work done, nice!
>>>>> >
>>>>> >
>>>>> > Yay! :)
>>>>>
>>>>
>>>
>>
>
