On Apr 15, 2013, at 4:50 PM, Igor Stasenko wrote:

> On 15 April 2013 16:43, Igor Stasenko <siguc...@gmail.com> wrote:
>> On 15 April 2013 16:32, Henrik Johansen <henrik.s.johan...@veloxit.no> wrote:
>>> 
>>> On Apr 15, 2013, at 3:25 PM, Igor Stasenko wrote:
>>> 
>>>> On 15 April 2013 12:14, Henrik Johansen <henrik.s.johan...@veloxit.no> 
>>>> wrote:
>>>>> 
>>>>> On Apr 13, 2013, at 12:16 AM, Igor Stasenko wrote:
>>>>> 
>>>>>> 
>>>>>> But that's fine.. now look at
>>>>>> 
>>>>>> secondsWhenClockTicks
>>>>>> 
>>>>>>     "waits for the moment when a new second begins"
>>>>>> 
>>>>>>     | lastSecond |
>>>>>> 
>>>>>>     lastSecond := self primSecondsClock.
>>>>>>     [ lastSecond = self primSecondsClock ]
>>>>>>         whileTrue: [ (Delay forMilliseconds: 1) wait ].
>>>>>> 
>>>>>>     ^ lastSecond + 1
>>>>>> 
>>>>>> that is complete nonsense. Sorry.
>>>>>> 
>>>>>> This code relies on the primSecondsClock resolution, which is..... (drum
>>>>>> roll..... )
>>>>>> 1 second..
>>>>>> 
>>>>>> Then it is combined with millisecondClockValue, as you see later, to get
>>>>>> the system time with millisecond precision..
>>>>>> 
>>>>>> I am no genius in math and physics.. but even I understand that if
>>>>>> your measurement has error X,
>>>>>> you cannot get more precision than X, even if you combine it with
>>>>>> another measurement of higher precision.
>>>>>> 
>>>>>> (But I can be wrong about that.. if so, please explain why)
>>>>> 
>>>>> Because, as Levente noted in the code, resolution != accuracy.
>>>>> When you measure a single sample, sure, that value has an error of at
>>>>> most resolution / 2.
>>>>> But when you measure the moment that value _changes_, you have an error
>>>>> of the underlying accuracy / 2.
>>>>> Clock's value:                  1               2
>>>>> resolution:                     |_______|_______|
>>>>> accuracy:                       |||||||||||||||||||||||||||||||||||
>>>>>                                                        ^
>>>>> Here's to hoping the above ascii art(?) worked!
>>>>> 
>>>>> So no, that method is not complete nonsense.
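>>>>> 
>>>>> A rough sketch of how that tick gets combined with the millisecond clock
>>>>> (hypothetical helper, untested, just to illustrate the idea):
>>>>> 
>>>>> millisecondsAtNextSecondTick
>>>>>     "Wait for the seconds clock to tick over, then sample the millisecond
>>>>>     clock at that instant. The pair anchors the coarse clock with
>>>>>     sub-second accuracy: a later reading is then
>>>>>     tickSecond + ((self millisecondClockValue - msAtTick) / 1000)."
>>>>> 
>>>>>     | tickSecond msAtTick |
>>>>>     tickSecond := self secondsWhenClockTicks.
>>>>>     msAtTick := self millisecondClockValue.
>>>>>     ^ Array with: tickSecond with: msAtTick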
>>>>> 
>>>> Okay.
>>>> Now I am rewriting the code to use the new primitive
>>>> (primUTCMicrosecondClock),
>>>> and then we will no longer need this startup logic, nor the wrap-around
>>>> check & other gory details...
>>>> just use the primitive.
>>> 
>>> Sounds clearer, yes.
>>> 
>>> However, be aware, at least on my VM:
>>> [Time millisecondClockValue] bench '34,600,000 per second.'
>>> [Time primUTCMicrosecondClock] bench '12,400,000 per second.'
>>> 
>>> Which in my case translated to the following, in a Time2 class with #now
>>> defined as:
>>> 
>>> now
>>> 
>>>        | microSecondsToday |
>>>        microSecondsToday := self primUTCMicrosecondClock \\ 86400000000.
>>>        ^ self seconds: microSecondsToday // 1000000
>>>                nanoSeconds: microSecondsToday \\ 1000000 * 1000
>>> 
>>> and picking the relatively low-hanging fruit in the standard Time class
>>> (removing the Duration creation and other crap):
>>> 
>>> seconds: seconds nanoSeconds: nanoCount
>>>        "Answer a Time from midnight."
>>> 
>>>        ^ self basicNew
>>>                seconds: seconds nanoSeconds: nanoCount
>>> 
>>> This gave benchmarks that looked like this:
>>> "Old code"
>>> [Time now ] bench '7,330,000 per second.' '7,280,000 per second.' 
>>> '7,170,000 per second.'
>>> "Using microsecondClock every call"
>>> [Time2 now ] bench  '2,280,000 per second.''2,300,000 per second.' 
>>> '2,180,000 per second.'
>>> 
>>> Also note, Time2 lacks the translation to local time, which would be
>>> needed for consistency with Time now; a rough sketch of that follows.
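>>> 
>>> A local-time variant could presumably look something like this (untested
>>> sketch, assuming DateAndTime localOffset answers the UTC offset as a
>>> Duration):
>>> 
>>> now
>>> 
>>>        | microSecondsToday |
>>>        "shift to local time before folding into the current day"
>>>        microSecondsToday := self primUTCMicrosecondClock
>>>                + (DateAndTime localOffset asSeconds * 1000000) \\ 86400000000.
>>>        ^ self seconds: microSecondsToday // 1000000
>>>                nanoSeconds: microSecondsToday \\ 1000000 * 1000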
>>> 
>> 
>> Well, I have not started modifying Time now yet (will start in a couple of
>> minutes ;) but here are the results of DateAndTime now,
>> before and after:
>> 
>> before:
>> 
>> 
>> [ DateAndTime now ] bench '316,000 per second.'
>> 
>> after:
>> 
>> [ DateAndTime now ] bench '414,000 per second.'
>> 
>> 
>> With Time now, I think even "2,300,000" answers per second
>> is fairly acceptable: when the accuracy of the primitive is 1 microsecond,
>> you don't actually need more than 1 million answers per second, or you
>> will just receive the same answer more than once.
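>> 
>> For example (a rough workspace illustration; exact behaviour depends on
>> the machine):
>> 
>> | a b |
>> a := Time primUTCMicrosecondClock.
>> b := Time primUTCMicrosecondClock.
>> a = b "often true: two back-to-back calls can land in the same microsecond"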
>> 
>> But aside from that, may I ask why performance is so important/critical
>> for these? What is the use case?
>> 
> 
> everything is good only in comparison:
> 
> [ self yourself ] bench '58,800,000 per second.'
> 
> It does nothing, and is just 25 times faster than Time now..
> so what is the point? :)

The point was to raise awareness that performance differences are also worth 
measuring when doing a reimplementation, not just code cleanliness.
A 3x decrease in performance deserves at least some consideration as part of a 
reimplementation, even when the conclusion is "it's plenty fast enough for most 
uses anyway".

Not to mention, "Made X Y% faster" looks better in the release notes than 
"Restructured the internals of X" :)

Cheers,
Henry

