Sven, FYI: if you have the slice, I can run all our tests against it. We do quite a bit with DateAndTime in a planning app :-) so potential issues might be exposed.
Johan

On 27 Mar 2014, at 15:17, Sven Van Caekenberghe <[email protected]> wrote:

> Umbrella issue:
>
> https://pharo.fogbugz.com/f/cases/13139/Speed-Regressions-in-DateAndTime
>
> I rewrote (and simplified) DateAndTime>>#+ #- #=
> I added caching for #epoch
> I switched the localTimeZone to an #asFixedTimeZone variant
>
> My benchmark now runs 40x FASTER than in Pharo 1.4
>
> All Chronology tests remain green.
>
> I will upload slices when the funding goal of 1,000,000 USD is reached on
> https://www.kickstarter.com/projects/1003214829/improve-pharo-dateandtime-performance
>
> PS: Half of the funds will go to Camillo for making DateAndTime work in UTC internally ;-)
>
> PS2: I am not sure I am allowed to fix this so close to release; is it a feature or a bug fix? ;-)
>
> On 27 Mar 2014, at 12:29, Sven Van Caekenberghe <[email protected]> wrote:
>
>> On 27 Mar 2014, at 11:55, Johan Brichau <[email protected]> wrote:
>>
>>> On 27 Mar 2014, at 11:50, Sven Van Caekenberghe <[email protected]> wrote:
>>>
>>>> It is slow(er) in 2 and fast(er) in 3, according to this discussion and my reading of the code. If you see the inverse, then please provide some details.
>>>
>>> We come from Pharo 1.4, where our timing benchmarks that use a lot of DateAndTime operations run 4x faster (in GemStone too).
>>> It is indeed faster in 3 than in 2. (I believe because of a wait in the DateAndTime creation that has to do with clock precision.)
>>>
>>> Johan
>>
>> A 4x (400%) slowdown sounds like an unacceptable regression to me.
>>
>> Could you maybe provide some benchmark?
>>
>> I did a quick one (in Pharo 3):
>>
>> [
>>   | timestamps |
>>   timestamps := Array streamContents: [ :out |
>>     1024 timesRepeat: [
>>       out nextPut: (DateAndTime now - (60*60*24*265) atRandom seconds) ] ].
>>   64 timesRepeat: [ timestamps sorted ].
>>   timestamps sort.
>>   timestamps collect: [ :each | timestamps includes: each ] ] timeToRun.
>>
>> => "0:00:00:09.491"
>>
>> In 1.4, this is indeed about 40 times faster:
>>
>> [
>>   | timestamps |
>>   timestamps := Array streamContents: [ :out |
>>     1024 timesRepeat: [
>>       out nextPut: (DateAndTime now - (60*60*24*265) atRandom seconds) ] ].
>>   64 timesRepeat: [ timestamps sorted ].
>>   timestamps sort.
>>   timestamps collect: [ :each | timestamps includes: each ] ] timeToRun milliSeconds.
>>
>> => "0:00:00:00.228"
>>
>> For this test, ZTimestamp is 100 times faster:
>>
>> [
>>   | timestamps |
>>   timestamps := Array streamContents: [ :out |
>>     1024 timesRepeat: [
>>       out nextPut: (ZTimestamp now - (60*60*24*265) atRandom seconds) ] ].
>>   64 timesRepeat: [ timestamps sorted ].
>>   timestamps sort.
>>   timestamps collect: [ :each | timestamps includes: each ] ] timeToRun.
>>
>> => "0:00:00:00.07"
>>
>> Looking at this with the Time Profiler, I see that DateAndTime>>#asSeconds (used in #<) takes 95% of the time. I am pretty sure we can fix this. I'll make an issue and slice later on.
>>
>> Sven
>
> --
> Sven Van Caekenberghe
> Proudly supporting Pharo
> http://pharo.org
> http://association.pharo.org
> http://consortium.pharo.org
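[Editor's note] The profile above points at DateAndTime>>#asSeconds being re-evaluated inside every #< comparison while sorting, which is exactly the kind of cost that caching #epoch and a fixed time zone removes. The Python sketch below is an illustrative analogy only, not Pharo code: `Timestamp`, `as_seconds`, and the call counter are made-up names. It shows why a conversion done per comparison costs O(n log n) calls, while a cached conversion costs at most one call per element:

```python
import functools

class Timestamp:
    """Toy stand-in for a timestamp class (illustration only, not Pharo API)."""
    conversions = 0  # counts "expensive" conversions across all instances

    def __init__(self, ticks):
        self.ticks = ticks
        self._cache = None

    def as_seconds(self):
        # Stand-in for an expensive conversion (think: rebuilding
        # time-zone/epoch offsets on every call).
        Timestamp.conversions += 1
        return self.ticks

    def as_seconds_cached(self):
        # Convert once and remember the result -- the effect of caching
        # the epoch and using a fixed time zone.
        if self._cache is None:
            self._cache = self.as_seconds()
        return self._cache

stamps = [Timestamp(t) for t in range(100, 0, -1)]

# Per-comparison conversion: the converter runs on every comparison,
# roughly O(n log n) times for a sort of n elements.
Timestamp.conversions = 0
by_comparison = sorted(stamps, key=functools.cmp_to_key(
    lambda a, b: a.as_seconds() - b.as_seconds()))
per_comparison_calls = Timestamp.conversions

# Cached conversion: at most one expensive conversion per element.
Timestamp.conversions = 0
by_cache = sorted(stamps, key=Timestamp.as_seconds_cached)
cached_calls = Timestamp.conversions

assert [t.ticks for t in by_comparison] == list(range(1, 101))
assert [t.ticks for t in by_cache] == list(range(1, 101))
assert cached_calls == 100              # exactly one conversion per timestamp
assert per_comparison_calls > cached_calls
```

Both sorts produce the same order; only the number of expensive conversions differs, which is why the hot spot showed up in sorting and `includes:` rather than in timestamp creation.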
