As a single data point:

======================  anonymous_fix.d  ==========
500000500000

real    0m0.168s
user    0m0.200s
sys     0m0.380s

======================  colvin_fix.d  ==========
500000500000

real    0m0.036s
user    0m0.124s
sys     0m0.000s

======================  norwood_reduce.d  ==========
500000500000

real    0m0.009s
user    0m0.020s
sys     0m0.000s

======================  original.d  ==========
218329750363

real    0m0.024s
user    0m0.076s
sys     0m0.000s

original.d is the original: not entirely slow, but broken :-). anonymous_fix.d is the anonymous poster's synchronized-keyword version — correct, but slow. colvin_fix.d is John Colvin's use of atomicOp — correct, but only OK-ish on speed. Jay Norwood first proposed the reduce answer on the list; I amended it a tiddly bit, but it is clearly the resounding speed winner.

I guess we need a benchmark framework that can run each of these 100 times, taking processor times, and then do the statistics on them. Most people would assume a normal distribution of results and compute mean/standard deviation as well as the median.

--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
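For readers following along: the colvin_fix.d variant isn't shown here, but given the description it presumably replaces the shared-counter update with core.atomic.atomicOp inside a parallel foreach. A minimal sketch under that assumption (the variable names are mine, not from the original file):

```d
import core.atomic : atomicOp;
import std.parallelism : parallel;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    shared ulong total = 0;
    // Correct: atomicOp makes each += a single atomic read-modify-write.
    // Slow-ish: every iteration contends on the same shared cache line,
    // which is why this version loses to the per-worker reduce below.
    foreach (i; parallel(iota(1UL, 1_000_001UL)))
        atomicOp!"+="(total, i);
    writeln(total); // 500000500000
}
```

The contention on `total` is the whole story of its middling time: the answer is right, but the workers spend their time fighting over one word of memory.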
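And a sketch of what the winning norwood_reduce.d approach presumably looks like, using std.parallelism's taskPool.reduce (again, a reconstruction from the description, not the actual file):

```d
import std.parallelism : taskPool;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    // Each worker reduces its own chunk of the range privately, and
    // only the per-worker partial sums are combined at the end — no
    // shared mutable state, so no synchronization in the hot loop.
    immutable total = taskPool.reduce!"a + b"(iota(1UL, 1_000_001UL));
    writeln(total); // 500000500000
}
```

This is the classic map/reduce shape: the speed win comes from eliminating the shared accumulator entirely rather than from protecting it more cheaply.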