2009/4/30 Igor Stasenko <[email protected]>:
> Here the Pharo benchs
>
> 100 "processes" benchSwitch: 5. "seconds"
>
> 1) "No changes to Processor"
> '2,520,509 switches/sec'
> '2,520,590 switches/sec'
> '2,413,224 switches/sec'
>
> 2) "After changes to Processor, but using old VM primitives"
> '1,106,806 switches/sec'
> '1,110,870 switches/sec'
> '1,116,981 switches/sec'
> '1,204,289 switches/sec'
>
> "After installing new Processor"
>
> '152,779 switches/sec'
> '150,913 switches/sec'
> '152,608 switches/sec'
> '153,844 switches/sec'
>
> It's interesting that the transition from 1) to 2),
> which just lengthens the send chain to reach the primitive, like:
>
What's even more interesting is that I added a bench:
AdvancedProcessorScheduler>>bench: seconds
	| proc count |
	count := 0.
	proc := [ [ Processor interruptWith: [ count := count + 1 ] ] repeat ]
		forkAt: Processor activePriority - 1.
	(Delay forSeconds: seconds) wait.
	proc terminate.
	^ (count // seconds) asStringWithCommas , ' switches/sec'
And it shows the following results:
Processor bench: 5
'1,039,398 switches/sec'
'1,060,727 switches/sec'
'1,083,370 switches/sec'
As you can see, the scheduler loop alone is quite fast, nearly
matching the speed of the VM-based scheduler.
Which makes me wonder why semaphore signal/wait causes so much speed
degradation...
It looks like it loops where it shouldn't. I need to inspect it more
closely. :)
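To isolate the semaphore cost from the scheduler loop, one could measure bare signal/wait round-trips with the stock VM scheduler. This is just a sketch under assumptions I haven't verified in this image: the waiter runs at a higher priority so each #signal forces an immediate switch, and the 5-second cutoff via the millisecond clock is a hypothetical choice, not part of the original benchmarks:

```smalltalk
"Hypothetical micro-benchmark: count Semaphore signal/wait
 round-trips per second, with no Processor changes installed.
 The waiter runs one priority above us, so every #signal
 switches to it immediately and #wait switches back."
| sema count waiter start |
sema := Semaphore new.
count := 0.
waiter := [ [ sema wait. count := count + 1 ] repeat ]
	forkAt: Processor activePriority + 1.
start := Time millisecondClockValue.
[ Time millisecondClockValue - start < 5000 ]
	whileTrue: [ sema signal ].
waiter terminate.
(count // 5) asStringWithCommas , ' switches/sec'
```

Comparing that number against the interruptWith: bench above would show how much of the drop is attributable to the semaphore path itself rather than the scheduler loop.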
> Semaphore>>wait >> primitive
> Semaphore>>wait >>
> Processor>>waitForSemaphore:>>Semaphore>>primitiveWait>>primitive
>
> degrades the bench numbers by more than a 2x factor.
>
> 2009/4/30 Stéphane Ducasse <[email protected]>:
--
Best regards,
Igor Stasenko AKA sig.
_______________________________________________
Pharo-project mailing list
[email protected]
http://lists.gforge.inria.fr/cgi-bin/mailman/listinfo/pharo-project