I'm learning and would love to hear ideas about how best to benchmark
Smalltalk rendering. I tried this very crude test in a Workspace:

| loops rect elapsed |
loops := 100.
rect := Display boundingBox.
"Average cost of pushing the whole Display bitmap to the screen."
elapsed := Time millisecondsToRun: [
	loops timesRepeat: [ Display forceToScreen: rect ] ].
Transcript show: (elapsed / loops asFloat) printString , ' ms/update'; cr.
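
Since one update on the Surface comes in near 1 ms, millisecond resolution
is coarse here. A variant using Time microsecondsToRun: (present in my
Pharo image; worth confirming in yours) gives finer numbers:

| loops rect total |
loops := 100.
rect := Display boundingBox.
"Same measurement, at microsecond resolution."
total := Time microsecondsToRun: [
	loops timesRepeat: [ Display forceToScreen: rect ] ].
Transcript show: (total / loops / 1000.0) printString , ' ms/update'; cr.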

With a full-screen window on a current-model Mac Pro with a stock Pharo 3
image and VM it reports 47 ms/update. On my Surface Pro 2 (Windows 8) it
takes only 1.1 ms. I'm kind of shocked by that gap. The Surface does have a
lower-resolution screen, but nowhere near the ~40x that would explain it.

I know full-screen updates are not the norm for many applications, but
they're still very important. Also keep in mind this is just the low-level
screen update, with no Morphic overhead. Ideally the display update would
take less than a millisecond, leaving time for all the higher-level
rendering.

For performance assessment I'd like to have a nice Frames Per Second graph
running in the corner, as well as repeatable test cases matching common
uses (scrolling, full updates, partial updates, an empty window, a densely
packed grid, a text editor, etc.) that could be run across platforms.
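
A minimal sketch of such a harness, limited to the low-level cases (the
Morphic ones would each need scene setup first); the case names are mine:

| cases |
cases := Dictionary new.
cases at: #fullUpdate put: [ Display forceToScreen: Display boundingBox ].
cases at: #partialUpdate put: [ Display forceToScreen: (0@0 corner: 200@200) ].
"Run each case 100 times and report the mean milliseconds per run."
cases keysAndValuesDo: [ :name :block |
	| ms |
	ms := Time millisecondsToRun: [ 100 timesRepeat: block ].
	Transcript show: name asString , ': ' , (ms / 100.0) printString , ' ms'; cr ].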

After that I wonder about profiling. Can accurate timed traces be captured
from top to bottom? I imagine it gets murky at the VM boundary.
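
On the image side, MessageTally (which is in the stock image) gives a
sampled call tree, but it bottoms out at the primitive call:
forceToScreen: spends its time inside the VM's display primitive, so
anything below that would need a native profiler attached to the VM binary
itself.

"Sampled call-tree profile; expect most time to vanish into the display primitive."
MessageTally spyOn: [
	100 timesRepeat: [ Display forceToScreen: Display boundingBox ] ].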

I also noticed logic limiting display updates to 50 per second, presumably
to minimize CPU usage. Optimizing rendering would also reduce CPU usage,
but either way I'd like an update rate synchronized with the video refresh
rate or faster (60 Hz or more). That would minimize tearing and keep UI
lag imperceptible.
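
If the throttle I saw is WorldState's MinCycleLapse class variable (20 ms
between cycles in the image I looked at; treat the variable name as an
assumption about this particular image), a doIt like this would raise the
cap:

"Assumes the 50 Hz cap lives in WorldState's MinCycleLapse class
variable; verify in your own image before relying on this."
WorldState classPool at: #MinCycleLapse put: 16.  "roughly 60 Hz"

Note this only raises the cap; real synchronization with the display's
vertical refresh would need support from the VM.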



On Mon, Jun 2, 2014 at 3:48 PM, Esteban A. Maringolo <emaring...@gmail.com>
wrote:

> 2014-06-02 16:51 GMT-03:00 stepharo <steph...@free.fr>:
> > It depends what you measure.
> > If you measure classBrowser opening then nautilus is slower because it is
> > doing more stuff.
>
> If adding more features degrades performance in a humanly perceivable
> way then it's slowing down. (I mean, if the slowdown is linear as extra
> features are added.) The trade-offs might be worth it (as indeed they
> are).
>
> > So if you really want to help
> > make a benchmark showing the difference between 1.4 and 3.0 on a
> > **concrete** case. Without that we cannot make progress.
>
> Fair enough. I'll do so :)
> Next time I won't say it feels slower (but not slow) until I get an
> appropriate benchmark.
>
> Regards,
>
> --
> Esteban.
>
>
