With the status bar closed, there's more screen real estate to update, no?

On Sun, Apr 11, 2021 at 11:24 AM Daniel Heck <m...@dheck.net> wrote:

> Hm... it's hard to know what's going on without more information. Have you
> tried using a profiler (e.g. "gprof" or "oprofile" on Linux) to figure out
> _where_ the time is being spent? In general, I find CPU time to be a
> relatively poor proxy for performance, not just because it is influenced by
> things like CPU scaling, but also because it usually doesn't account for
> time spent waiting (I/O, memory, external devices) and time spent in other
> parts of the system (the operating system, hardware drivers, other
> processes).
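>
> If you haven't used one before, the gprof workflow is roughly this
> (assuming a gcc/g++ build; the binary name is only illustrative):
>
>     g++ -pg -O2 ... -o enigma     # rebuild with profiling hooks
>     ./enigma                      # run the level; writes gmon.out on exit
>     gprof ./enigma gmon.out       # per-function breakdown of CPU time
>
> With oprofile, running "operf ./enigma" and then "opreport" gives
> similar numbers without a rebuild.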
>
> That being said, Enigma's rendering "engine" is indeed antiquated and not
> a good fit for the way modern computers update the screen. Rendering in
> software and uploading the resulting image to the GPU is simply not
> efficient any more. Ideally, we would use the SDL_Render API to let the GPU
> do all of the drawing so that we have to transfer as little image data
> between the CPU and the GPU as possible. But this would require a
> significant rewrite of the display code...
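>
> Just to sketch the idea -- this is minimal SDL2 usage, not Enigma's
> actual display code, and "tileSurface" and the coordinates are
> placeholders:
>
>     // Create a GPU-backed renderer instead of blitting in software.
>     SDL_Window *win = SDL_CreateWindow("Enigma", SDL_WINDOWPOS_CENTERED,
>                                        SDL_WINDOWPOS_CENTERED, 640, 480, 0);
>     SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
>
>     // Upload each sprite sheet to video memory once, at load time ...
>     SDL_Texture *tiles = SDL_CreateTextureFromSurface(ren, tileSurface);
>
>     // ... then, per frame, only issue draw calls -- no pixel data has
>     // to cross the CPU/GPU bus any more.
>     SDL_RenderClear(ren);
>     SDL_Rect src = {0, 0, 32, 32}, dst = {64, 96, 32, 32};
>     SDL_RenderCopy(ren, tiles, &src, &dst);
>     SDL_RenderPresent(ren);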
>
> Now that I think of it, have you experimented with the way screen updates
> are handled in ecl::Screen::flush_updates()? When there are more than 200
> updated regions on the screen, the function simply updates the entire
> screen, which might be related to your observation that drawing more is
> faster.
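>
> From memory, the logic is something like the sketch below (the member
> names are illustrative, not the literal source):
>
>     void Screen::flush_updates() {
>         // Past ~200 dirty rectangles, tracking individual regions
>         // stops paying off, so fall back to one full-screen update.
>         if (m_dirty_rects.size() > 200) {
>             update_all();               // one big upload
>         } else {
>             for (const Rect &r : m_dirty_rects)
>                 update_rect(r);         // many small uploads
>         }
>         m_dirty_rects.clear();
>     }
>
> Forcing the full-screen path unconditionally (or lowering the
> threshold) would be a quick way to test whether updating everything
> really is cheaper on your machine.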
>
> - Daniel
>
> > On 10. Apr 2021, at 18:23, Andreas Lochmann <and.lochm...@googlemail.com>
> wrote:
> >
> > Hi everyone,
> >
> > I'm currently performing some experiments to improve Enigma's
> performance in the graphics department. For this, I measure the CPU time
> used to solve certain self-solving levels, particularly one with smooth
> scrolling, because this is our Achilles' heel right now.
> >
> > I noticed that Enigma uses less CPU time when something else runs in the
> background, like a web video. This is easily explained by the CPU frequency
> stepping up. However, the same effect seems to appear even when I merely
> activate/deactivate Enigma's own status bar (the one counting up the time
> and displaying the level title), and even under full CPU utilisation. Let
> me explain:
> >
> > I first launched several prime generators in the background, so my cores
> were 100% utilised and the CPU frequency was at its maximum. With the
> status bar activated, Enigma uses on average 13.05 CPU-seconds for a
> specific task. When I completely deactivate the status bar, the same task
> takes about 13.87 CPU-seconds -- even though drawing the status bar itself
> has to happen within those 13.05 CPU-seconds. (This is not a statistical
> fluke: each average is over four runs, and every single run WITH the
> status bar was consistently faster than every run WITHOUT it. Earlier,
> slightly different experiments all showed the same paradoxical behaviour.)
> >
> > How can it be that drawing the status bar leads to *less* CPU time? Is
> something like a CPU-side cache of video memory a thing? (Remember that
> Enigma relies on software rendering for its graphics.)
> >
> > Maybe one of you knows what's going on here and can help me out?
> >
> > Because if this effect turns out to be real, drawing *more* on each time
> step might actually make Enigma faster. After all, we have lots of code
> trying to reduce the blit count.