In our previous episode, Lars said:
> > Could further discussion about FireMonkey please be moved to
> > fpc-other? I don't think it's still related to FPC...
> Fine, let me put it this way to keep it 100 percent on topic.
> 
> If FPC were able to create hardware-accelerated GUI apps (nothing to do
> with FireMonkey... just in general, if it were able to), would this be useful?

Well, first, most GUI applications are already accelerated to some degree.
Even if the application has no explicit knowledge of it, the OS may use the
GPU on its behalf.

> What advantages do hardware-accelerated GUI apps offer?

Aside from that, as an example, I implemented OpenGL support in my (work)
application simply to have smooth scrolling of large images without blocking
the main thread.  Low frame rates (5-10 fps; higher rates are limited), but
large images (2 Mpx mono up to 25 Mpx RGBA).
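Roughly, the idea boils down to the sketch below (not the actual work code;
the names are made up, and it assumes a current OpenGL context has already
been created elsewhere): the image is uploaded to the GPU once, and each
frame only draws a quad with shifted texture coordinates, so scrolling costs
almost no CPU.

  uses GL;

  var
    Tex: GLuint;

  procedure UploadImage(W, H: Integer; Pixels: Pointer);
  begin
    // One (slow) upload; afterwards the CPU no longer touches the pixels.
    glGenTextures(1, @Tex);
    glBindTexture(GL_TEXTURE_2D, Tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, W, H, 0, GL_RGBA,
                 GL_UNSIGNED_BYTE, Pixels);
  end;

  procedure DrawScrolled(OffX, OffY: Single);
  begin
    // Shifting the texture coordinates scrolls the image on the GPU;
    // here the visible quad shows half of the image in each axis.
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, Tex);
    glBegin(GL_QUADS);
      glTexCoord2f(OffX,       OffY);       glVertex2f(-1, -1);
      glTexCoord2f(OffX + 0.5, OffY);       glVertex2f( 1, -1);
      glTexCoord2f(OffX + 0.5, OffY + 0.5); glVertex2f( 1,  1);
      glTexCoord2f(OffX,       OffY + 0.5); glVertex2f(-1,  1);
    glEnd;
  end;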

FM and some others seem to stress effects and control animations though,
like the trend in GUIs was before Win8-10 made everything flat again.
Probably that is because such things are easier to bake into the API and
require less end-user ability and effort.

> If FPC were able to target hardware-accelerated GUIs (or if it already
> does), then what advantages would this offer people?  There must be some
> reason for hardware-accelerated GUI apps.

The main reason is to have a widget set that feeds the "real" GUI system in
a compositing-friendly way. In other words, if the screen needs to be
redrawn, the OS or X environment must be able to render the next scene
without spending additional CPU cycles in the application.

This is important on the desktop (which has been compositing since Windows
Vista, and on OS X since (afaik) 10.4), but even more so on mobile, since it
simply means that rendering the next image will run on the lower-clocked,
highly parallel GPU rather than the higher-clocked, not-so-parallel CPU.

But that is less about actually doing work on the GPU than about having a
GPU-friendly model/abstraction. You can do old-school rendering as long as
you pass the result to the OS/X server in a way it can composite easily
without additional events, read: through a modern API.  If the desktop then
creates a new arrangement of windows, it doesn't have to call back into the
application to render the image again (as e.g. Windows GDI does).
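To illustrate the difference with a minimal Lazarus/LCL sketch (assumed for
illustration, not from any real project): in the GDI-style model below the
OS calls back into the application on every expose, whereas in a composited
model the application would render once into an offscreen surface that the
compositor reuses.

  uses Classes, Controls, Forms, Graphics;

  type
    TMainForm = class(TForm)
      procedure FormPaint(Sender: TObject);  // assigned to OnPaint
    end;

  procedure TMainForm.FormPaint(Sender: TObject);
  begin
    // Classic GDI model: called again on every expose, move or resize;
    // the CPU redraws the whole scene each time.
    Canvas.Brush.Color := clWhite;
    Canvas.FillRect(ClientRect);
    Canvas.TextOut(10, 10, 'redrawn on every WM_PAINT');
  end;

  // In a composited model the application would instead render once into
  // an offscreen surface (e.g. an OpenGL texture), and the compositor can
  // rearrange, stretch or fade the window without calling back here.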

Of course anybody designing a new widget set will stress this point, just as
anything nowadays has to tick the CSS and JavaScript boxes.

But all that requires actual support from the OS or X server.

> But maybe it's to do with Flash-style animations, video-game-like
> interfaces for apps... 3D apps?

I think the core desirable principle is to support compositing, as in: no
application involvement is needed to render the same image slightly
stretched (e.g. when resizing a window), etc. You can of course extend this
to animation: instead of the desktop calling the application to render every
new frame, you pass the animation's frames (or a description of the
animation) at once and leave it to the desktop to render them optimally.
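A small sketch of that contrast (the per-frame part uses the real LCL
TTimer/Invalidate mechanism; the composited call at the end is purely
hypothetical, since no such FPC/LCL API exists today):

  uses Classes, Controls, ExtCtrls;

  type
    TFadePanel = class(TCustomControl)
    private
      FOpacity: Double;
      FTimer: TTimer;
      procedure TimerTick(Sender: TObject);
    end;

  procedure TFadePanel.TimerTick(Sender: TObject);
  begin
    // Per-frame model: the application is woken for every frame and
    // repaints itself, so the CPU stays busy for the whole animation.
    FOpacity := FOpacity - 0.05;
    Invalidate;
    if FOpacity <= 0 then
      FTimer.Enabled := False;
  end;

  // Composited model (hypothetical call, for illustration only): describe
  // the whole animation once and let the desktop/GPU produce the frames:
  //   Compositor.AnimateOpacity(Panel, 1.0, 0.0, 500 {ms});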

This leads in general to a more responsive GUI with lower CPU usage (mobile
again), even if the elements are relatively high-resolution and complex.

Of course the marketeers want to sell you some hyped-up vision of it, but
the core concept is sound.