On Wednesday, 8 January 2014 at 11:34:53 UTC, Mike Parker wrote:
Rendering to a memory buffer to generate png images is a legitimate use case. If Phobos has a graphics API, I would expect that to be supported even when no gpu is present.

Yes, this is true, but that was not the goal stated at the start of the thread. The linked framework is a wrapper for a hodgepodge of graphics technologies that target real-time graphics (an internal framework developed to do graphics for advertising, I think).

A generic non-real-time graphics API capable of generating PDF, SVG and PNG would be quite useful in web services, for instance. But then it should be based on a graphics model that can be represented efficiently in PDF and SVG.

However, if you want interactive graphics, you enter a different domain. An engine that assumes all geometry changes every frame is quite different from an engine that assumes most graphics do not change beyond simple affine transforms.

If you decide that most surfaces do not change (beyond affine transforms) and want a portable graphics solution, you either write your own compositor (in D) on top of the common GPU model, or you use an engine that provides a hidden compositor (like SVG, and even Flash).

With a compositor you can let your "non-real-time" graphics API write to surfaces that are used by that compositor. Thus, the real-time requirements for graphics primitives are much lower. But it is more work for the programmer than using a high-level retained-mode engine such as SVG.
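To make the split concrete, here is a minimal sketch (in D) of that division of labor: slow drawing code fills pixel buffers at its leisure, and a compositor recombines the cached buffers every frame. All names here (Surface, Compositor, compose) are hypothetical, the transform is reduced to a translation for brevity, and there is no alpha blending.

```d
// Hypothetical sketch: surfaces are plain RGBA buffers that a slow,
// non-real-time drawing API renders into; only composition runs per frame.
struct Surface
{
    int width, height;
    uint[] pixels;   // RGBA pixels, redrawn only when the content changes
    int x, y;        // "affine transform" reduced to translation for brevity
}

struct Compositor
{
    Surface*[] layers;

    // Composite all layers into the target buffer. This is the only code
    // with per-frame timing requirements; layer contents are reused as-is.
    void compose(uint[] target, int tw, int th)
    {
        target[] = 0xFF000000;   // clear to opaque black
        foreach (s; layers)
            foreach (row; 0 .. s.height)
                foreach (col; 0 .. s.width)
                {
                    immutable tx = s.x + col, ty = s.y + row;
                    if (tx >= 0 && tx < tw && ty >= 0 && ty < th)
                        target[ty * tw + tx] = s.pixels[row * s.width + col];
                }
    }
}
```

Moving a layer is then just changing its transform (here x/y) and recomposing; the expensive drawing API is never re-invoked.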

However, the argument against shaders/GPU does not hold, I think. Using simple shaders (with your own restricted syntax) does not require a GPU. If you can parse the shader at compile time, you should be able to generate D code for it, and you should also be able to generate code for GL/DX at runtime quite easily (probably a few days of work).
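The compile-time path falls out of D's string mixins: a CTFE-evaluated function turns the shader source into D source, which the compiler then compiles like any other code. A minimal sketch, where the "translator" is a stub that assumes the restricted shader syntax happens to be a valid D expression over x and y (a real one would parse it properly):

```d
// Hypothetical toy translator: maps a per-pixel shader expression
// to D source. Runs at compile time via CTFE when used in a mixin.
string generateShader(string expr)
{
    // A real translator would parse `expr` against the restricted
    // shader grammar; here we splice it in unchanged for brevity.
    return "ubyte shade(int x, int y) { return cast(ubyte)(" ~ expr ~ "); }";
}

// The shader is compiled to native D code, no GPU involved.
mixin(generateShader("(x + y) & 0xFF"));

void main()
{
    assert(shade(100, 100) == 200);
}
```

The same front-end parse could instead emit GLSL/HLSL text for the GL/DX runtime path, which is why supporting both back-ends is plausible in a few days of work.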
