On Thursday, 9 January 2014 at 02:21:41 UTC, Adam Wilson wrote:
> Yes, it is supposed to provide 2D and 3D and ideally have both in the same window.

Then we should decide what the 2D surface properties are:

1. Are they z-ordered, or do we use the painter's algorithm (drawing back to front)? Basically: how should 3D and 2D blend into each other?

2. Are they considered transparent over the whole surface, or should their non-transparent parts be handled more efficiently?

3. Are they to be scaled as bitmaps, or are they to keep exact precision?
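
Just to make the decision space concrete, those three questions could be captured as a small set of options. All of the names below are made up for illustration, not an existing API:

// Hypothetical option types, only meant to make the three questions concrete.
enum Compositing  { zOrdered, paintersAlgorithm }    // 1: depth-tested vs. back-to-front
enum Transparency { wholeSurface, trackOpaqueParts } // 2: treat all as transparent vs. split
enum Scaling      { bitmap, exactGeometry }          // 3: scale a raster vs. re-tessellate

struct Surface2DPolicy
{
    Compositing  compositing;
    Transparency transparency;
    Scaling      scaling;
}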

If it is desirable to skip the compositor complexity, I'd say go entirely for triangular geometry and shaders in the first version and treat 2D the same way you treat 3D, but have shape objects so that you don't have to constantly transfer meshes to the GPU.
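
A "shape object" could simply be something that knows how to tessellate itself into triangles at a requested resolution, so the engine can cache the mesh instead of receiving raw vertex data every frame. A minimal D sketch (Shape, Vertex and Rect are my own names, nothing that exists yet):

// A retained shape: tessellated once per LOD, reusable every frame.
struct Vertex { float x, y; }

interface Shape
{
    // Produce a triangle list (3 vertices per triangle) at the given
    // level of detail; a higher lod means a finer mesh for curved shapes.
    const(Vertex)[] tessellate(int lod) const;
}

final class Rect : Shape
{
    float w, h;
    this(float w, float h) { this.w = w; this.h = h; }

    const(Vertex)[] tessellate(int lod) const
    {
        // A rectangle is LOD-independent: always two triangles.
        return [Vertex(0, 0), Vertex(w, 0), Vertex(w, h),
                Vertex(0, 0), Vertex(w, h), Vertex(0, h)];
    }
}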

Pre-rendering:
1. Build shape objects.
2. Build graphic contexts with various transforms and setups (colours, scaling).
3. Register shape objects with the engine, with an expected min-max LOD level (mesh resolution).
4. The engine preloads data to the GPU if desirable.
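
Continuing the Shape sketch above, the setup phase might look roughly like this. Engine, register and preload are assumptions about an API that does not exist yet:

// Hypothetical setup-phase API for steps 1-4 above.
alias ShapeId = uint;

struct LodRange { int min, max; }

final class Engine
{
    private Shape[ShapeId] shapes;
    private LodRange[ShapeId] lods;
    private ShapeId nextId;

    // Register a shape together with the mesh resolutions it is expected
    // to be drawn at, so the engine can decide what to cache.
    ShapeId register(Shape s, int minLod, int maxLod)
    {
        auto id = nextId++;
        shapes[id] = s;
        lods[id] = LodRange(minLod, maxLod);
        return id;
    }

    // Optionally tessellate up front and hand the meshes to the GPU backend.
    void preload()
    {
        foreach (id, s; shapes)
        {
            auto mesh = s.tessellate(lods[id].max);  // finest expected LOD
            // upload(id, mesh);  // backend upload left abstract in this sketch
        }
    }
}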

Per frame:
1. Get graphic contexts with various transforms and setups (colours, scaling).
2. Toss shape object IDs to the engine through a graphic context.
3. The engine queues transparent parts and renders non-transparent parts immediately.
4. The engine sorts the transparent parts and renders them.
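
The per-frame side, under the same assumptions (Renderer, GraphicsContext, submit and endFrame are invented names), could be sketched as:

import std.algorithm.sorting : sort;

alias ShapeId = uint;

struct Transform { float tx = 0, ty = 0, scale = 1; }
struct Colour    { float r = 1, g = 1, b = 1, a = 1; }
struct DrawCmd   { ShapeId shape; Transform xf; Colour colour; float depth; }

final class Renderer
{
    private DrawCmd[] transparent;

    // Step 3: opaque parts go straight out, transparent parts are queued.
    void submit(DrawCmd cmd)
    {
        if (cmd.colour.a < 1.0f)
            transparent ~= cmd;
        else
            rasterise(cmd);
    }

    // Step 4: sort the queued transparent parts back to front and draw them.
    void endFrame()
    {
        transparent.sort!((a, b) => a.depth > b.depth);
        foreach (cmd; transparent)
            rasterise(cmd);
        transparent.length = 0;
    }

    private void rasterise(DrawCmd cmd)
    {
        // In a real engine this would bind the cached mesh for cmd.shape
        // and issue the draw call; left empty in this sketch.
    }
}

// Steps 1-2: a graphic context bundles transform/colour and only forwards ids.
struct GraphicsContext
{
    Renderer renderer;
    Transform xf;
    Colour colour;
    float depth = 0;

    void draw(ShapeId shape)
    {
        renderer.submit(DrawCmd(shape, xf, colour, depth));
    }
}

The point is only that opaque geometry can go straight to the GPU, while translucent geometry is deferred, sorted once per frame and blended back to front.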

It will probably not be very fast for 2D, but it is the better starting point if you later want to mix 2D and 3D. Besides, by the time the framework is ready, most GPUs may well have shaders fast enough for this to be the best way to do it for larger shapes (then you can special-case smaller shapes later by drawing them to textures).
