Probably the most useful response I can give immediately is to say:

When I say OpenGL or 3D, I think they are both a bad idea. We don't need
3D, and when talking about 3D, we don't need to be contaminated with the
idea of adding OpenGL-like features. It's a bad idea layered on a bad
idea. Both are bad influences, because they jump to solutions before
the problems have been considered fully.

Porter-Duff compositing implies sub-pixel alpha blending and does not
imply 3D.
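To make that concrete, here is a minimal sketch of the Porter-Duff "over" operator on one 8-bit channel, assuming premultiplied alpha. The function name is mine, not anything from OGP:

```c
#include <stdint.h>

/* Porter-Duff "over" for one 8-bit channel, premultiplied alpha:
 * out = src + dst * (1 - src_alpha).  The +127 rounds to nearest
 * when dividing by 255. */
static uint8_t pd_over_channel(uint8_t src, uint8_t dst, uint8_t src_a)
{
    uint32_t t = (uint32_t)dst * (255u - src_a);
    return (uint8_t)(src + (t + 127u) / 255u);
}
```

Note there is no Z anywhere in this: compositing is purely a per-pixel blend, which is the whole point.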

For a long time I read everything that went by on this list, and my
impression is that you came out of the gates with the idea of doing 3D,
and that part was never really discussed in a meaningful way. The
alternative was too simple and badly defined to be meaningful. I
eventually stopped reading everything a month or two ago when I lost
hope in OGP's future.

The fact that you say everything I say implies "3D pipeline" to you
illustrates my point exactly. You are stuck in a box of
preconceptions. You have never been outside this box, and your proposed
solution is overcomplicated as a result.

My requirements imply several things to me:

1) Polygons
2) Layers
3) 2D
4) Compositing
5) An efficient representation for data that takes advantage of the
above, and allows efficient encoding of scale, rotation, translation and
depth for objects and their children.
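A sketch of what requirement 5 might look like as one record per object. The field and type names here are illustrative, not a proposal for an actual format:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical object record: a 2D transform (scale, rotation,
 * translation), an integer layer depth used only for visibility
 * ordering, and links so children inherit the parent's transform. */
struct obj2d {
    float sx, sy;        /* scale */
    float rot;           /* rotation, radians */
    float tx, ty;        /* translation */
    int16_t layer;       /* depth: ordering only, never scale */
    struct obj2d *child; /* first child */
    struct obj2d *next;  /* next sibling */
};
```

The point of the shape is that six floats and a small integer cover everything in the list above, with no per-vertex Z and no 4x4 matrices.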

There is a useful concept that I think needs to be kept in mind in
graphics:

The fastest way to do something is to not do it. This applies to how
graphics primitives should generally be implemented nowadays. Why copy
something when you can just read the data from the new address?
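A trivial illustration of the point, using an assumed surface type of my own invention: a full-surface "copy" that moves zero bytes by retargeting a pointer instead of calling memcpy:

```c
#include <stdint.h>

/* Hypothetical surface: a view onto pixel data it does not own. */
struct surface {
    const uint32_t *pixels;  /* ARGB pixels, not owned */
    int width, height;
};

/* "Don't do it" applied to a copy: the destination simply starts
 * reading from the source's buffer.  No pixel data moves. */
static void surface_adopt(struct surface *dst, const struct surface *src)
{
    dst->pixels = src->pixels;
    dst->width  = src->width;
    dst->height = src->height;
}
```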

Similarly, we need things to layer over other things, but we don't need
perspective correction. The Z axis implies visibility only, not scale,
and we can encode that information without requiring Z values on each
input polygon.
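One way to sketch that encoding: each object carries a single layer value, its polygons inherit it, and a one-time back-to-front sort replaces per-polygon Z entirely. The struct here is illustrative, and I assume larger layer values are nearer the viewer:

```c
#include <stdlib.h>

/* One layer value per object; individual polygons never carry Z. */
struct item {
    int layer;  /* plus per-object polygon data, omitted here */
};

static int by_layer(const void *a, const void *b)
{
    return ((const struct item *)a)->layer
         - ((const struct item *)b)->layer;
}

/* Painter's algorithm: sort once, draw in order, no Z-buffer. */
static void sort_back_to_front(struct item *items, size_t n)
{
    qsort(items, n, sizeof *items, by_layer);
}
```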

Similarly, we don't need texture mapping. Sure, we need some way to
represent rectangles of pixels, but use the word "texture" and you have
already jumped to a potentially suboptimal solution for our problem
space. If we can do sub-pixel rendering and handle a ridiculous number
of polygon fragments in parallel, it's a lot less painful to do what we
want, without ever resorting to texture maps. And we can do it, because
rasterization can be implemented in a way that requires no texture
reads, no Z-buffer reads, and little or no overdraw.
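As a sketch of the "no external reads" claim, here is a fragment-coverage routine whose inner loop touches only the triangle's own vertices: no texture fetches, no Z-buffer. The 4x4 sub-pixel grid is just one cheap way to approximate sub-pixel coverage, and all names are mine:

```c
/* Signed area of (b - a) x (p - a); positive when p is to the left
 * of edge a->b, assuming y-up coordinates. */
static float edge(float ax, float ay, float bx, float by,
                  float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

/* Coverage in [0,1] of the unit pixel square at (px,py) by the
 * counter-clockwise triangle a,b,c, estimated from a 4x4 grid of
 * edge-function tests.  No memory reads beyond the arguments. */
static float pixel_coverage(float ax, float ay, float bx, float by,
                            float cx, float cy, int px, int py)
{
    int inside = 0;
    for (int j = 0; j < 4; j++)
        for (int i = 0; i < 4; i++) {
            float sx = px + (i + 0.5f) / 4.0f;
            float sy = py + (j + 0.5f) / 4.0f;
            if (edge(ax, ay, bx, by, sx, sy) >= 0.0f &&
                edge(bx, by, cx, cy, sx, sy) >= 0.0f &&
                edge(cx, cy, ax, ay, sx, sy) >= 0.0f)
                inside++;
        }
    return inside / 16.0f;
}
```

Because each fragment depends on nothing but the triangle's vertices, any number of these units can run in parallel without contending for memory, which is exactly the parallelism argument below.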

We don't need to do 3D transforms, because we only have 2 dimensions for
any object, and a layer depth for that object, and 2D transforms are
enough. That's a lot fewer gates or operations.
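For a concrete cost comparison: a 2D affine transform is six numbers, and transforming a point costs 4 multiplies and 4 adds, versus 16 multiplies and 12 adds for a general 4x4 matrix applied to a 4-vector. A sketch, with illustrative names:

```c
/* Six-value 2D affine transform: 2x2 linear part plus translation. */
struct xform2d {
    float a, b, c, d;  /* column-major 2x2: [a c; b d] */
    float tx, ty;      /* translation */
};

/* 4 multiplies, 4 adds per point -- compare 16 and 12 for a 4x4. */
static void xform_point(const struct xform2d *m, float x, float y,
                        float *ox, float *oy)
{
    *ox = m->a * x + m->c * y + m->tx;
    *oy = m->b * x + m->d * y + m->ty;
}
```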

In other words, we can throw away a lot of what makes 3D rendering
"hard" in terms of gates and bandwidth, and implement some pretty
dramatic parallelism in the rendering stages because each rendering unit
doesn't refer to any external data.

And finally, most of the things that you mark as "trivial" are never
done, and it would be really nice to fix that, because the "trivial"
things are often the most important things when it comes to making a
smooth GUI. If we just implemented all the trivial stuff, we'd have
something better than anything else out there.

Hm, this is a lot more than I intended to say, but it should start the
ball rolling.

Cheers,
Ray

_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
