On 4/28/07, Nicolas Boulay <[EMAIL PROTECTED]> wrote:
The card will be a fast fixed pipeline, without the shaders we see in
last-generation cards.
It was said that a "simple" shader is not necessarily easy, because you
need to serialise some operations that are now pipelined. So it's very
easy to end up with a slow card with shaders.
I remember, some years ago, when the trend was to replace Xorg. Some
guys used a Matrox card (because it had full specs). So four years ago I
could see mplayer with transparent rotating windows, etc. This was
possible because one guy decided to use every functionality of the
card and had the specs, not because the card respected some standard.
You can have OpenGL 2.0 functionality with any fully spec'd GPU; it's just
a matter of what is accelerated by the GPU and what is not. Mesa has an
alpha/test implementation of software glslang, but the issue with doing
graphics processing on a general-purpose CPU is speed.
Buffer objects (OpenGL 1.5?) implemented in GPU memory don't require a
programmable shader unit, and they are what makes the rotating windows
and effects of beryl/xgl possible. They just require that the pipeline
can efficiently interchange rasterised bitmaps and textures (i.e. render
to texture). Rendering to an off-screen part of the frame buffer and
uploading that to a texture unit is one (slow) way of doing it (I might
be wrong about the slow part). These do interfere with the pipeline,
since you add another set of data dependencies to it. Whether that is a
driver issue or a hardware issue depends on how it is implemented
efficiently.
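To make the two paths concrete, here is a toy sketch in plain Python (nothing OpenGL-specific; all names and sizes are invented). The slow path renders off-screen and then copies the pixels into a texture; the direct path rasterises straight into the texture's storage, skipping the copy:

```python
# A toy model of the two render-to-texture paths discussed above.
# Names and sizes are illustrative, not from any real driver.

def render(buffer, w, h):
    """Pretend-rasterise something into a w*h buffer (row-major list)."""
    for y in range(h):
        for x in range(w):
            buffer[y * w + x] = (x + y) % 256  # dummy pixel value
    return buffer

W, H = 4, 4

# Slow path: render off-screen, then copy the result into a texture
# (an extra pass over the data, and one more dependency in the pipeline).
framebuffer = render([0] * (W * H), W, H)
texture = list(framebuffer)          # the explicit copy step

# Direct path: rasterise straight into the texture's storage.
texture_direct = render([0] * (W * H), W, H)

assert texture == texture_direct     # same pixels, one copy fewer
```

The extra copy is also exactly the extra data dependency mentioned above: the texture cannot be sampled until the copy from the framebuffer has completed.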
The point might be completely moot if there is no specialized memory
(i.e. textures, vertex data, pixel data, and the frame buffer all reside
in the same memory hierarchy and have the same access time).
[Note that I'm using OpenGL terminology for the pipeline elements, but it
might be a little different in the current/intended design.]
Today all real-time 3D uses triangles. Games use more and more
textures and effects, but globally the number of triangles is stable. So
all games have the same kind of "look", with sharp-edged characters.
For some specialists, rasterised triangles are a dead end, because the
number of triangles needed grows exponentially compared to the quality
gained.
So we heard about raytracing, which scales much better, and about more
complex geometric primitives (NURBS, Bézier surfaces, ...). All of this
has been refused because we need OpenGL, and none of it fits OpenGL
well.
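A back-of-the-envelope sketch of the triangles-versus-quality trade-off (illustrative only; it assumes that perceived sharpness tracks the silhouette's deviation from the true curve). For a circle approximated by a regular polygon, the worst-case deviation is 1 - cos(pi/n), so shrinking the visible error gets progressively more expensive in segment count:

```python
import math

def silhouette_error(n_segments):
    """Max deviation of a regular n-gon from the unit circle it approximates."""
    return 1.0 - math.cos(math.pi / n_segments)

for n in (8, 16, 32, 64):
    print(n, silhouette_error(n))

# Halving the visible error costs roughly sqrt(2)x more segments here,
# so smooth silhouettes eat triangle budget quickly.
assert silhouette_error(32) < silhouette_error(16) / 2
```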
Raytracing and radiosity are geared toward photorealism. If we don't
want a full-blown 3D card for games, why do we want photorealism?
"Real time" is also a keyword in this issue, and the fact that we might
not have a 2D engine per se is one of the reasons to use industry
standards for the 3D engine (i.e. drawing a desktop with points, lines,
triangles and textures is not a new thing by now; doing it with spheres
seems to be... starting with texture mapping).
I imagine that supporting metaballs is out of the question, even though
they could be great for biological objects. I imagine using another base
object besides the triangle: a sphere. It avoids the sharp-edge problem,
and I imagine that spheres have enough mathematical parameters to ease
drawing, as triangles do.
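One reason the sphere is mathematically convenient as a primitive: a ray-sphere intersection has a closed-form solution. A sketch in Python (illustrative only, not a proposal for the hardware):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest hit distance t along a unit-length ray, or None on a miss.
    Solves the quadratic |o + t*d - c|^2 = r^2 in closed form."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c           # a == 1 for a unit direction
    if disc < 0.0:
        return None                  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0.0 else None

# A ray down +z from the origin hits a unit sphere centred at (0,0,5) at t=4.
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```

A handful of multiplies, adds and one square root per ray, with no tessellation at all, which is why the silhouette stays perfectly round at any zoom level.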
CPU-like features are out of the question because of the slow speed. So
maybe some parts of the pipeline could become more programmable, to
enable corner-case tricks. A few FMAC units add so much power compared
to a pure software approach that I can't imagine it is impossible to
find a way to extend our pipeline a little bit.
I am not a specialist in 3D. Maybe all of this needs too many changes to
be integrated into the current design.
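To give a feel for the FMAC point above: a single 4x4 vertex transform is exactly 16 multiply-adds, so each FMAC unit retires one useful term per cycle, where a plain soft core needs separate multiply and add steps. A sketch (plain Python with invented names, just counting the operations):

```python
def transform_vertex(m, v):
    """4x4 matrix times 4-vector, written so every step is one multiply-add.
    m is a list of 16 floats, row-major; returns (result, fma_count)."""
    out = []
    fmas = 0
    for row in range(4):
        acc = 0.0
        for col in range(4):
            acc = acc + m[row * 4 + col] * v[col]   # one FMA per term
            fmas += 1
        out.append(acc)
    return out, fmas

identity = [1.0 if i % 5 == 0 else 0.0 for i in range(16)]
result, fmas = transform_vertex(identity, [1.0, 2.0, 3.0, 1.0])
print(result, fmas)   # identity leaves the vertex unchanged; 16 FMAs
```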
The main problem with trying things like programmable shaders is
routing and bandwidth in the FPGA. I don't know what the state of FPGAs
is nowadays, but reading newsgroup archives[1] from 1995, it was not
feasible to implement superscalar, dynamically scheduled pipelined CPU
designs in an FPGA. More recent research in undergrad classes[2] deals
with very simple approaches using only a few integer units. So while we
can imagine testing it in the FPGA, we can't imagine doing it in an
ASIC (until we have the money to confidently prototype an ASIC from
simulation in existing tools :) )
[1]http://www.fpgacpu.org/usenet/superscalar.html
[2]http://homepage2.nifty.com/City-1/sbcci2003_abstract.html
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)