On 4/3/06, Vinicius Santos <[EMAIL PROTECTED]> wrote:
> On 4/3/06, Timothy Miller <[EMAIL PROTECTED]> wrote:
> > [snip]
> > That all being said, some or all of the Linux community will shift
> > over time to wanting programmable shaders and hardware vertex
> > processing. With products like OGC1 funding us, we're going to want a
> > piece of the game market. In the shorter term, I can't help but
> > wonder whether we couldn't produce a simpler design (probably one that
> > runs at a much higher clock rate) that is fully programmable. I
> > personally will have to try not to get distracted by it, but there's no
> > reason the community couldn't attempt to best the OGA1 design by
> > spec'ing out a programmable architecture.
> >
> > Until the next major OGD-related event, what do you say to spending
> > some time discussing some different ideas?
>
> The first problem seems to be prototyping a fully programmable GPU in
> an FPGA. It's practically a superscalar general-purpose processor, which
> my past research on FPGA mailing lists suggested is not feasible, but I
> am not an expert.
> But I like the idea, especially because programmable vertex/pixel
> shaders aren't important only for games, but for CAD (in a generic
> sense) as well. And let's not forget uni-verse ^_^
>
There are lots of tradeoffs in computer architecture. One of the ways that RISC and VLIW designs get their speed and simplicity is by trading off instruction compactness. VLIW is particularly bad, requiring lots of NOPs. I wonder if there isn't some way that we can be "wasteful" in order to get performance. How about adding gobs of memory to the card and somehow employing that? Maybe our shader microcode would be horribly space-inefficient in exchange for being reasonably fast?

So, as I understand it, there's a geometry processor. The geometry processor takes objects in world coordinates, transforms them into screen coordinates, and shades vertices. The vertex shader and the fragment shader could use similar languages. In between them is still some "legacy" stuff, like the rasterizer.

The major advantage that a GPU has is pipelining, so we don't want to eliminate that. But there's got to be a reasonable way to make the stages programmable. For instance, we could have 16 stages of the fragment shader executing different steps of a shader algorithm, and algorithms that required longer stretches of code would feed back into the top of the pipeline. Chunks of code would have to be fixed-length, so any decision-making would have to be based entirely on predication.

_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
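To make the predication idea concrete, here is a minimal sketch in Python of how a branch turns into a fixed-length sequence of predicated moves. The register layout, the `pmov` operation, and the values are all made up for illustration; this is a toy model, not any actual OGA microcode.

```python
# Toy model of predicated execution: every "instruction" in a fixed-length
# chunk runs unconditionally, and a predicate bit decides whether its result
# is committed to the register file. No instruction is ever skipped, so the
# chunk length (and thus pipeline timing) is constant regardless of the data.

def pmov(regs, dst, val, pred):
    """Commit val into regs[dst] only when pred is true. The computation of
    val has already happened either way; only the write-back is guarded."""
    regs[dst] = val if pred else regs[dst]

# Branchy pseudocode:   if r1 > 1.0: r0 = r1 * r2  else: r0 = r2
# Predicated, fixed-length version: both sides execute, exactly one commits.
regs = [0.0, 2.0, 5.0, 0.0]       # r0..r3, example values
p = regs[1] > 1.0                 # compute the predicate once
pmov(regs, 0, regs[1] * regs[2], p)     # "taken" path
pmov(regs, 0, regs[2], not p)           # "not taken" path
print(regs[0])  # -> 10.0
```

The cost is that both sides of every branch always occupy pipeline slots, which fits the "wasteful but fast" tradeoff above: no branch logic in the shader stages, at the price of executing work that gets thrown away.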
