2007/4/16, Attila Kinali <[EMAIL PROTECTED]>:
On Mon, 16 Apr 2007 17:36:25 +0200
"Nicolas Boulay" <[EMAIL PROTECTED]> wrote:
> > I don't see any interest besides the low power consumption, but at a
> > great design cost to match all the codecs.

> Being able to play a video at all?
>
> Ok. I miss HD.
>
> Ok. There are not that many videos out there that hit the
> CPU / Memory Bandwidth limit these days, but they exist and
> they are real.
> If you want to know more about why we need this, ask Dieter :-)

> > But what if we could use open source codecs?
> >
> > If the GPU includes a (fast) CPU, you can compile FOSS codecs and use
> > them directly on the board. You can imagine the same for an X server.

> Using an on-board general-purpose CPU on the graphics card will
> not give you any advantage at all. If a PC CPU is too slow, how
> do you expect to beat it with a CPU that you can put onto a graphics
> card without implementing half a PC on it?

Because you don't have the PC's history behind you?

> No, the only reasonable way to do that is to optimize the hardware
> for this specific task.

I was thinking of a generic CPU to avoid the need to create a good
compiler. But you could always add an IDCT in hardware, and the need
for a powerful CPU vanishes at the cost of a few modifications in the
codec used (does motion compensation need power too?). Besides, all of
this sits very close to all the graphical resources.
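To make the IDCT-offload idea concrete, here is a minimal sketch (my own illustration, not anything from the project; the function names are made up) of the separable 8x8 inverse DCT that an MPEG/JPEG-style codec would hand to such a hardware block:

```python
import math

N = 8  # JPEG/MPEG-era codecs work on 8x8 blocks

def idct_1d(coeffs):
    """Inverse 1-D DCT-II over 8 coefficients (the textbook O(N^2) form)."""
    out = []
    for x in range(N):
        s = 0.0
        for u in range(N):
            cu = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
            s += cu * coeffs[u] * math.cos((2 * x + 1) * u * math.pi / (2 * N))
        out.append(s)
    return out

def idct_2d(block):
    """Separable 2-D IDCT: a 1-D pass over the rows, then over the columns.
    This row/column split is what makes a small hardware implementation
    cheap: one 1-D unit plus a transpose buffer, used twice per block."""
    rows = [idct_1d(row) for row in block]
    cols = [idct_1d([rows[y][x] for y in range(N)]) for x in range(N)]
    return [[cols[x][y] for x in range(N)] for y in range(N)]
```

A DC-only block (all coefficients zero except `block[0][0] = 64`) decodes to a flat block of value 8, which makes a handy sanity check. Real decoders use a fast factored IDCT rather than this O(N^2) form, but the separable structure a hardware unit exploits is the same.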


> And your idea about the X server on a graphics card isn't new either.
> I've heard it at least two or three times in the last couple of years.
> However nice the idea might be, its specialization makes it
> commercially not feasible.
>
> If you don't believe me, have a look at the market for X terminals.
> That market has been pretty much dead for the last 5-10 years and has
> been replaced by thin clients that are mostly PC-based or some simple
> off-the-shelf embedded system.

> > Is it realistic?

> Technologically yes, commercially no.
>
>                         Attila Kinali
>
> PS: In case you wonder, I still think we should not do any
> video decoding in hardware and should leave everything to the software
> running on the CPU. Unless we really have too much space on the
> FPGA to waste.

Surely we could add this to the FPGA and test it?

Adding a true CPU with low-latency access to the graphics pipeline
looks nice to me. But I imagine its power will never be enough to be
interesting. So we could design a specific CPU that looks very much
like a vertex/pixel shader, but it will always be behind ATI's or
NVIDIA's :/

Besides that, in the 3D world, when I look at all the 3D games, they
always look the same. Or rather, their shortcomings are the same. The
number of triangles per scene stays the same, but textures get bigger
and shaders more complex. So around characters you can still see those
silhouettes that look as if they had been cut out by scissors at high
speed.

It was said that to improve visual quality with current algorithms,
the performance requirements grow exponentially. Is it possible to add
something not very big that could enable the use of primitives other
than triangles? Maybe NURBS are too complex to manage. Maybe spheres
could be used? I have seen such nice pictures made with metaballs, but
maybe metaballs are unthinkable in hardware. Ellipsoids? Like those
used in an old game whose name I forget, from the time when Direct3D
was not so widely used.
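For what it's worth, the metaball case is at least simple to state in software (this is my own sketch, not anything proposed on the list): each ball contributes a falloff field, and the rendered surface is an iso-contour of the sum. The hard part for hardware is the root-finding along each ray, not the field evaluation itself.

```python
def metaball_field(p, balls):
    """Scalar field at point p = (x, y, z); each ball is (cx, cy, cz, r).
    Classic inverse-square falloff: r^2 / distance^2. The surface is the
    set of points where the summed field equals a threshold (e.g. 1.0).
    Undefined exactly at a ball's centre (division by zero)."""
    return sum(r * r / ((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2)
               for (cx, cy, cz, r) in balls)
```

With threshold 1.0, an isolated ball renders as a sphere of exactly its radius (field r^2/d^2 = 1 at d = r), and nearby balls bulge smoothly toward each other, which is what gives metaballs their blobby look.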

So, something that could be used to avoid this sharp look of the
edges/borders of complex objects (avoiding the use of zillions of
triangles). Maybe the sphere could be a candidate: it looks easy to
manage in 3D, but it is only interesting if we can make spheres
interact nicely with each other and with triangles.
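Spheres are indeed about the cheapest curved primitive to ray-test; a sketch of the intersection math (my own illustration, not project hardware) shows why: one quadratic per ray, no tessellation at all.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Nearest intersection parameter t of the ray o + t*d with a sphere,
    or None if the ray misses or the hit is behind the origin.
    Expanding |o + t*d - c|^2 = r^2 gives a quadratic a*t^2 + b*t + c0 = 0."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c0 = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c0
    if disc < 0.0:
        return None              # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None
```

A ray from (0, 0, -5) along +z hits a unit sphere at the origin at t = 4 (the front face at z = -1), and the silhouette stays perfectly round at any zoom, with none of the scissor-cut edges that a triangle mesh shows.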

I know that we lack resources. I know that we are bound by OpenGL.
It was said that OpenGL only knows triangles. But that is the
chicken-and-egg problem: with no hardware to test on, nobody will try
to think differently.

Nicolas
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
