With multiple cores, real-time ray tracing becomes possible. But
it will still be useful IMNSHO to have a 'smart' framebuffer
that can do image resizing and compositing itself instead of
doing that work on the CPU. So the open-graphics card will still
be useful.

Implemented by video card developers? I doubt it. What is more
likely is general purpose CPUs built onto the graphics card, so
you can download a raytracer at runtime.

Agreed. This will be unlikely for quite some time, however, because modern CPUs are just too slow (that sounds weird... ;-)). Maybe once CPUs start having 16+ cores it will be viable for vendors, but for now most users are happy with the current rendering model. Raytracing produces really, really beautiful and lifelike images, but it requires a huge amount of repetitive per-pixel computation ==> not well suited to a serial CPU.

In terms of OGA/ODG, we should probably support enough of OpenGL that linux apps are reasonably fast. Modern CPUs are reasonable at some things, so if a subset of OpenGL is supported (on intermediate boards), as much computation as possible should run on the graphics card. I use ratpoison as my window manager, but people using Metacity or the like might see varying performance - we should probably test these out in the future (the very, very far future) to see how pleasing it is.

As for a smart framebuffer, I totally agree. Even the ability to quickly shade a 2D polygon in hardware can GREATLY speed up almost any graphics application. (That's not from any real experience with HW-accelerated polys, but I know from testing framebuffer graphics apps on different processors how much of a difference per-pixel speed makes. I am sure this scales up the GFLOPs nicely :-)) Of course, we should support more than that, but starting small is always nice.

nick
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
