On Mon, Nov 04, 2002 at 07:52:05AM -0600, Stephen J Baker wrote:
> It might be interesting to consider doing some of the work of compositing
> in the graphics card - where the hardware supports it.
> The latest generations of nVidia and ATI cards have support for full
> floating point pixel operations and floating point frame buffers. If
> you stored each layer as a texture and wrote a 'fragment shader' to
> implement the GIMP's layer combiners, you'd have something that would
> be *FAR* faster than anything you could do in the CPU.
Note that you still have:
- Texture upload issues (a lot of data has to go to the card every time you
change a layer; think previewing here)
- Texture _size_ issues (most cards support `only' up to 2048x2048)
- Fragment shader length issues (although the NV30 and Radeon 9700 will both
support much longer shaders than today's cards)
- Limits on the number of textures (the Radeon 9700 has a maximum of 8 texture
coordinate sets and 16 textures; for GIMP use, one would probably be
limited to those 8, though)
- Some of GIMP's layer modes would probably be quite hard to implement in a
fragment shader (simple blends etc. would be fine, though)
- Problems with GIMP's internal tiling vs. the card's internal swizzling (if
one settles on OpenGL, which would be quite natural given that GIMP is most
common on *nix-based systems, one would have to `detile' the image into a
linear buffer, _then_ upload it to OpenGL)
Now, none of these are necessarily _real_ show-stoppers -- but I still think
implementing this would be quite difficult. I'm not really sure how well
GIMP's internal display architecture would fit this approach, either.
That being said, it could be an interesting project :-)
/* Steinar */
Gimp-developer mailing list