On Mon, 9 Dec 2002, Patrick McFarland wrote:
> > Notice that the latest series of graphics cards from nVidia and ATI
> > (and others) support floating point all the way through to the frame
> > buffer. This will mean that the 3D rendering community (games, simulation,
> > etc) will be very interested in floating point image processing and
> > storage in the very near future.
> > I would urge everyone to consider floating point pixels rather than
> > just going to 16 bit. This is a big change and you only want to make
> > it once.
> Erm, that's kinda cool, but unless we can access that framebuffer, it won't be
> useful to us. We'll still be stuck writing to the 8-bit per channel 2D
> framebuffer. (Now, of course, we could chop the final display image into GL
> textures, and display that, but that requires a spfp per channel GL texture
I'm not suggesting that this would be useful to GIMP - but that other
developers who are working in 3D using modern rendering hardware will
soon need support for 32 bit floating point texture maps.
So, I was pointing out that floating point imagery is soon going to
be important to many other user communities outside of the film industry,
and it follows that floating point images ought to be loadable, editable
and saveable from within mainstream GIMP.
IMHO, that's a better route to take than moving to 16 bit (or even 32 bit)
integer pixels.
> I've been asking for spfp per channel rendering for a totally different reason:
> not only can you have numbers above pure white (> 1.0) and below pure black
> (< 0.0), but you can properly use SSE to accelerate FP calculations (using gcc
> 3.2.x and up with -msse and -mfpmath=sse,387). On my Intel P3, apps that heavily
> used spfp math had a speed increase of 2x-4x, all due to the extra execution
> units chugging along.
You could use a modern graphics pipeline for that too - but it's a lot less
friendly to code for, and it won't port to all graphics cards - so it's
probably not something GIMP would want to make use of.
On something like an ATI Radeon 9700 or the upcoming nVidia GeForceFX,
you can create floating point texture maps - and use the incredibly
fast 'fragment shader' processor to composite, scale, rotate, perspect,
tile or otherwise process them into the floating point frame buffer,
then read that back into the CPU at the end. Whether that's faster
than doing it on the CPU alone depends on the complexity of the
per-pixel processing - for complex per-pixel operations, I'd expect
the graphics card to be able to beat the CPU - but for simple operations
the data transfer overheads into and out of the graphics card would
likely dominate.
The nVidia card also supports a 16 bit 'half float' format which would
be interesting for HDR.
> This helps with HDR too. HDR is a spfp per channel storage mode, used by
> several high-end industry apps. Among those is Lightwave, the famous 3D
> modeling/rendering application. With HDR enabled (and saving in an HDR format),
> you can alter the final image to bring out more detail from shadows and such
> just by altering the gamma ramp. Due to the huge amount of data, you wouldn't
> notice the difference between the "fixed" copy and the entire image re-rendered
> with new lighting settings to correct the "mistake": the least significant bit
> of data is still below even 16-bit per channel display modes.
There were a bunch of papers at SIGGRAPH last year about rendering
HDR images on a standard display without losing important visual information.
All interesting stuff.
Steve Baker (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work: [EMAIL PROTECTED] http://www.link.com
Home: [EMAIL PROTECTED] http://www.sjbaker.org
Gimp-developer mailing list