Steven Sacks wrote:

> Common sense dictates that a 32 bit pixel takes more rendering
> computation than a 16 bit pixel or an 8 bit pixel.
The 32-bit pixel includes an 8-bit alpha channel, and that in itself takes more rendering time. Your graphics card actually has more impact on rendering time than the color depth does. All modern graphics cards do the rendering in hardware, of course, though that wasn't always the case.

I did some benchmarking while I was working at Disney, and we found that 24-bit graphics actually rendered faster than 16-bit. The underlying reason is simple--with 24 bits, each byte represents one color channel (red, green, or blue), so it's easy for the firmware on the graphics card to work with a straightforward 3 bytes per pixel.

16-bit graphics are another story. You still have to represent red, green, and blue. Basic math tells you that each color gets 5 bits--but what do you do with the 16th bit? A 1-bit alpha channel? Ignore it? Do 6-5-5, or 5-6-5? Different manufacturers, and different systems, treat it differently.

Down at the hardware level, the 16-bit graphic presents other challenges. You have to take the 16-bit word, peel off the relevant bits, and pop them into byte-sized registers. If you're familiar with assembly language, you know that involves a lot of ROR, SHL, and the like. It happens in hardware (firmware, really), so it's pretty fast, but 24-bit graphics are still easier to handle.

As for 8-bit graphics, they are really just 24-bit graphics with a limited range of colors (the palette). Instead of representing an actual color, each byte is an index into a palette, so there's an additional lookup per pixel that must be performed during rendering.

It's not so simple after all.

Cordially,

Kerry Thompson

_______________________________________________
Flashcoders mailing list
Flashcoders@chattyfig.figleaf.com
http://chattyfig.figleaf.com/mailman/listinfo/flashcoders