Around 23 o'clock on Jun 23, Owen Taylor wrote:
> The really simple model is:
>
>  - All pictures contain sRGB information
>  - Compositing can be done either with or without gamma
>    correction as a binary switch.

I'd prefer an even simpler model which preserves as much color
resolution as the target device can provide:

 - All pictures contain RGB information directly displayable on the
   device.  Any color space conversion is done in the client.

 - Gamma correction comes from the XDCCC tables loaded in the server.
   The server may set default XDCCC tables from the DDC information,
   or a "gamma" value may be set in the server configuration.  Yes, I
   know most monitor gamma values are garbage.

 - Gamma correction can be turned off in each destination picture;
   it's on by default.

I don't see any advantage to adding complexity here; it's not the X
server's job to do color space conversions, it's just its job to make
sure the compositing works right, and these tables are clearly
required for that part of the puzzle.

> > 1) Are there other modifications to the compositing computation
> >    that would also be useful?
>
> I don't think so.  Yes, you could get more general - e.g., allow
> different forward and reverse transformations for the destination
> drawable.  But I don't see it as useful.

The question was intended to be broader than this; I'm interested to
know what other useful extensions we might make to the P/D compositing
algebra.

> At least 12 bits are required to do "linear sRGB" <=> sRGB
> conversion reasonably.  Perhaps going up to 16 bits makes sense for
> a round number; this would also give us headroom for 10-bit images.
> But this inflates the size of the lookup tables a lot.

The XDCCC tables provide linear interpolation to allow the lower end
of the tables to be compressed.  I'd much rather use 16 bits with
linear interpolation than 12 bits without.

> - What's the performance impact?
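The 12-vs-16-bit point can be made concrete.  Below is a small Python
sketch (my own illustration; the function and table names are not from
any X code, and the 256-entry table size is an arbitrary choice) of the
standard sRGB transfer function and a coarse encode table with linear
interpolation between entries:

```python
def srgb_to_linear(c):
    """Decode one sRGB component in [0, 1] to linear light (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):
    """Encode linear light in [0, 1] back to an sRGB component."""
    return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

# The darkest non-black 8-bit sRGB code decodes to a tiny linear value;
# distinguishing it from zero takes roughly 12 bits of linear precision.
darkest = srgb_to_linear(1 / 255)
print(1 / darkest)        # ~3295 distinct steps needed, i.e. a 12-bit floor

# A 16-bit-wide but coarsely sampled encode table, with linear
# interpolation between entries -- the same trick the XDCCC tables use
# to keep table sizes down:
N = 256
table = [round(linear_to_srgb(i / (N - 1)) * 65535) for i in range(N)]

def encode16(l16):
    """Map a 16-bit linear value to 16-bit sRGB via the small table."""
    pos = l16 * (N - 1) / 65535
    i = min(int(pos), N - 2)
    frac = pos - i
    return table[i] + (table[i + 1] - table[i]) * frac
```

With interpolation, even this uniform 256-entry table stays within a
couple of 16-bit steps of the exact curve over most of the range; it
only degrades near black, which is exactly where the compressed low end
of the XDCCC tables helps.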
> I think defaulting to sRGB for the intermediate space (no gamma
> correction) is necessary because gamma corrected isn't going to be
> hardware accelerated in most cases and isn't that amenable to
> software acceleration such as via MMX.

A related question is how current hardware exposes gamma-corrected
compositing; we need to make sure any model we expose can be met by
that hardware so that gamma-corrected compositing can be the default
for most people.

> It's possible that working in 16/16/16/16 linear sRGB might
> actually be easier for heavy compositing into destination alpha in
> some apps.

We don't have 64-bit pixels in the X server yet, and I also don't
want to expose multiple colorspaces to applications.

> - What does it mean for DirectColor visuals?  I guess deviceRGB
>   could be interpreted relative to the colormap for the drawable,
>   not just the visual but it sounds complicated.

I'd prefer to continue to treat DirectColor as if the RGB values
going in were in some more-or-less linear space; applications
attempting to use that color space deserve what they are about to
get.

Keith Packard
XFree86 Core Team
HP Cambridge Research Lab

_______________________________________________
Render mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/render
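For readers following the thread, what "compositing with gamma
correction" changes can be shown in a few lines.  A minimal Python
sketch (my own, with illustrative values; the helper names are not from
the server) of a 50% blend done naively on sRGB bytes versus in linear
light:

```python
def srgb_to_linear(c):
    """Decode one sRGB component in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):
    """Encode linear light in [0, 1] back to an sRGB component."""
    return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

def over_naive(src, dst, alpha):
    """Blend directly on 8-bit sRGB components (no gamma correction)."""
    return round(src * alpha + dst * (1 - alpha))

def over_linear(src, dst, alpha):
    """Decode to linear light, blend, re-encode (gamma-corrected path)."""
    s, d = srgb_to_linear(src / 255), srgb_to_linear(dst / 255)
    return round(linear_to_srgb(s * alpha + d * (1 - alpha)) * 255)

# 50% white over black: the two paths disagree noticeably.
print(over_naive(255, 0, 0.5))   # 128
print(over_linear(255, 0, 0.5))  # 188
```

The naive result, 128, carries only about 21% of white's linear light,
while the gamma-corrected result, 188, is a true half; that visible
difference is why the choice of default, and whether hardware can
accelerate the corrected path, is worth arguing about.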
