I've been looking at cleaning up pixmap format and visual initialization. I'm having some difficulty understanding why the code is currently doing what it's doing. Perhaps someone has an idea.
In the initial connection block, a DEPTH is a tuple of a depth and a list of visuals. A VISUALTYPE is a tuple of (visual-id, visual-class, rgb-masks, bits-per-rgb, colormap-entries). Note that a VISUALTYPE does not specify a depth.

The protocol contains the following enigmatic text: "A given visual type might be listed for more than one depth or for more than one screen." Assuming that "visual type" here means a VISUALTYPE - which seems pretty reasonable - the interpretation for more than one screen makes sense: the visual ID and all of its properties happen to be numerically equal on more than one screen, and this is never ambiguous because windows are always created relative to a particular root window.

I can't figure out, though, what listing a visual for more than one depth could possibly mean. Maybe if you make a depth-16 window on a visual that can also do depth-24, then PutImage will automatically do the obvious color expansion? (Or the reverse.) But if you have that, what do you put in rgb-masks to make it make sense for both depths? Maybe instead this is for PseudoColor hardware that wants to expose (say) both depth-4 and depth-8, and have the initial 16 colormap entries shared between the depth-4 and depth-8 visuals, so that apps only expecting depth 4 can work transparently?

More practically, the existing drivers - and every server I've ever seen - only ever have one depth per visual, and applications are by now quite prepared to deal with having tons of visuals, thanks to GLX and Composite. Would enforcing a 1:1 map from Visual to Depth be an implementation burden on the server side for any reasonable class of hardware?

- ajax

_______________________________________________
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel
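[Editor's note: as an illustration of the 1:1 check being proposed, here is a minimal sketch in Python. It models only the shape of the connection-setup data described above (per screen, DEPTHs as (depth, list-of-visual-ids) pairs); the function name and data are hypothetical, not part of any server or protocol code.]

```python
# Hypothetical model of the connection-setup data: one screen is a list of
# DEPTHs, each a (depth, [visual-id, ...]) pair. The other VISUALTYPE fields
# (class, masks, bits-per-rgb, colormap-entries) are omitted because only
# the visual ID matters for detecting a multi-depth listing.
def visuals_listed_under_multiple_depths(screen_depths):
    """Return {visual-id: sorted depths} for every visual ID that appears
    under more than one DEPTH on the same screen (i.e. violations of a
    1:1 Visual-to-Depth mapping)."""
    seen = {}
    for depth, visual_ids in screen_depths:
        for vid in visual_ids:
            seen.setdefault(vid, set()).add(depth)
    return {vid: sorted(ds) for vid, ds in seen.items() if len(ds) > 1}

# A screen exposing visual 0x21 at both depth 24 and depth 32 would be
# flagged; the depth-8 visual passes.
screen = [(24, [0x21, 0x22]), (32, [0x21]), (8, [0x30])]
print(visuals_listed_under_multiple_depths(screen))  # {33: [24, 32]}
```

Under the proposed 1:1 rule, a server would simply never generate setup data for which this returns a non-empty result.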