Hi Roman,

Roman Kennke wrote:
This doesn't explain some methods that crept into this class (like getBounds and getNormalizingTransform). They should have been on GraphicsDevice.

Why? Isn't getBounds() used to indicate which part of the whole virtual
screen (Xinerama) a GC is representing? At least, this is what the API

That's the thing. GC doesn't represent a screen, virtual or otherwise. It represents one of several possible (color model-related) configurations of a GraphicsDevice, which represents a screen.

So you could create a Window using one of those graphics configurations, and it will happen to be on the same screen the configuration belongs to.
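A minimal sketch of that relationship, assuming a runnable environment (the class name ListConfigs is just illustrative): each screen is a GraphicsDevice, each device exposes one GraphicsConfiguration per supported visual, and a window built with a particular configuration lands on that configuration's screen.

```java
import java.awt.Frame;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;

public class ListConfigs {
    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) {
            // Without a display (e.g. no X server) there are no screen devices.
            System.out.println("headless environment");
            return;
        }
        GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
        // One GraphicsDevice per screen; each device has one or more
        // GraphicsConfigurations (one per supported visual / pixel format).
        for (GraphicsDevice gd : ge.getScreenDevices()) {
            for (GraphicsConfiguration gc : gd.getConfigurations()) {
                System.out.println(gd.getIDstring() + " : " + gc.getColorModel());
            }
        }
        // A window created with a particular configuration ends up on the
        // screen that configuration belongs to.
        Frame f = new Frame("demo",
                ge.getDefaultScreenDevice().getDefaultConfiguration());
        f.dispose();
    }
}
```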

docs tell me. I suppose in the ideal world we'd need getBounds() in both
the GD (the bounds of the whole virtual screen) and the GC (the bounds
of the sub-screen). Ok, in an ideal world, Xinerama would be represented
by a different class than GC, I suppose...

In an ideal world we wouldn't have had bounds in GCs at all. GCs represent different visuals; they have nothing to do with the screens' geometry.

So suppose you have an X11 system with a single board driving two screens (no Xinerama), a 32-bit IntBgr default visual, and a board capable of simultaneously displaying 8-bit grayscale windows.

But also imagine that this system supports changing display modes to 16 bit (I hope you have a good imagination!), and different resolutions.

   Then you'll have two GDs with two GCs each (32-bit Bgr and 8-bit grayscale).

The DisplayMode[] will have entries for 32 and 16 bit modes, and whatever different resolutions you have.
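A sketch of what that DisplayMode[] might look like for the imaginary board above; the class name ModeTable and the particular resolutions are made up for illustration. DisplayMode is a plain value class, so this runs without any display attached.

```java
import java.awt.DisplayMode;
import java.util.ArrayList;
import java.util.List;

public class ModeTable {
    // Hypothetical mode list: 32- and 16-bit depths crossed with
    // two made-up resolutions, giving four entries.
    static List<DisplayMode> modes() {
        List<DisplayMode> list = new ArrayList<>();
        int[][] resolutions = { { 1280, 1024 }, { 1024, 768 } };
        int[] depths = { 32, 16 };
        for (int[] r : resolutions) {
            for (int depth : depths) {
                list.add(new DisplayMode(r[0], r[1], depth,
                                         DisplayMode.REFRESH_RATE_UNKNOWN));
            }
        }
        return list;
    }

    public static void main(String[] args) {
        for (DisplayMode m : modes()) {
            System.out.println(m.getWidth() + "x" + m.getHeight()
                               + " @ " + m.getBitDepth() + " bpp");
        }
    }
}
```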

Ok, this makes sense. So in my case, where I can display windows in only
one color model at a time (and of course no Xinerama on this poor
embedded box), I'd only return one GC at any time. Good.

  Yes.

have GD.getDisplayModes(). And GC is also used for Xinerama support,
right? So what does GC represent? I am implementing on an embedded
platform right now, I only have one screen, but a couple of different
settings, and would like to know what is correct here. Is there a
relationship between the list of display modes and the list of GCs? If
No. The list of display modes is the list of possible modes the graphics device can be switched to; that includes different resolutions as well as bit depths. (See below for more.)

Ok, so this is what I assumed the GCs are for. Good to know.
A mode's bit depth makes less sense if your device can display windows with different bit depths simultaneously, so there isn't one "desktop bit depth"; that's why there's DisplayMode.BIT_DEPTH_MULTI to indicate that case.

I think I mean something different. Suppose your graphics board is
capable of using two different resolutions (1024x768 and 800x600) in two
different color modes (RGB565 and RGB556), though not simultaneously.
That would make 4 DMs. The problem I see is that bitDepth is not
expressive enough to differentiate between RGB565 and RGB556: both use 16 bits.
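The ambiguity can be demonstrated directly (the class name is made up): DisplayMode only carries width, height, bit depth, and refresh rate, so two 16-bit formats with different channel layouts collapse into modes that compare equal.

```java
import java.awt.DisplayMode;

public class DepthAmbiguity {
    public static void main(String[] args) {
        // Hypothetically one of these is RGB565 and the other RGB556, but
        // DisplayMode has no field that could record the difference.
        DisplayMode rgb565 =
            new DisplayMode(1024, 768, 16, DisplayMode.REFRESH_RATE_UNKNOWN);
        DisplayMode rgb556 =
            new DisplayMode(1024, 768, 16, DisplayMode.REFRESH_RATE_UNKNOWN);
        System.out.println(rgb565.equals(rgb556)); // prints true
    }
}
```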

  I see. Indeed, this situation wasn't foreseen.

  Thanks,
    Dmitri

Then: what is the default configuration? Is it just some random
   This is tricky.

   In _most_ cases it corresponds to the screen's default visual / pixel format.

Except for cases where the default visual is, say, 8-bit but a 32-bit visual is available. This happens on older SunOS systems with CDE, where the default visual is 8-bit. There are Sun adapters for SPARC which can display windows with different bit depths (and palettes, and gamma correction) simultaneously.
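In that situation one might prefer the deepest available visual over the default. A hypothetical helper (the name deepest and the class name are made up, not part of any AWT API) that picks the ColorModel with the most bits per pixel from a device's configurations (gc.getColorModel() for each):

```java
import java.awt.image.ColorModel;
import java.awt.image.DirectColorModel;

public class DeepestVisual {
    // Pick the color model with the most bits per pixel, e.g. to prefer
    // a 32-bit visual over an 8-bit default one.
    static ColorModel deepest(ColorModel[] models) {
        ColorModel best = models[0];
        for (ColorModel cm : models) {
            if (cm.getPixelSize() > best.getPixelSize()) {
                best = cm;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Stand-ins for the color models of two configurations.
        ColorModel rgb565 = new DirectColorModel(16, 0xF800, 0x07E0, 0x001F);
        ColorModel xrgb = new DirectColorModel(32, 0xFF0000, 0x00FF00, 0x0000FF);
        System.out.println(
            deepest(new ColorModel[] { rgb565, xrgb }).getPixelSize()); // prints 32
    }
}
```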

Ok, I guess when I only ever have one GC (see above) that's easy now :-)

What is (and how to use) GD.getBestConfiguration()? The
GraphicsConfigTemplate class seems pretty useless to me.
   It is. =) I'm not sure anyone uses this stuff.

Haha, good. :-)

Thanks a lot for clearing this up Dmitri. Oh, it means I have to rework
lots of code. THANKS! ;-)

/Roman

