On 12/12/13 11:27 PM, Jim Graham wrote:


On 12/12/13 11:19 AM, Sergey Bylokhov wrote:
On 12/12/13 7:16 PM, Anton V. Tarasov wrote:
On 11.12.2013 14:38, Sergey Bylokhov wrote:
On 11.12.2013 13:18, Anton V. Tarasov wrote:
With Nimbus, at some moment, when the
nimbus.AbstractRegionPainter.paint(Graphics2D g, ...) method is
called, it is passed the graphics instance created by
JLF.createGraphics(), which derives it from the JLF's root buffered
image. Then, somewhere up the stack, the method calls
getImage(g.getDeviceConfiguration(),..),
Yes, correct. But you can create a double-sized surface for your
buffered image (in the same way as it was done for volatile) and
provide the correct DeviceConfiguration for it.
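Roughly like this (only a sketch; the class name and the hard-coded
factor of 2 are made up for illustration): the image keeps its logical
size, the pixels are allocated double-sized, and the graphics handed
out is pre-scaled, as is done for the retina volatile images.

    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    // Sketch only: a logically w x h backbuffer whose pixel store is
    // allocated at 2x, with a pre-scaled Graphics2D so painters keep
    // working in logical units.
    class DoubleSizedBackBuffer {
        private static final int SCALE = 2;   // hard-coded for the example
        final BufferedImage image;

        DoubleSizedBackBuffer(int w, int h) {
            image = new BufferedImage(w * SCALE, h * SCALE,
                                      BufferedImage.TYPE_INT_ARGB_PRE);
        }

        Graphics2D createGraphics() {
            Graphics2D g = image.createGraphics();
            g.scale(SCALE, SCALE);   // callers draw in logical units
            return g;
        }
    }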

Sergey,

It seems I didn't yet understand you. Could you please clarify? What
"double sized" surface do you mean for a BufferedImage, and what do
you mean by the "correct" DeviceConfiguration for it? (I can't put a
CGLSurfaceData into a BufferedImage).
When retina support was added to the volatile images, there were the
same problems with the mixing of "logical" and "real" sizes. The
volatile image (the logical view) was not changed, but its surface was.
When the volatile image is created we check the native scale and use it
to create the surface; then in SG2D we take the difference between the
image and its surface into account.
Why can't we do the same thing for OffScreenImage? We control when it
is created and when it is used, and the OffScreenImage is created for a
component, so we know what GraphicsConfigs should be used for the scale
check. Probably we can reuse/extract some of the code which was added
to the CGL surface?
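Something along these lines (only a sketch; the factory name is made
up, and the public getDefaultTransform() is used here merely as a
stand-in for the internal native-scale check the CGL code performs):

    import java.awt.Component;
    import java.awt.GraphicsConfiguration;
    import java.awt.image.BufferedImage;

    // Sketch only: the off-screen buffer is created for a known
    // component, so its GraphicsConfiguration can tell us what scale
    // to use for the backing pixels while the image stays logically
    // w x h.
    class OffScreenBufferFactory {
        static BufferedImage create(Component c, int w, int h) {
            GraphicsConfiguration gc = c.getGraphicsConfiguration();
            int scale = 1;
            if (gc != null) {
                // stand-in for the internal native scale check
                scale = (int) Math.ceil(gc.getDefaultTransform().getScaleX());
            }
            return new BufferedImage(w * scale, h * scale,
                                     BufferedImage.TYPE_INT_ARGB_PRE);
        }
    }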

The only real difference here is that BufferedImages have multiple definitions of width and height. For VolatileImage objects there is no view of the pixels, so the dimensions are just the layout size and the expected renderable area, and the returned graphics is pre-scaled. For BufferedImage we can do all of that, but the dimensions have the added implication that they define the valid range of values for getRGB(x, y) and for grabbing the rasters/data buffers/arrays and digging out pixels manually.

If it were just getRGB(x, y), we could simply do a 2x2 average of the underlying pixels to return an RGB value, but I don't think there is any amount of workaround we can apply to make digging out the rasters and storage arrays work for those who manually access pixels... :(
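For the getRGB(x, y) case, the 2x2 average would be roughly this
(sketch only: plain per-channel ARGB averaging over a 2x backing
store; a real version would have to deal with premultiplied alpha and
other pixel formats):

    import java.awt.image.BufferedImage;

    // Sketch only: map a logical (x, y) onto a 2x backing store and
    // average the four underlying pixels per channel.
    class ScaledPixelReader {
        static int getRGB(BufferedImage backingStore, int x, int y) {
            int a = 0, r = 0, g = 0, b = 0;
            for (int dy = 0; dy < 2; dy++) {
                for (int dx = 0; dx < 2; dx++) {
                    int argb = backingStore.getRGB(x * 2 + dx, y * 2 + dy);
                    a += (argb >>> 24) & 0xFF;
                    r += (argb >>> 16) & 0xFF;
                    g += (argb >>>  8) & 0xFF;
                    b +=  argb         & 0xFF;
                }
            }
            return ((a / 4) << 24) | ((r / 4) << 16)
                 | ((g / 4) <<  8) |  (b / 4);
        }
    }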
But I am talking about OffScreenImage (or we can add a new one), which is not public, so we can try to block/change such operations in our code. I am not sure that our backbuffers are leaked to the users.


            ...jim


--
Best regards, Sergey.
