I’ve run into something that may be an issue.
The code to select a variant image in MultiResolutionToolkitImage looks like
this:
    public Image getResolutionVariant(int width, int height) {
        return ((width <= getWidth() && height <= getHeight()))
                ? this : resolutionVariant;
    }
I am using 1x and 2x image pairs, running 1.8u25 on OS X 10.10.
I ran into a problem with an image that contains a 1-pixel-tall horizontal line
in both the 1x and 2x variants. I tried to stretch it horizontally on a
non-HiDPI display by drawing it into an area whose height matched the (nominal)
image height but whose width was larger. The code above selected the 2x image
because of the larger width. The 2x image then had to be reduced along the Y
axis, and as a result the 1-pixel line was not visible.
The example, in case you haven't guessed, uses a nine-slice image to create
a border.
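
To make that concrete, here is a rough sketch of the drawing call involved (the
names are illustrative, not from my actual code):

    import java.awt.Graphics;
    import java.awt.Image;

    class BorderSketch {
        // topSlice is the top slice of the nine-slice border: an image containing
        // a 1-pixel-tall horizontal line, loaded together with its 2x companion.
        static void paintTopEdge(Graphics g, Image topSlice, int borderWidth) {
            int h = topSlice.getHeight(null);  // nominal (1x) height
            // The destination has the same height as the nominal image but a
            // larger width. The selection code above sees the larger width and
            // returns the 2x variant, which is then reduced by half along Y,
            // wiping out the 1-pixel line.
            g.drawImage(topSlice, 0, 0, borderWidth, h, null);
        }
    }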
The issue is how to trade off reduction against magnification. Using the 2x
image in this example means less magnification along one axis, but it requires
a reduction along the other axis that would not be needed if the 1x image were
used.
I realize that I can (with some effort) implement my own policy for this
example, but I’m wondering whether a more sophisticated default policy would be
a good idea. Although extra magnification might produce some blurriness,
reduction can lose fine details completely, so perhaps the default policy
should avoid unnecessary reduction.
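
To show the kind of policy I mean, here is a minimal sketch. It is written
against the public java.awt.image.AbstractMultiResolutionImage API rather than
the internal classes in 8u25, purely to keep the example self-contained; the
policy itself is the point: use the higher resolution variant only when it
would not have to be reduced along either axis.

    import java.awt.Image;
    import java.awt.image.AbstractMultiResolutionImage;
    import java.util.List;

    // Sketch of a "no unnecessary reduction" policy. Assumes both images are
    // fully loaded, so passing a null ImageObserver is fine.
    class NoReductionImage extends AbstractMultiResolutionImage {
        private final Image base;   // 1x image
        private final Image hiRes;  // 2x image

        NoReductionImage(Image base, Image hiRes) {
            this.base = base;
            this.hiRes = hiRes;
        }

        @Override
        public Image getResolutionVariant(double destWidth, double destHeight) {
            // Use the 2x variant only if it would not be reduced along either
            // axis; otherwise accept some extra magnification of the 1x image
            // rather than risk losing 1-pixel details to reduction.
            if (hiRes.getWidth(null) <= destWidth
                    && hiRes.getHeight(null) <= destHeight) {
                return hiRes;
            }
            return base;
        }

        @Override
        public List<Image> getResolutionVariants() {
            return List.of(base, hiRes);
        }

        @Override
        protected Image getBaseImage() {
            return base;
        }
    }

With this policy the 2x variant is still used for ordinary painting on a HiDPI
display, where the destination is at least as large as the 2x image on both
axes, but the stretched-border case above falls back to the 1x image and keeps
the 1-pixel line.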
Another way of saying this is that higher-resolution variants are provided to
make images look better, and they should not be used if they might make images
look worse.