Andrea Aime wrote:
> Some years ago rendering tests were made against simple,
> manually verified exemplars, that is, generated images that
> were stored on disk and then compared pixel by pixel against
> the result of a test render operation. In other words, it was
> checked that the rendered image was exactly like the one expected.
>   
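(For context, that exemplar check amounts to something like the sketch 
below: load the stored image with ImageIO and compare every ARGB value 
exactly. The class and method names are just illustrative, not the 
actual test code.)

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class ExemplarCheck {

        // Render result vs. exemplar stored on disk, exact pixel equality.
        static boolean matchesExemplar(BufferedImage rendered, File exemplarFile)
                throws Exception {
            BufferedImage exemplar = ImageIO.read(exemplarFile);
            if (rendered.getWidth() != exemplar.getWidth()
                    || rendered.getHeight() != exemplar.getHeight()) {
                return false;
            }
            for (int y = 0; y < rendered.getHeight(); y++) {
                for (int x = 0; x < rendered.getWidth(); x++) {
                    // exact ARGB match: any anti-aliasing or font
                    // difference between platforms makes this fail
                    if (rendered.getRGB(x, y) != exemplar.getRGB(x, y)) {
                        return false;
                    }
                }
            }
            return true;
        }
    }
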
Three ideas:
- you can capture the font handling code by providing a custom labeler 
(like we do from uDig) and then just doing normal unit tests on the 
labeler to make sure it has collected the expected information
- make it a "mask" rather than a pixel-by-pixel check, just to account 
for anti-aliasing and to make sure content is drawn in the right area
- can we just do a histogram (at a very coarse level)? Even if 
anti-aliasing and font differences occur we would expect a similar 
percentage of R, G and B out the other end (see the sketch after this list)
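
A rough sketch of that coarse-histogram comparison (the bucket size and 
the 5% tolerance are arbitrary choices here, not anything agreed on):

    import java.awt.image.BufferedImage;

    public class CoarseHistogram {

        // Bucket each channel into 4 coarse bins (0-63, 64-127, 128-191,
        // 192-255) so anti-aliasing and minor font differences barely
        // move the counts.
        static double[] histogram(BufferedImage img) {
            double[] bins = new double[3 * 4];
            for (int y = 0; y < img.getHeight(); y++) {
                for (int x = 0; x < img.getWidth(); x++) {
                    int rgb = img.getRGB(x, y);
                    bins[((rgb >> 16) & 0xFF) / 64]++;        // R
                    bins[4 + (((rgb >> 8) & 0xFF) / 64)]++;   // G
                    bins[8 + ((rgb & 0xFF) / 64)]++;          // B
                }
            }
            double total = img.getWidth() * img.getHeight();
            for (int i = 0; i < bins.length; i++) bins[i] /= total;
            return bins;
        }

        // Two renders "look alike" if no coarse bin differs by more than 5%.
        static boolean similar(BufferedImage a, BufferedImage b) {
            double[] ha = histogram(a), hb = histogram(b);
            for (int i = 0; i < ha.length; i++) {
                if (Math.abs(ha[i] - hb[i]) > 0.05) return false;
            }
            return true;
        }
    }

Coarse bins mean a stray anti-aliased edge or a slightly different font 
metric only nudges a bin by a fraction of a percent, while a missing or 
wrongly coloured feature moves it a lot.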

Your pain, however, is understood; Eclesia and acuster have both had the 
idea of a GeoToolsDemos app (similar to SwingDemos) that shows what the 
library can do - including rendering. Perhaps running an app like this 
would be a good way to handle the manual test at the end of the release 
process.

Cheers,
Jody

