Hi,
the major blunder in http://jira.codehaus.org/browse/GEOT-3484
made me think again about how to test map rendering.

So far what we have is largely insufficient, as it's not really comparing
the maps we're generating with a known-good result.

I have thought long about this topic and explored different possibilities.
Comparing the rendered image with a known-good sample was
the first thing we tried... and I still bear the scars of that approach.
What happened back then is that we were comparing images pixel
by pixel, failing at the slightest difference.
Unfortunately that fell apart pretty soon due to differences in how
antialiasing and font rendering were performed on different platforms.
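
From memory, that comparison was essentially something like this (a
simplified sketch, not the actual code we had):

    import static org.junit.Assert.assertEquals;

    import java.awt.image.BufferedImage;

    /** The brittle, exact comparison: any difference in any pixel fails the test. */
    class ExactImageAssert {

        static void assertSameImage(BufferedImage expected, BufferedImage actual) {
            assertEquals(expected.getWidth(), actual.getWidth());
            assertEquals(expected.getHeight(), actual.getHeight());
            for (int y = 0; y < expected.getHeight(); y++) {
                for (int x = 0; x < expected.getWidth(); x++) {
                    // a single antialiased edge pixel rendered differently
                    // on another platform is enough to break this
                    assertEquals(expected.getRGB(x, y), actual.getRGB(x, y));
                }
            }
        }
    }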

There is not much to discuss here: exact pixel by pixel comparison
is dumb. We want the test to fail if a human would notice the difference
in the maps, that is, if the difference is actually relevant, and not otherwise.

Other approaches were discussed, such as passing in a mock graphics
object and recording the command sequence.
That's probably even dumber, because a change in the way we do
the rendering would make the test fail even if the output were
exactly the same.
Normally what we want to test is the result, not how we got there.

A similar approach could have been to test the output by dumping
the resulting graphics to some vector format. But again, that is no
good: if I switch text rendering from direct text drawing to text shape
extraction and fill (something that we actually do) I would get different
primitives in the vector dump. Again, what we care about is how the map
looks to a human, not how we got there.

In the specific case of GEOT-3484 the reprojection optimization
(when it works, that is) actually changes the pixel contents of the
result slightly, but not in a way a human could see.
Again, the criterion is really human perception of differences: I don't
care if the RGB composition of a pixel is slightly different, but I
care a lot if the reprojection is plain wrong, as in GEOT-3484.

Long story short, we really need a tool that compares images the way
a human would. Like PerceptualDiff:
http://pdiff.sourceforge.net/

That tool is exactly what we'd need to properly compare the rendered
result against the expected image. Triple bummer that it's a native
command line tool we cannot assume will be around.
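
Just to make the idea concrete, here is a rough sketch of how a test
helper could shell out to it. The class and method names are mine, and
I'm assuming the tool exits with code 0 when the two images are
perceptually identical, which should be double checked against the
real behaviour:

    import java.io.File;

    /**
     * Sketch of a helper shelling out to perceptualdiff. Assumes the tool
     * is on the PATH and exits with 0 when the images look the same.
     */
    class PerceptualDiff {

        static boolean similar(File expected, File actual) throws Exception {
            ProcessBuilder pb = new ProcessBuilder("perceptualdiff",
                    expected.getAbsolutePath(), actual.getAbsolutePath());
            pb.redirectErrorStream(true);
            Process p = pb.start();
            // drain the output so the process does not block on a full pipe
            while (p.getInputStream().read() != -1) {
                // discard
            }
            return p.waitFor() == 0;
        }
    }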

Yet, it's probably better to have tests using it than to have no such
tests at all, like we do now. Replicating it in Java is unfortunately hard.

So I ask you, what about starting to leverage that tool?
If it's on the path we use it, otherwise we skip that part of the test.
Hudson and the machines of the people interested in map rendering
tests would have it installed.
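
The skipping part could be little more than a JUnit assumption, something
along these lines (the test class is just an illustration, and the
availability check is a guess at how one could probe the PATH):

    import static org.junit.Assume.assumeTrue;

    import org.junit.Before;
    import org.junit.Test;

    public class RenderingComparisonTest {

        @Before
        public void checkPerceptualDiffAvailable() {
            // silently skip the perceptual comparisons on machines
            // that do not have the tool installed
            assumeTrue(isPerceptualDiffAvailable());
        }

        private boolean isPerceptualDiffAvailable() {
            try {
                // starting the process fails with an IOException if the
                // executable cannot be found on the PATH
                new ProcessBuilder("perceptualdiff").start().destroy();
                return true;
            } catch (Exception e) {
                return false;
            }
        }

        @Test
        public void testMapLooksTheSame() throws Exception {
            // render the map, then compare against the known-good image
            // using the PerceptualDiff helper sketched above
        }
    }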

Opinions?

Cheers
Andrea

-- 
-------------------------------------------------------
Ing. Andrea Aime
GeoSolutions S.A.S.
Tech lead

Via Poggio alle Viti 1187
55054  Massarosa (LU)
Italy

phone: +39 0584 962313
fax:      +39 0584 962313
mob:    +39 333 8128928

http://www.geo-solutions.it
http://geo-solutions.blogspot.com/
http://www.youtube.com/user/GeoSolutionsIT
http://www.linkedin.com/in/andreaaime
http://twitter.com/geowolf

-------------------------------------------------------
