Hi Laurent,
I think you've looked into the issues of flatness and how inflection points affect it about as much as I have at this
point. I'm not sure that sub-dividing at min/max values helps filling, but a way to subdivide at inflection points
might improve the flattening algorithm. It's worth a try.
I believe the original AFD was intended to be used in the inner loop to render each pixel rather than as a flattening
metric the way we are using it. Another concept might be to have the scanline converter that currently only deals with
lines support quads and cubics as well so we don't have to flatten them (though subdividing to avoid extreme rates of
change in the step size might still be called for). It would complicate the code that increments the cur_x on each
sample line, but it would reduce the number of segments we'd have to store. Also, we might want to do subdivision at
min/max of Y values in that case so that the Y values are monotonic.
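To sketch the split I mean (a hedged illustration with hypothetical helper
names, not code from Pisces/Marlin): a quad's Y'(t) is linear, so there is at
most one interior extremum to split at.

    // Parameter of the quad's Y extremum, or -1 if none lies inside (0,1).
    static double quadYExtremumT(double y0, double cy, double y1) {
        double denom = y0 - 2.0 * cy + y1;
        if (denom == 0.0) {
            return -1.0;                 // Y'(t) is constant: already monotonic
        }
        double t = (y0 - cy) / denom;    // root of Y'(t) = 0
        return (t > 0.0 && t < 1.0) ? t : -1.0;
    }

    // de Casteljau split of one quad coordinate (p0, c, p1) at t:
    // left half = (p0, q0, m), right half = (m, q1, p1).
    static double[] splitQuadCoord(double t, double p0, double c, double p1) {
        double q0 = p0 + t * (c - p0);
        double q1 = c + t * (p1 - c);
        double m = q0 + t * (q1 - q0);
        return new double[] { q0, m, q1 };
    }

Splitting the X coordinates at the same t gives two halves whose Y values are
monotonic, which is what the per-scanline cur_x increment would want.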
With respect to float/double, I have another bug somewhere where we have a large inaccuracy for a very large circle that
intersects the viewing area along only a very tiny portion. The errors get especially bad with dashing because we
iterate each dash using an incremental, relative sub-divide rather than always
returning to the original curve to sub-divide from t1 to t2. I'll look it up
and send you a pointer so you can see how Marlin does with those paths. I
believe that I briefly modified the Pisces Dasher to simply use doubles and the problem went away, but I didn't do any
performance analysis and the newly accurate dashes no longer matched the still-float-based fills so more work was
needed. Modern processors tend to handle double precision natively, so aside
from storage considerations, double calculations are often as fast as (or
sometimes faster than) float ones, because results don't need to be cast back
to 32-bit float when the FPU always produces 64-bit double answers...
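As a hedged sketch of the "return to the original curve" idea (hypothetical
helpers, not the actual Dasher code): each dash [t1, t2] is cut directly out of
the original cubic, so the round-off from one dash never feeds into the next.

    // Split one cubic at t via de Casteljau; p = {x0,y0, x1,y1, x2,y2, x3,y3}.
    static void splitCubicAt(double t, double[] p, double[] left, double[] right) {
        for (int i = 0; i < 2; i++) {              // i=0: x coords, i=1: y coords
            double p0 = p[i], p1 = p[2 + i], p2 = p[4 + i], p3 = p[6 + i];
            double ab = p0 + t * (p1 - p0);
            double bc = p1 + t * (p2 - p1);
            double cd = p2 + t * (p3 - p2);
            double abc = ab + t * (bc - ab);
            double bcd = bc + t * (cd - bc);
            double mid = abc + t * (bcd - abc);    // point on the curve at t
            if (left != null) {
                left[i] = p0; left[2 + i] = ab; left[4 + i] = abc; left[6 + i] = mid;
            }
            if (right != null) {
                right[i] = mid; right[2 + i] = bcd; right[4 + i] = cd; right[6 + i] = p3;
            }
        }
    }

    // Cut the dash [t1, t2] out of the ORIGINAL cubic (assumes t1 < t2 < 1).
    static void cubicSegment(double[] orig, double t1, double t2, double[] seg) {
        double[] tail = new double[8];
        splitCubicAt(t1, orig, null, tail);        // keep [t1, 1] of the original
        double u = (t2 - t1) / (1.0 - t1);         // re-map t2 into the tail
        splitCubicAt(u, tail, seg, null);          // keep [t1, t2]
    }

The incremental approach instead re-splits the remainder of the previous split
over and over, so each dash inherits the accumulated error of all the earlier
ones.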
...jim
On 11/28/16 1:58 PM, Laurent Bourgès wrote:
Jim,
There is one thing I think you should look at, though, and I wasn't sure if
I should file a bug. If you've downloaded the Rasterization verifier from
the old JBS bug for non-AA rendering, try running it with the 4quads mode
(-quad argument I think). It looks like it averages slightly more failures
on Marlin than jpisces or npisces. (3 bad pixels per shape rather than 2 -
or 4 vs. 3 - I forget the exact numbers). This may simply be a difference
in the DEC_INC_BND settings and an easy fix. File a bug if you find
something to fix...
I made tests related to nonAA quality and the related quad decimation
thresholds:
The quad decrement threshold seems a bit too high => lowering it to 0.5
subpixel leads to 2.07 error:
OpenPisces:
bad paths: 78759/100000 == 78.76%, 177046 bad pixels (avg = 2.25), 6524
warnings (avg = 0.08)
MarlinFX:
bad paths: 90332/100000 == 90.33%, 288202 bad pixels (avg = 3.19), 6702
warnings (avg = 0.07)
MarlinFX with DEC_BND=0.5:
bad paths: 77342/100000 == 77.34%, 164702 bad pixels (avg = 2.13), 6649
warnings (avg = 0.09)
So the trivial fix consists of lowering the threshold.
HOWEVER, it led me to take another look at the algorithmic approach:
The (Sun) paper (Lien 87) describes this approach (AFD) as a way to draw curves
pixel by pixel, so the original thresholds were defined as 1 / 0.5 pixels, and
it indicates that this algorithm generates at least 1 point per pixel... so it
is not tied to any error bound, and it is not really the right approach for
minimizing the number of segments when the curve is "flat".
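To make that concrete, here is a minimal sketch (illustrative only, not the
Marlin code) of plain AFD stepping on one cubic coordinate, with incBnd ~ 1
pixel and decBnd ~ 0.5 pixel playing the role of the DEC_INC_BND thresholds:

    // f(t) = a*t^3 + b*t^2 + c*t + d0, walked with forward differences at
    // step h; end-of-curve clamping omitted for brevity.
    static void afdWalk(double a, double b, double c, double d0,
                        double h, double incBnd, double decBnd) {
        double f = d0;
        double d1 = a * h * h * h + b * h * h + c * h; // 1st forward difference
        double d2 = 6 * a * h * h * h + 2 * b * h * h; // 2nd forward difference
        double d3 = 6 * a * h * h * h;                 // 3rd forward difference
        for (double t = 0.0; t < 1.0; t += h) {
            while (Math.abs(d1) > incBnd) {            // move > ~1px: halve h
                d3 *= 0.125;
                d2 = 0.25 * d2 - d3;
                d1 = 0.5 * (d1 - d2);
                h *= 0.5;
            }
            while (Math.abs(d1) < decBnd && h < 0.5) { // well under 1px: double h
                d1 = 2.0 * d1 + d2;
                d2 = 4.0 * (d2 + d3);
                d3 *= 8.0;
                h *= 2.0;
            }
            System.out.println(f);     // stand-in for emitting the next point
            f += d1; d1 += d2; d2 += d3;               // advance one AFD step
        }
    }

The step-size rules only look at |d1|, i.e. the displacement per step, so they
bound how far each point is from the previous one, never how far the chord
strays from the true curve.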
Moreover, this test performs path fills (no stroke), so I have several
questions about the algorithmic approach:
- the AFD thresholds are in fact bounds on delta X/Y (i.e. speed) and ensure
small (subpixel) displacements unrelated to any error measure (ROC?) between
the segment and the curve => it will generate many more segments than needed,
except where the curve has cusps or inflections (see the AFD sketch above)...
- subdividing curves may be a more appropriate approach (AGG, Qt...), as there
are several flatness tests (intermediate-point distance, like in
ShapeSpanIterator.c) to ensure the curve stays under control, i.e. within the
tolerance (see the subdivision sketch after this list)
- in Stroker, the curve is subdivided at cusps, inflection points and roots to
obtain monotonic curves, but in the case of path fills the curve is processed
directly by the Renderer: maybe we should first subdivide the curves at those
extrema and then use the current approach to improve the accuracy near these
special points.
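As an illustration of the subdivision approach, a minimal sketch (not the AGG
or Qt source): recursively split the cubic until both control points lie
within the tolerance of the chord, which bounds the segment-to-curve error
directly.

    // Emits the endpoint of each chord into out (the caller emits the start
    // point); degenerate chords/cusps and a recursion-depth cap are omitted.
    static void flattenCubic(double x0, double y0, double x1, double y1,
                             double x2, double y2, double x3, double y3,
                             double tol, java.util.List<double[]> out) {
        double dx = x3 - x0, dy = y3 - y0;
        // control-point distances from the chord, scaled by the chord length
        double d1 = Math.abs((x1 - x0) * dy - (y1 - y0) * dx);
        double d2 = Math.abs((x2 - x0) * dy - (y2 - y0) * dx);
        if ((d1 + d2) * (d1 + d2) <= tol * tol * (dx * dx + dy * dy)) {
            out.add(new double[] { x3, y3 });  // flat enough: emit the chord
            return;
        }
        // de Casteljau split at t = 0.5, then recurse on both halves
        double ax = (x0 + x1) / 2, ay = (y0 + y1) / 2;
        double bx = (x1 + x2) / 2, by = (y1 + y2) / 2;
        double cx = (x2 + x3) / 2, cy = (y2 + y3) / 2;
        double abx = (ax + bx) / 2, aby = (ay + by) / 2;
        double bcx = (bx + cx) / 2, bcy = (by + cy) / 2;
        double mx = (abx + bcx) / 2, my = (aby + bcy) / 2;
        flattenCubic(x0, y0, ax, ay, abx, aby, mx, my, tol, out);
        flattenCubic(mx, my, bcx, bcy, cx, cy, x3, y3, tol, out);
    }

Here the segment count adapts to curvature automatically: flat spans collapse
to a single chord while cusps and inflections get refined.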
Finally, the curve segments are generated with floating-point math, so the
pixel accuracy is also strongly tied to how points are rounded onto the grid
(at pixel centers); this implies using a small tolerance to compensate for
rounding issues.
Jim, do you have advice on:
- how to improve curve accuracy while also minimizing the number of generated
segments? Which algorithm?
- which flatness test do you recommend?
- do you know of an adaptive AFD variant that determines the step size
according to the radius of curvature? (like in Graphics Gems I, p. 594, the
Wallis Tutorial on Forward Differencing)
Laurent