Hi Denis,

Simpler is usually better in the long run. Many of these algorithms are
light-years faster than what we had been using before, and part of that
is due to their straightforward nature and very simple tests. Some of
the special-case code like what we had for CubicCurve (and maybe
QuadCurve) might beat them, but they are pretty generally fast otherwise.

In fact, one of my low-priority goals was to switch Area over to these
new methods and gut a lot of the complicated containment code out of it.
It is using what used to be our general-purpose containment code, but
when I created these new crossing methods for GeneralPath/Path2D, they
became the new, faster technology for general shapes. But the old code
had been designed for Area in the first place, so I didn't want to
disrupt it. My impression is that it might be faster if Area used the
new code, but I never got around to doing all of the perf testing.
Meanwhile, we carry 2 implementations around. If we do have Area start
using the new code, then special-purpose monotonic versions of the
methods would pay off, since Area enforces monotonicity on its paths, so
we already know they are monotonic when the method is first called...

                        ...jim

On 1/13/11 1:58 PM, Denis Lila wrote:
Hi Jim.

I implemented all of these. Unfortunately the only one that made
any measurable difference was

if ((y0 <= y1) == (ymid <= ymin)) {
    // Either right side up and first half is likely a fast rejection
    // or upside down and first half is possibly a reject
    do second half first
} else {
    do first half first
}

because I don't think that
Either way, it only saves a few tests for the branch that isn't taken.

is true. I mean, in the following case:
                __________
               /          \__
            **/************  \
            */*************   \
          __/**************    \
         /  ***************    |
(x1, y1)   ***************    |
            ***************    |
                              /
                            _/
                          _/
                         /
                     (x0, y0)

The old algorithm would inspect the right half of the curve first, and
then it would need to subdivide it and check the first and second
quarters of the original curve (and perhaps subdivide those too). A
heuristic along the lines of what I suggested (i.e. the test you gave)
would check the half closer to (x1,y1) first, it would subdivide that,
and in the next recursion level it would find that the common point of
those two quarters is inside the rectangle and return RECT_INTERSECTS.

So the version that orders the recursive calls gets away with 2
subdivisions, while the original version must do 3-5.
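To make the reordering concrete, here is a hedged Java sketch (the class and method names are mine for illustration, not the real Curve/Path2D internals): splitY() does a de Casteljau split of the Y ordinates at t = 0.5, and secondHalfFirst() is the heuristic test quoted above.

```java
// Hypothetical sketch of the call-ordering heuristic; names are
// illustrative, not the actual rectCrossingsForCubic internals.
public class OrderedSubdivision {

    // Split a cubic's Y ordinates at t = 0.5 (de Casteljau).
    // Returns {y0, a, d, mid, e, c, y1}: the first half is
    // (y0, a, d, mid) and the second half is (mid, e, c, y1).
    static double[] splitY(double y0, double yc0, double yc1, double y1) {
        double a = (y0 + yc0) / 2;
        double b = (yc0 + yc1) / 2;
        double c = (yc1 + y1) / 2;
        double d = (a + b) / 2;
        double e = (b + c) / 2;
        double mid = (d + e) / 2;
        return new double[] { y0, a, d, mid, e, c, y1 };
    }

    // The heuristic from the discussion: visit the second (high-t)
    // half first when either the curve is "right side up" and the
    // first half is likely a fast rejection, or it is upside down
    // and the first half is possibly a reject.
    static boolean secondHalfFirst(double y0, double y1,
                                   double ymid, double ymin) {
        return (y0 <= y1) == (ymid <= ymin);
    }
}
```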

But I'm not sure if we should use it because cases like the above are
pretty rare. In fact, although the "if ((y0 <= y1) == (ymid <= ymin))"
test improved case (2) by about 25%, it made the test with the
random rectangles and the loopy curve (from my previous e-mail)
about 15% slower.

Regards,
Denis.

----- Original Message -----
Hi Denis,

On 1/12/2011 2:33 PM, Denis Lila wrote:
Hi Jim.

I replaced that test with

if (!getBounds2D().contains(x, y, w, h)) {
    return false;
}

and that fixed both the bug and the performance of (1) without
making (3) worse.

It turns out that that test actually slows it down a bit. It was
helping us when we were using the PathIterator, but after the
introduction of the rectCrossings method you suggested, it's faster
in all cases (even (1)) to just do the general thing all the time.
I've updated the webrev with this.

Great news! After reading the previous email I was going to ask that
question, but you've already done the work.

Eliminating the PathIterator also had a large impact on the
intersect(rect) method. Now it's about 20% faster in cases (1) and (3).
(2) is still slower, but only by a factor of 2-3. Here's the webrev
for it:
http://icedtea.classpath.org/~dlila/webrevs/intersectsFix/webrev/

I'm not sure how frequently we run into case (2) in practice, and the
code being simpler and based on a mechanism that has survived testing
pretty well is a big win. I'm willing to go with this. Plus, if we find
ways to make the Curve methods go faster, then it will help a lot of
shape types.

Have you used more than one case to test the performance differential,
or did you find a single case that gave the intended result and then
run just that one case N times? If so, then perhaps the performance
differential is largely an idiosyncrasy of the particular test case?

Either way, I think the existing patch is a good compromise and
possibly close to being a fairly reliable win, depending on what kind
of performance testing you did.

The following is for further optimization experiments...

I think the reason (2) is slow is because rectCrossingsForCubic
recursively searches through subdivisions of the input curve starting
at t=0 and in increasing t. Do you think it would be worth it to switch
the order of the recursive calls depending on the distances of the two
subdivided curves relative to the rectangle (so that perhaps we would
get to the subdivided curve that crosses one of the rectangle sides
sooner)?

Not sure - how would you gauge "distance to the rectangle"? How about
this quick test:


What if we optimized the fast rejection cases (which would make all
test cases go faster) by trying to do some trivial min/max sharing for
the Y case rejections? Minimally, if the first Y rejection test finds
that y0 >= ymax, then there is no need to test y0 <= ymin in the second
set of rejection tests, so the following would cost no more than what
we do now:

// Assuming ymin strictly < ymax
if (y0 >= ymax) {
    if (all others >= ymax) {
        return crossings;
    }
    // No need to do ymin testing since the first test would fail
} else if (all <= ymin) {
    return crossings;
}

I'm not sure how many tests it might save in practice, but it would
never cost any more tests (and the compiler can't optimize it away
since it doesn't understand that we can require ymin < ymax as part of
the method contract).
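A complete version of that shared-max rejection might look like the following sketch (class and method names are mine; the real method accumulates into a crossings count rather than returning a boolean):

```java
public class SharedRejection {

    // Trivial Y rejection with min/max sharing, assuming ymin < ymax.
    // If y0 >= ymax, the y0 <= ymin test could never succeed, so the
    // whole ymin branch is skipped; this never costs more comparisons
    // than testing both sides unconditionally.
    static boolean rejectY(double y0, double yc0, double yc1, double y1,
                           double ymin, double ymax) {
        if (y0 >= ymax) {
            // Are all four ordinates at or past the ymax edge?
            return yc0 >= ymax && yc1 >= ymax && y1 >= ymax;
        }
        // y0 < ymax, so only the ymin-side rejection remains possible.
        return y0 <= ymin && yc0 <= ymin && yc1 <= ymin && y1 <= ymin;
    }
}
```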

Another solution:

if (y0 <= y1) {
    // y0 is above if y1 is above
    // y1 is below if y0 is below
    test y1, yc1 and yc0 above
    test y0, yc0 and yc1 below
} else {
    // reverse assumptions as above
    test y0, yc0 and yc1 above
    test y1, yc1 and yc0 below
}

(Note that it leads off every case above with a test of y0 or y1, since
those tests are testing 2 rejection points against the rectangle, but
the yc0 and yc1 tests only test a single point against the rectangle.)
It only eliminates a total of 1 test, though, since you still have to
test y0 against y1. You can take it another step further by comparing
yc0 against yc1:

if (y0 <= y1) {
    // y0 is above if y1 is above
    // y1 is below if y0 is below
    if (yc0 <= yc1) {
        // similar assumptions about yc0, yc1 ordering
        test y1, yc1 above
        test y0, yc0 below
    } else {
        test y1, yc0 above
        test y0, yc1 below
    }
} else {
    /* similar */
}

One downside with these "ordering the control points" approaches,
though, is that the minimum number of tests in the rejection portion
may go up, even if the max number goes down. It may simply be trading
off average for consistency.
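As a concrete sketch of that two-level ordering (my naming, not the real code; device coordinates, so "above" the rectangle means y <= ymin and "below" means y >= ymax):

```java
public class OrderedRejection {

    // Rejection tests ordered by comparing y0 vs y1 and yc0 vs yc1.
    // Within each branch, only the larger of each pair needs an
    // "above" (y <= ymin) test and only the smaller a "below"
    // (y >= ymax) test; the other point of the pair is implied.
    static boolean rejectY(double y0, double yc0, double yc1, double y1,
                           double ymin, double ymax) {
        if (y0 <= y1) {
            if (yc0 <= yc1) {
                return (y1 <= ymin && yc1 <= ymin)   // all above
                    || (y0 >= ymax && yc0 >= ymax);  // all below
            }
            return (y1 <= ymin && yc0 <= ymin)
                || (y0 >= ymax && yc1 >= ymax);
        }
        if (yc0 <= yc1) {
            return (y0 <= ymin && yc1 <= ymin)
                || (y1 >= ymax && yc0 >= ymax);
        }
        return (y0 <= ymin && yc0 <= ymin)
            || (y1 >= ymax && yc1 >= ymax);
    }
}
```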

Another idea: once a curve is monotonic in Y, we can do very fast
rejections. It might be worth testing for monotonicity (in Y mainly)
along with the above/below rejections, and switching to a faster
monotonic method when that case occurs:

if (y0 <= yc0 && yc0 <= yc1 && yc1 <= y1) {
    return rectCrossingsForMonotonicCubic(crossings, ...);
} else if (reverse monotonic tests) {
    return 0 - rectCrossingsForMonotonicCubic(0 - crossings,
                                              reverse curve);
}
// Standard y rejection tests...etc
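The payoff is that for a Y-monotonic cubic the rejection itself collapses to two comparisons, since monotone control ordinates give a monotone curve whose Y extent is exactly [min(y0, y1), max(y0, y1)]. A sketch under my naming (rectCrossingsForMonotonicCubic itself is not shown):

```java
public class MonotonicReject {

    // If the control ordinates are monotone in Y, the curve's Y range
    // is exactly [min(y0, y1), max(y0, y1)], so rejection against
    // [ymin, ymax] needs only two comparisons instead of eight.
    // Returns false when the curve is not Y-monotonic, signalling a
    // fall-through to the standard rejection tests.
    static boolean monotonicFastReject(double y0, double yc0,
                                       double yc1, double y1,
                                       double ymin, double ymax) {
        if (y0 <= yc0 && yc0 <= yc1 && yc1 <= y1) {
            return y1 <= ymin || y0 >= ymax;   // ascending in Y
        }
        if (y0 >= yc0 && yc0 >= yc1 && yc1 >= y1) {
            return y0 <= ymin || y1 >= ymax;   // descending in Y
        }
        return false;  // not monotonic: use the standard tests
    }
}
```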

...jim
