One thing the bug report doesn't seem to mention is that the tiles are
32x32 (it implies it by talking about the 1024 values being copied), and
that there is a function which quickly tells you whether a tile is all
0s or all 1s so the renderer can either skip or do quicker fills of
regions.
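A minimal sketch of such a tile classifier (the class and constant names here are hypothetical; the excerpt does not show the actual JDK function):

```java
// Hypothetical sketch: classify a 32x32 coverage tile so the renderer can
// skip fully-transparent tiles and fast-fill fully-opaque ones.
public class TileClassifier {
    public static final int SIZE = 32;          // 32x32 = 1024 values
    public static final int EMPTY = 0;          // all 0s: nothing to do
    public static final int FULL = 1;           // all 1s (0xFF): solid fill
    public static final int MIXED = 2;          // must blend per pixel

    public static int classify(byte[] tile) {
        byte first = tile[0];
        if (first != 0 && first != (byte) 0xFF) {
            return MIXED;
        }
        for (int i = 1; i < SIZE * SIZE; i++) {
            if (tile[i] != first) {
                return MIXED;
            }
        }
        return first == 0 ? EMPTY : FULL;
    }
}
```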
This is often a problem that can happen if you don't have a prototype
for the function. The default passing semantics for floating point is
to pass them as doubles unless there is a prototype that says that they
are floats.
Did you get the prototype correct, and did you make sure it was
Good to hear it as it avoids some hard decisions.
Per-pixel consistency is always the desired goal, but sometimes you have
to look at what is possible to accomplish with the APIs we depend on and
make some hard decisions. If it would take a workaround technique
that would take 100x as long
Clemens Eisserer wrote:
Extra alpha has the same behavior for all AlphaComposite instances. In a
nutshell, the extra alpha value gets logically multiplied with the source
before the actual compositing operation. The AlphaComposite docs explain
this process in great detail (look for the
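The "extra alpha multiplied into the source" behavior can be sketched as follows (a toy model with normalized premultiplied floats, not the actual compositing code; `srcOver` is a hypothetical helper):

```java
// Sketch (not the actual JDK implementation): the extra alpha of an
// AlphaComposite is logically multiplied into the source before the
// Porter-Duff rule is applied. Components are premultiplied floats [0..1].
public class ExtraAlphaDemo {
    // SRC_OVER with extra alpha ea: the source is scaled by ea first.
    public static float srcOver(float src, float srcA, float dst, float ea) {
        float s = src * ea;        // source component scaled by extra alpha
        float sa = srcA * ea;      // source alpha scaled by extra alpha
        return s + dst * (1 - sa); // standard SRC_OVER blend
    }
}
```

In the real API the extra alpha is supplied via AlphaComposite.getInstance(rule, alpha).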
Hi Roman,
This isn't well documented in SurfaceData.h, but there is a set of flags
SD_LOCK_NEED_PIXELS which indicates if you need to do the read. This
is used in X11SurfaceData.c, but that file has so many ifdefs and
alternate pixel access modes now that you really have to dig through it
Torsten Landschoff wrote:
Hi Jim,
On Tue, Feb 17, 2009 at 06:27:43PM -0800, Jim Graham wrote:
The width and height of a GraphicsConfig is essentially irrelevant
information. If you get the GraphicsConfig of a component, it doesn't
Why is there a method then to query irrelevant
This is almost there. A couple of points about the solution, though:
- If you skip the MOVETO then you need to make sure that you later emit
an lsink.moveTo otherwise the lsink object will complain about bad
state. If you look in ProcessPath.c you will see that a skip boolean
is set
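The deferred-MOVETO pattern described here can be sketched like this (a toy stand-in; the real code in ProcessPath.c feeds an lsink, which is replaced by a string log here):

```java
// Hypothetical sketch of the "skip" pattern described above: if a MOVETO
// is skipped, remember it and emit it lazily before the next drawing op
// so the downstream sink never sees a lineTo without a preceding moveTo.
public class DeferredMoveTo {
    private boolean skipped = false;
    private float pendingX, pendingY;
    public final StringBuilder log = new StringBuilder(); // stands in for lsink

    public void moveTo(float x, float y) {
        // Defer instead of emitting immediately.
        skipped = true;
        pendingX = x;
        pendingY = y;
    }

    public void lineTo(float x, float y) {
        if (skipped) {
            log.append("moveTo(").append(pendingX).append(',')
               .append(pendingY).append(");");
            skipped = false;
        }
        log.append("lineTo(").append(x).append(',').append(y).append(");");
    }
}
```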
That isn't currently possible, but it sounds like a useful thing to add.
One problem is that there are some types that know that they have
alpha, others that know that they do not, and still others which are too
general and may have alpha or may not, so how do you encapsulate that
information
into the case where
the usage in checkFontInfo is the only usage that does not occur from a
pipeline...
...jim
Mario Torre wrote:
Il giorno mer, 10/06/2009 alle 03.02 -0700, Jim Graham ha scritto:
What is the need for this fix? Is there a bug being fixed here other
My design goal was "doesn't corrupt anything if run from multiple
threads" (in other words, don't require synchronization in any common
places just to keep it functional and don't require it to be used from a
particular thread), but only "correct behavior if run from 1 thread at a
time."
In other
of the post-LCD text work... Phil?
...jim
Mario Torre wrote:
Il giorno lun, 15/06/2009 alle 13.37 -0700, Jim Graham ha scritto:
Hi Mario,
How are the drawGlyphList methods called when the loops is null? I ask
because they are only ever installed on the SunGraphics2D
That makes a lot of sense since I think the introduction of the
GlyphListLoopPipe in its current form was where the original methodology
of "always install loops if you plan to use loops" was first (and only
time) violated.
I think I raised the issue at the time and Phil pointed out that, in
. Or did
I get something wrong here?
/ Roman
Jim Graham wrote:
That makes a lot of sense since I think the introduction of the
GlyphListLoopPipe in its current form was where the original
methodology of "always install loops if you plan to use loops" was
first (and only time) violated.
I think
21:52, Jim Graham ha scritto:
One solution would be to always set the loops for the validations that
install one of these pipes. That could have potential performance
impact, but it would be no worse than the validation sequences that
already set the loops every time so I don't think it would
...
...jim
Mario Torre wrote:
Il 15/07/2009 23:41, Jim Graham ha scritto:
Numbers that small aren't statistically significant. Our J2DBench
benchmark calibrates each test to run a number of iterations that result
in at least 2.5 seconds of run time. Try upping your loop iterations by
a factor
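The calibration idea described above can be sketched as follows (a simplified stand-in, not the actual J2DBench code):

```java
// Sketch of the calibration idea: grow the iteration count until a
// timed batch takes at least the target duration, so that small timing
// noise becomes statistically insignificant.
public class Calibrator {
    public static long calibrate(Runnable work, long targetMillis) {
        long iters = 1;
        while (true) {
            long start = System.nanoTime();
            for (long i = 0; i < iters; i++) {
                work.run();
            }
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            if (elapsedMillis >= targetMillis) {
                return iters;
            }
            iters *= 2; // double until the batch is long enough
        }
    }
}
```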
Roman Kennke wrote:
http://cr.openjdk.java.net/~neugens/100068/webrev.06/
So the short story is that webrev.05 was actually better and we'd better
forget about webrev.06 at this point?
It also looks like the webrev.05 is better than a stock JDK - even more
promising!
Regarding webrev.05, I
I can't speak for the changes to the build files, but I only have a
couple of very small suggestions for the code changes.
Our coding style for continued class declarations would suggest using
the following indentation:
public class FooPipe extends SomeKindaPipe
implements LoopBasedPipe
If the NetBeans stuff is not due to changes I've made (i.e. it is due to
their overzealous "I'm going to rewrite all your build files for you
because you 'stupidly' chose to use a different version of NB than
everyone else in your project hahaha"), then I just revert the files
manually (either
redundancy.)
...jim
Mario Torre wrote:
Il 17/09/2009 22:27, Jim Graham ha scritto:
http://cr.openjdk.java.net/~neugens/100068/webrev.08/
:)
Mario
I could go two ways on this.
It looks like the code is looking to drop arrays that have grown so that
it doesn't waste memory. Do we reuse these objects? If not, then the
code can be deleted.
If we do reuse them, then why not just set them to null and let them get
recreated the next time
Andrew John Hughes wrote:
There's http://lists.x.org/archives/xorg-devel/2009-June/001242.html
but I avoided posting this in the original mail because it seems to
have changed again between that commit and the final release,
presumably due to compatibility issues (XShm.h is back and it's now
Yes, indeed, that all makes sense for your fix. I wasn't intending to
register an objection with the fix, I was just curious about the changes
they made which, as you say, seem quite convoluted...
...jim
Andrew John Hughes wrote:
2009/11/3 Jim Graham jim.a.gra
The thing that bothers me about this fix is that the value being
returned here is the raw computed value. All of the values in this
routine are being returned in floating point sub-pixel maximum
accuracy. I don't see why *this* code needs to round this value. If
something that uses the data
Phil's message brings up another issue for me with the patch. Why use
ROUND instead of a ceiling operation? Do we know what the best option
is for the code above?
Again, I would strongly favor leaving these base calculations in the
scalar alone and focus more on making sure the proper
The Ductus pipeline will do the same thing with STROKE_PURE. It sounds
like maybe the Pisces pipeline doesn't support STROKE_NORMALIZE yet?
...jim
Denis Lila wrote:
Hello.
I noticed that anti-aliased horizontal and vertical lines are
not drawn properly. I've
Hi Denis,
Here are my thoughts on it:
- Lines are affinely transformed into lines. The slope may be different
before and after the transform, but both have a single slope.
- The ratio of a line length to its transformed line length is a scale
factor that depends solely on the angle of the
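The second point can be demonstrated directly with AffineTransform.deltaTransform (the class name here is hypothetical):

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

// Demonstration of the claim above: under an affine transform, the
// length scale factor of a line depends only on its direction, not on
// its position or its length.
public class LineScale {
    public static double scaleFor(AffineTransform at, double dx, double dy) {
        // Transform the delta vector (ignores translation).
        Point2D v = at.deltaTransform(new Point2D.Double(dx, dy), null);
        double len = Math.hypot(dx, dy);
        return Math.hypot(v.getX(), v.getY()) / len;
    }
}
```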
thought to how this could be done, as I am not very
familiar with Bezier curves, but it doesn't seem easy enough to justify
fixing such a small bug.
- Original Message -
From: Jim Graham james.gra...@oracle.com
To: Denis Lila dl...@redhat.com
Cc: 2d-dev@openjdk.java.net
Sent: Wednesday, June 9
the output can handle bezier curves. This way, the
only use left for Stroker would be when anti-aliasing, and for
everything else we wouldn't have to use a flattening path iterator.
Thanks,
Denis.
- Jim Graham james.gra...@oracle.com wrote:
Hi Denis,
Consider the case of using
For AA this is exactly what we do (round to nearest pixel centers for
strokes). Note that this is done before any line widening code is
executed.
For non-AA we normalize coordinates to, I believe, the (0.25, 0.25)
sub-pixel location. This is so that the transitions between widening of
The first part means that if the scale is uniform in X and Y
(AffineTransform has some logic to determine this property in its
getType() method) then we can use X11 to do line widening by just giving
it a scaled line width. Also, X11 is limited to integer line widths so
we would only want to
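A sketch of the uniformity test mentioned above, based on the getType() flags (the wrapper class is hypothetical; this is one plausible way to express the check, not the actual pipeline code):

```java
import java.awt.geom.AffineTransform;

// Sketch of the test described above: AffineTransform.getType() can be
// used to detect transforms under which a circular pen stays circular,
// so line widening reduces to scaling the line width.
public class UniformScaleCheck {
    public static boolean isUniformScale(AffineTransform at) {
        int type = at.getType();
        // Non-uniform scales and general (shearing) transforms distort
        // the pen; anything else preserves circles.
        return (type & (AffineTransform.TYPE_GENERAL_SCALE |
                        AffineTransform.TYPE_GENERAL_TRANSFORM)) == 0;
    }
}
```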
in another e-mail, the allocation of the array
can be eliminated, since it can be turned into a field. So it should
have better performance in pretty much all cases.
If you would like me to send a webrev for that too, e-mail me.
Thank you,
Denis.
- Jim Graham james.gra...@oracle.com wrote:
Hi
Hi Denis,
float operations are pretty fast and they are usually done in a separate
part of the processor so the compiler can schedule a lot of bookkeeping
instructions to run in parallel with waiting for the results of the FP
instruction. In the end, they often end up being free if there are
method: emitReverse, that goes through the reverse list
and emits it. It's used in 2 methods which used to just do it themselves.
- Jim Graham james.gra...@oracle.com wrote:
Hi Denis,
You moved some code around without modifying it. This makes it hard
to
see what changed and what didn't
notwithstanding)...
...jim
Jim Graham wrote:
Hi Denis,
In looking through this code, I can see where it emits the proper join
for ccw turns, but it looks like it just emits a round join for cw
turns. Is that true? Doesn't this mean that a cw path will always have
round
(although
it does solve a certain bug).
1.
I removed some write-only variables in Renderer and Stroker.
2.
I removed Dasher and Stroker's ability for the same object to be used with
more than one output, transform, width, etc. Jim Graham said we should consider
removing the no argument
Denis Lila wrote:
Hello Jim.
Thank you very much for taking the time to read through this.
169 - if origLen reaches the end of the dash exactly (the == case)
You're right, I should. I can't just replace = with == though,
because the results will be the same: in the equal case origLen
there are only about 9000
scanlines and for each
of them I've just added a few calls to one-line getters (which used to
be direct
accesses into arrays).
Thanks,
Denis.
- Jim Graham james.gra...@oracle.com wrote:
Denis Lila wrote:
Hello Jim.
Thank you very much for taking the time to read through
Woohoo, Denis! I look forward to seeing the new version!
...jim
On 7/28/2010 5:51 AM, Denis Lila wrote:
Hello Jim.
This one performs almost identically to what is already there
in openjdk6 and 7, since it's exactly what I sent for review
last week, but with all the changes
,
Denis.
- Jim Graham james.gra...@oracle.com wrote:
Woohoo, Denis! I look forward to seeing the new version!
...jim
On 7/28/2010 5:51 AM, Denis Lila wrote:
Hello Jim.
This one performs almost identically to what is already there
in openjdk6 and 7, since it's exactly what I
Hi Denis,
The changes look fine and I'm moving on to Renderer.java...
I'd make as many methods private as you can. It looks like all your new
methods are private, but some old methods are still public even though I
don't think they are accessed elsewhere (like addEdge()?). I think
private
Hi Denis,
More thoughts on Renderer.java.
-- Skipping gaps (minor optimization) --
If there is a gap in the edges in Y, say if a path consists of two
subpaths, one that covers y=[0..10] and another that covers
y=[1000..1010], then I think you will iterate over each y value from 10
to 1000,
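The gap-skipping optimization can be sketched like this (a toy model assuming one bucket per scanline, with a negative head index meaning "no edges"; names are hypothetical):

```java
// Sketch of the gap-skipping idea: instead of visiting every scanline
// between two widely separated subpaths, jump straight to the next
// y bucket that actually has edges. edgeBuckets[y] < 0 means "empty".
public class GapSkipper {
    public static int nextNonEmpty(int[] edgeBuckets, int from) {
        for (int y = from; y < edgeBuckets.length; y++) {
            if (edgeBuckets[y] >= 0) {
                return y; // first scanline with work to do
            }
        }
        return -1; // no more edges
    }
}
```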
this set of changes...
...jim
On 7/29/2010 5:27 PM, Jim Graham wrote:
Hi Denis,
The changes look fine and I'm moving on to Renderer.java...
I'd make as many methods private as you can. It looks like all your new
methods are private, but some old methods are still public
Hi Denis,
It looks fine. Hopefully we can eventually figure out why the sorting
on the fly didn't pan out.
Denis Lila wrote:
Hi Jim.
Thanks for all your suggestions. I fixed the edge array indexing
issue, the moveTo bug (not assigning x0), and the double
initialization issue. I also
Hi Denis,
First, comments on the high level normalizer (Normalizing iterator):
- If there is no normalization going on, I would use the Shape's own
flattening (i.e. getPathIterator(at, flat)). The reason being that some
shapes may know how to flatten themselves better, or faster, than a
Hi Denis,
That's great! I just did a last minute double-check of your last
(final) webrevs to be sure.
Have you tested Java2Demo with these changes? I'd also run any
regression tests you can find with the changes. If there are no
problems there, then you are good to go to push it...
Hi Denis,
I'll wait for some clean webrevs once you get the float stuff in for a
final review. I did take a really quick look and thought that a better
way to handle OFF would be to set rval to -1 and then check rval < 0
as the (quicker) test for OFF in the currentSegment() method. Does that
Hi Denis,
Well, I guess it's a good thing that Java2Demo had a path like that in
it - not a very common case, so it's good we found it!
The fix looks fine. It still seems like there is way more logic there
than is needed - hopefully if we can get rid of flips at some point,
much of it will
range...
...jim
Jim Graham wrote:
Hi Denis,
Well, I guess it's a good thing that Java2Demo had a path like that in
it - not a very common case, so it's good we found it!
The fix looks fine. It still seems like there is way more logic there
than is needed - hopefully
it
in before I drive you crazy... ;-)
...jim
Denis Lila wrote:
- Forwarded Message -
From: Denis Lila dl...@redhat.com
To: Jim Graham james.gra...@oracle.com
Sent: Monday, August 9, 2010 4:58:10 PM GMT -05:00 US/Canada Eastern
Subject: Re: [OpenJDK 2D-Dev] Various fixes
dealt with the issue of handling OFF by just not accepting it as an input.
After all, a normalizing iterator only needs to be created, and is only created
if the normalization mode is not OFF.
Thanks,
Denis.
- Jim Graham james.gra...@oracle.com wrote:
Hi Denis,
I'll wait for some clean webrevs
Denis Lila wrote:
Hi Jim.
I think the first version is a better choice for now since you said that
the performance difference isn't noticeable. I think the lower level
flattening might look a little different if we ever decide to upgrade
the pipeline to deal with curves. In particular, you
Hi Denis,
On 8/23/2010 4:18 PM, Denis Lila wrote:
To widen cubic curves, I use a cubic spline with a fixed number of curves
for
each curve to be widened. This was meant to be temporary, until I could find a
better algorithm for determining the number of curves in the spline, but I
Hi Denis,
On 8/24/2010 3:35 PM, Jim Graham wrote:
As far as flattening at the lowest level when doing scanline conversion,
I like the idea of using forward differencing as it can create an
algorithm that doesn't require all of the intermediate storage that a
subdividing flattener requires. One
Hi Denis,
At the bottom-most rendering level monotonic curves can be cool to deal
with, but I'm dubious that they help with widening. For one things, I
think you need more breaks than they would give you and also they might
sometimes break a curve when it doesn't need it.
One way in which
OK, I see. You were doubting that the thing that came after Pisces
could be that much different considering that Pisces is rendering many
more sub-pixels.
Actually, embarrassingly I think it can. It just means the non-AA
renderer has some performance issues. One thing I can think of is
On 9/3/2010 6:03 AM, Denis Lila wrote:
the cost of the context switches into native and back for each path
segment dominate the performance of long paths.
I see. That makes sense.
It was something I was meaning to fix for a long time (when that code
was first written native code was so much
Hi Denis,
Things got really busy for me over the past week so I wasn't able to
keep up with the discussion on this, but I will be looking more at it
next week. In the meantime it sounds like you are on the right track.
I wish I'd have investigated it to the level you are at so I could be of
Hi Denis,
On 9/27/2010 7:43 AM, Denis Lila wrote:
Hi Jim.
How much faster? I'm worried about this, especially given our tiled
approach to requesting the data. What was the bottleneck before?
(It's been a while since I visited the code - we weren't computing the
crossings for every curve in
Hi Denis,
On 10/12/2010 6:01 AM, Denis Lila wrote:
Hi Jim.
2. I changed how the alpha map is managed in PiscesTileGenerator to
something that's a bit clearer and uses less memory (the latter comes
from changing the +300 in the alpha tile allocation to +1. If there was
a good reason for using
Hi Denis,
I'm just now getting down to the nitty gritty of your webrevs (sigh).
On 10/6/2010 1:36 PM, Denis Lila wrote:
webrev:
http://icedtea.classpath.org/~dlila/webrevs/noflatten/webrev/
PiscesRenderingEngine.java:
line 278 - the det calculation is missing b.
line 296 - is there an
Round 2
On 10/13/2010 3:40 PM, Jim Graham wrote:
Hi Denis,
I'm just now getting down to the nitty gritty of your webrevs (sigh).
On 10/6/2010 1:36 PM, Denis Lila wrote:
webrev:
http://icedtea.classpath.org/~dlila/webrevs/noflatten/webrev/
Stroker.java:
Are you happy with the current
Round 3...
On 10/6/2010 1:36 PM, Denis Lila wrote:
webrev:
http://icedtea.classpath.org/~dlila/webrevs/noflatten/webrev/
I'm going to set the rest of Stroker.java aside for a moment and focus
on other areas where I have some better knowledge...
Renderer.java:
lines 83, 91, 99: can't
Round 4...
On 10/6/2010 1:36 PM, Denis Lila wrote:
webrev:
http://icedtea.classpath.org/~dlila/webrevs/noflatten/webrev/
BezCurve.java:
I'd add some set() methods to BezCurve/Curve and then use a scratch
instance in Renderer (and other places?) to reuse those calculations,
such as:
Hi Denis,
Looks like some great new work here! I'll try to keep the pie in the
sky suggestions down now so we can get this in soon...
On 10/18/2010 2:19 PM, Denis Lila wrote:
Hi Jim.
I'm just now getting down to the nitty gritty of your webrevs (sigh).
Thanks. I hope it's not too bad.
Hi Denis,
On 10/18/2010 2:21 PM, Denis Lila wrote:
Are you happy with the current variable names?
Not really. The names you suggested are much better. I'm using them now.
As for making a vector class, I think we should push this and then decide.
Its absence has already done most of the
On 10/19/2010 10:38 AM, Denis Lila wrote:
Hi Jim.
If I haven't replied to a suggestion, that means I've implemented and
I thought it was a good idea, so I don't have anything to say about it.
That's mostly true too for me, but there are a couple that I might go
back to - I'll let you know
Hi Denis,
On 10/19/2010 10:40 AM, Denis Lila wrote:
ROCsq - I looked at the wikipedia article and it wasn't clear how it
directly related to your function since the article is dealing with
the curvature of a function graphed against its own t, but you are dealing
with 2 parametric equations
Hi Denis,
One clarification:
On 10/20/10 7:11 AM, Denis Lila wrote:
When would the isCW test trigger? Does it track rev? What happens
at 180 degrees (is that test reliable for the randomization that might
happen when omxy are directly opposite mxy)?
isCw is used for computing the arc
On 10/20/10 7:54 AM, Denis Lila wrote:
In #2, you have a bunch of I'() || B'() which I read as the slope
of the derivative (i.e. acceleration) is equal, don't you really mean
I() || B() which would mean the original curves should be parallel?
Otherwise you could say I'() == B'(), but I think you
Right, but it seemed to me that if omxy was the from vector and mxy
was the to vector, that the computed mmxy should always be predictably
on the same side of it, no? If it was on the wrong side then it
wouldn't be a random occurrence, it must be related to the input data.
So either it is
OK, I can see how your terminology works now, but it seems odd to me. I
never consider re-expressing the coordinates on a curve as a vector and
basing geometric properties on those constructed vectors. I either
consider the points on the curve, or its tangent or its normal - none of
which is
Hi Denis,
I saw something in the latest webrev that reminded me of an earlier comment.
On 10/18/2010 2:21 PM, Denis Lila wrote:
line 389 - The test here is different from closePath. What if they
were both prev == DRAWING_OP_TO?
I am now using prev!=DRAWING_OP_TO (not ==, since it is
Hi Denis,
On 10/25/2010 7:34 AM, Denis Lila wrote:
(and I have some ideas on further optimizations to consider if you are still
game after this goes in)...
I'd love to hear what they are.
Here are my thoughts:
- Currently Renderer has more stages than we probably should have:
for (each
Hi Denis,
Just to be certain - you are still planning on putting the existing
stuff back and we're talking about future work, right? I'd love to get
a stake in the ground here.
On 10/25/2010 3:30 PM, Denis Lila wrote:
- Create a curve class and store an array of those so you don't have
to
Hi Denis,
On 10/26/2010 6:58 AM, Denis Lila wrote:
90% (guesstimate) of the time edges do not cross each other, thus if
you sort the crossings without reordering the active edges then you just
end up doing the same sorting work (same swaps) on the next scanline. My
SpanShapeIterator code
Hi Denis,
Good news!
On 10/28/2010 3:27 PM, Denis Lila wrote:
If we moved to a Curve class or some other way to
consolidate the 3 lists (may be easier in native code), this might win
in more cases...
Does that mean you no longer think we should flatten every curve as soon
as we get it?
No,
SRC behaves like SRC, but AA is another part of the equation. It works
like this (for any rule):
blendresult = PORTER_DUFF(rule, rendercolor, dstcolor, extraalpha)
// For SRC, blendresult = rendercolor modulated by extra alpha
storedresult = INTERP(dstcolor, blendresult, aacoverage)
// For
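The two-step pseudo-code above can be sketched concretely for the SRC rule (a toy model over premultiplied floats in [0..1]; the class and method names are hypothetical):

```java
// Sketch of the model described above: first the Porter-Duff rule (SRC
// here) combines the extra-alpha-modulated source with the destination,
// then the AA coverage linearly interpolates between the destination
// and that blend result.
public class AACompositeModel {
    public static float srcWithCoverage(float src, float ea,
                                        float dst, float coverage) {
        float blend = src * ea;                // SRC: result = src (mod EA)
        return dst + (blend - dst) * coverage; // INTERP by aacoverage
    }
}
```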
Why is SRC with an extra alpha handled any differently than SrcNoEa with
a color that has alpha? The two cases are supposed to be folded
together because it doesn't matter where the alpha comes from.
There is also a paintType indicator that indicates when the paint is
opaque. If you only
If you allow ALPHACOLOR (paintState <= ALPHACOLOR) then you should be
able to handle Src with EA...
...jim
On 10/29/10 12:49 PM, Phil Race wrote:
This fixes it, although the same may need to be done to OGL and D3D
subclasses
of SurfaceData.java
-phil.
diff --git
Hi Denis,
A generic suggestion - make all of your classes final - that gives the
compiler the maximum flexibility to inline any methods you write.
With respect to the algorithm choices:
I think the key is that the X sorting rarely has any work to do. The
first test of does this edge need
Hi Denis,
I had a bit of luck with the following next() method:
private int next() {
// TODO: make function that converts from y value to bucket idx?
int bucket = nextY - boundsMinY;
for (int ecur = edgeBuckets[bucket]; ecur != NULL; ecur =
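The truncated loop above iterates a linked list of edges threaded through a flat int array; a self-contained sketch of that pattern (the record layout here is a toy assumption, not the actual field offsets):

```java
// Hypothetical sketch of edges threaded as linked lists through a flat
// int array: each edge occupies a fixed-size record and stores the index
// of the next edge in its bucket at offset NEXT. NULL (-1) ends a list.
public class EdgeBuckets {
    static final int SIZE = 2;   // ints per edge record (toy layout)
    static final int NEXT = 1;   // offset of the "next edge" link
    static final int NULL = -1;

    public static int countEdges(int[] edges, int head) {
        int n = 0;
        for (int ecur = head; ecur != NULL; ecur = edges[ecur + NEXT]) {
            n++;
        }
        return n;
    }
}
```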
the possibility of using SIMD or when value-types
would be a huge benefit), it has its own performance pitfalls
especially if the workload is small and things like Get*ArrayCritical
cause scalability problems because they have to lock the GC.
Well, Jim Graham said that a native version of the engine
It's still a work in progress, but I've cleaned up a lot of logic and
made it faster in a number of ways. Note that I've abstracted out the
cache stuff and created an AlphaConsumer interface which may evolve
over time.
In FX we actually consume alphas in larger chunks than the code in JDK
A couple of questions about the code that I haven't touched...
Is there some reason why the AFD for cubics doesn't have any tests for
dddxy (the constants for its equations), but the AFD for quads is
testing the ddxy on every loop? I know that these values do change when
the AFD variables
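For reference, plain (non-adaptive) forward differencing of a cubic looks like this; the third difference is constant for a cubic, which is why no per-step test of it is needed there, unlike the adaptive quad case being questioned above (this is an illustrative sketch, not the pipeline's AFD code):

```java
// Sketch of forward differencing for a cubic polynomial: after seeding
// the first, second and third differences for step h, each new sample
// costs three additions. dddf is constant for a cubic.
public class ForwardDiff {
    // Evaluates a*t^3 + b*t^2 + c*t + d at t = 0, h, 2h, ..., n*h.
    public static double[] sample(double a, double b, double c, double d,
                                  double h, int n) {
        double[] out = new double[n + 1];
        double f = d;
        double df   = a*h*h*h + b*h*h + c*h;  // f(h) - f(0)
        double ddf  = 6*a*h*h*h + 2*b*h*h;    // second difference
        double dddf = 6*a*h*h*h;              // third difference (constant)
        for (int i = 0; i <= n; i++) {
            out[i] = f;
            f += df;
            df += ddf;
            ddf += dddf;
        }
        return out;
    }
}
```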
I ended up going with:
- get rid of edgeMxy in all methods but addLine()
- addLine computes min/max of first/lastScanline
- addLine also computes min/max of x1,x2 values
this turned out to be just about the same speed for my FX rendering
version (which I believe is more sensitive than the way
Hi Denis,
On 12/6/2010 4:21 PM, Denis Lila wrote:
Hi Jim.
line 134 - what if numx or numy == 0 because only roots outside [0,1]
were found?
In this case lines 151-162 will execute, and nothing is wrong. The only
problem is when both numx and numy are 0. This is certainly possible in
the
On 12/8/2010 9:37 AM, Denis Lila wrote:
Shouldn't it be [A, B]?
I thought about this when implementing it, but I don't think it mattered
whether it was closed or half open, and the closed interval would have been
somewhat more awkward to implement.
I'm not sure how the closed interval is
Hi Denis,
On 12/8/2010 12:04 PM, Denis Lila wrote:
I'm not sure how the closed interval is awkward. Isn't it just proper
choice of <= and >= vs. < and > in the testing method?
In the filtering function, yes, but I was referring to cubicRootsInAB in
Helpers:122-133 where we iterate through
On 12/10/2010 8:27 AM, Denis Lila wrote:
Hi Jim.
Yes. The improvement shown by the bench marks is substantial.
Then this is great news!
Indeed :-)
Woohoo!
How often do we end up needing getTCloseTo in practice?
It depends on the ratios of the lengths of the sides of the control
Hi Denis,
The example I gave was intended to be very crude - I was simply
describing the technique, but as I said it would require better math to
really know what the right formula would be.
With respect to finding a cubic root, currently you are doing that in 2
dimensions, but what if we
Very nice! How does it compare to CubicCurve.solveCubic() (which I know
has issues with the 2 root case, but I got it from a reliable source -
some textbook on Numerical Recipes)?
Also, one area that I had issues with the version I used in CC2D was
that it chose a hard cutoff to classify the
Hi Denis,
Those sound like just the kind of problems I believed existed in the
CC2D algorithm.
You might want to submit it as a separate push and get credit for fixing
4645692 (solveCubic doesn't return all answers), and maybe even the
following failures in the containment methods (which
Hi Denis,
That sounds like some very good ideas for making this method very accurate.
On the other hand, we're starting to get into the territory where an
advanced math package should be catering to these requirements. The
solveCubic was an internal helper function for implementing the hit
Hi Denis,
Lines 1094-1096, they could also be NaN if any of the numerators were
also zero and these tests might fail (but only for the case of all of
them being zero I guess, otherwise one of the other divisions would
result in infinity). Are accidental infinities (caused by overflow
rather
Hi Andrew,
Do you really need the = NULL on the declarations? They are
initialized on the following line, that should be good enough for any
compiler or lint processing.
Other than that, the new fix looks good...
...jim
On 12/21/2010 2:57 AM, Steve Poole wrote:
:09, Jim Graham wrote:
Hi Andrew,
Do you really need the = NULL on the declarations? They are
initialized on the following line, that should be good enough for any
compiler or lint processing.
Other than that, the new fix looks good...
...jim
On 12/21/2010 2:57 AM, Steve Poole wrote:
Hi Andrew
Hi Denis,
Line 1099 - I decided to check out Cardano's method and noticed a
discrepancy. The comment here says we are calculating the p and q for
this equation, but the values assigned to the p and q variables in lines
1102,1103 happen to be p/3 and q/2. That's fine because almost all of
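For context, the single-real-root branch of Cardano's method for the depressed cubic t^3 + p*t + q = 0 can be sketched as follows, making the p/3 and q/2 terms discussed above explicit (an illustrative sketch, not the code under review):

```java
// Sketch of Cardano's method for the depressed cubic t^3 + p*t + q = 0
// in the single-real-root case, i.e. when the discriminant
// (q/2)^2 + (p/3)^3 is positive. Note the p/3 and q/2 terms that the
// discussion above refers to.
public class Cardano {
    public static double realRoot(double p, double q) {
        double p3 = p / 3.0;
        double q2 = q / 2.0;
        double disc = q2 * q2 + p3 * p3 * p3;  // > 0: exactly one real root
        double sq = Math.sqrt(disc);
        return Math.cbrt(-q2 + sq) + Math.cbrt(-q2 - sq);
    }
}
```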
The regression tests for this bug do not call the method directly. They
may exercise the function indirectly in some pipelines, but not all
pipelines will use this method (the current version of Pisces in OpenJDK
doesn't even use it until you integrate your other changes as far as I
know).
Hi Denis,
I'm attaching a test program I wrote that compares the old and new
algorithms.
Obviously the old one missed a bunch of solutions because it classified
all solutions as 1 or 3, but the new one also sometimes misses a
solution. You might want to turn this into an automated test for
Hi Denis,
What about logic like this:
boolean checkRoots = false;
if (D < 0) {
// 3 solution form is possible, so use it
checkRoots = (D > -TINY); // Check them if we were borderline
// compute 3 roots as before
} else {
double u = ...;
double