Ralf Hemmecke wrote:
> 
> > 1) Have you run the test on unpatched FriCAS?
> 
> No, why should I?
> 
> That sounds probably arrogant, but I would expect that I'm not the only
> person who ignores unknown errors.
> 
> > 2) Google for 'polynomial gcd failure due to bad reduction'.
> 
> Why should that be a way to find out what the bug in FriCAS was? Would
> it be OK if I used Yandex or any other search engine? Would it be OK
> if I am without an internet connection when I want to review the patch?

"bad reduction" is standard terminology.  PGCD works by treating
multivariate polynomials as polynomials in one variable (called
main variable) with coefficients beeing polynomials in other
(auxiliary) variables.  PGCD chooses random values for
auxiliary variables and substitutes them into arguments.
Then in tries to reconstruct multivariate GCD from GCD
of univariate images.  There are various obstacles to
this, if they happen we speak about bad reduction.  If
bad reduction is detected PGCD should try different evaluation
point.
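As an illustration (SymPy in Python, not FriCAS/SPAD code; the polynomials are invented for the example), a bad evaluation point is one where the univariate images share an extra factor, so the image GCD has too high a degree:

```python
# Hypothetical illustration of bad reduction, not the PGCD implementation.
from sympy import symbols, gcd, degree

x, y = symbols('x y')
f = x * (x + y)                  # true gcd of f and g is x + y
g = (x + y) * (x + y**2 - 1)

good_image = gcd(f.subs(y, 2), g.subs(y, 2))   # y = 2: images x(x+2), (x+2)(x+3)
bad_image = gcd(f.subs(y, 1), g.subs(y, 1))    # y = 1: images x(x+1), (x+1)x

print(degree(good_image, x))    # 1, same x-degree as the true gcd
print(degree(bad_image, x))     # 2, the images picked up an extra factor x
```

At y = 1 the cofactors x and x + y**2 - 1 collide, so reconstruction from this image would fail and a different evaluation point is needed.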

What I wrote above is standard background explained in papers
about polynomial GCD.  If you did not know this, then IMHO
searching for the information is an appropriate action.

Once you accept the randomized nature of PGCD, it should be clear
that repeating the test is a way to increase the probability of
hitting the bug.
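To make the repetition count concrete: if a single random evaluation hits bad reduction with some small per-trial probability p (the value below is invented purely for illustration, not a measured property of PGCD), then many repetitions make a hit almost certain:

```python
# Illustrative arithmetic only: p is an assumed per-trial probability
# of choosing a bad evaluation point, not a property of PGCD.
p = 0.005              # assumed chance that one random point is bad
n = 1000               # number of gcd calls in the test
hit = 1 - (1 - p) ** n # probability that at least one trial hits a bad point
print(round(hit, 3))   # ~0.993
```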

You could easily have checked what was wrong by running the code;
it requires less effort than writing a message to the list.
The error is:

   >> Error detected within library code:
   (1 . failed) cannot be coerced to mode (SparseUnivariatePolynomial (Integer))

which means that a division which the code expected to be exact
failed.  Looking at the change in the 'lift' function you will
notice that instead of coercing the result of the division to
SUP, I first check for "failed" and in that case return "failed"
from 'lift'.  I coerce only when the division was exact.  From
this alone it should be clear that we get different behaviour
only in cases where the old code was wrong (produced a runtime
error).  The testcase shows that such cases do happen.  It is
easy to prove that in case of good reduction the division is
always exact, so the new failing cases are all bad reductions.
Since 'lift' should fail in case of bad reduction, this is the
correct behaviour.
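The shape of the fix can be sketched in Python (a hypothetical analog of the SPAD 'lift' change; the real code divides polynomials and coerces to SparseUnivariatePolynomial, while this toy uses integers and a string sentinel):

```python
# Hypothetical sketch of the 'lift' fix, not the actual FriCAS code.
def exact_divide(a, b):
    # Stand-in for polynomial division: return the quotient only if
    # the division is exact, otherwise the sentinel "failed".
    q, r = divmod(a, b)
    return q if r == 0 else "failed"

def lift(a, b):
    # The old code effectively coerced exact_divide(a, b) unconditionally,
    # producing a runtime error when the division was not exact.
    q = exact_divide(a, b)
    if q == "failed":
        return "failed"       # bad reduction: propagate failure
    return q                  # safe to use: the division was exact
```

The point is only the control flow: check for "failed" before coercing, and let the caller retry with a different evaluation point.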

> Waldek, you probably misunderstood my comment. I am not saying that you
> should write tons of explanations or even a whole paper on the issue.
> I was just saying that in a commit message that fixes a bug, it must be
> clearly stated what the bug actually was and how it can be reproduced. A
> link to the bugtracker would be enough, but better, if it is explicitly
> in the commit message. Then I expect some hint of why the old code was
> wrong and why the new code is better. References to the literature that
> was used to fix the issue would be wonderful.

There is a testcase in the commit, so there is no question of how
to reproduce it.  IMO it rarely matters what the problem was.
What matters is whether the new code is correct.  Most bug fixes
(including this one) are a posteriori obvious; frequently they
add previously unhandled cases.  Of course, if you want to have
a review system, then explaining the problem to the reviewer is
needed as part of validation.  But there is little value in
storing such information.  Without reviewers, collecting
explanations is just useless bureaucracy.

> Don't you think we should increase the bus factor of FriCAS?

Well, the question is how?  You seem to believe in a magic
attractive force of documentation.  I have seen folks
who got scared when they saw how much documentation
some systems have.  The successful ones frequently
ignore most of the documentation.

IMO the critical factor for FriCAS is to become a research
platform.  In other words, to attract researchers to
develop new code on top of FriCAS.  For this:

1) It must be possible to do the job using FriCAS.  If
   the type system stands in the way or some builtin function
   rejects a useful borderline case, then we have failed.
   Performance must be adequate.
2) FriCAS must offer some attractive features.  Being free
   is a plus, but we need much more.  For example, if
   we are the only system having a feature that helps to
   solve a problem.  Or if we offer the best performance.
3) We need to get attention.  For this we need some publications
   featuring FriCAS.

Documentation helps.  But to me it is pretty clear that
more documentation would have only a marginal effect.  Having
a formally verified system could be a big selling point.

 
> > 7) Sorry if the above sound too harsh.  But it looked
> >    like you turned off thinking: PGCD is _randomized_
> >    (Las Vegas) algorithm and we need several tries to
> >    hit the bug (1000 is a compromise between time to run
> >    the test and probability of hitting the bug).
> 
> Whether harsh or not, I don't care. And if I am able to think or not, I
> also don't care.
> 
> But if you think just from the file
> 
> https://github.com/fricas/fricas/blob/master/src/input/pgcd.input
> 
> one can figure out which implementation in what domain is actually
> tested, then I am not on your side.
> 
> And repeating a call to gcd 1000 times looks really strange if there is
> no comment next to it.

There is: "bad reduction"

-- 
                              Waldek Hebisch
[email protected] 

-- 
You received this message because you are subscribed to the Google Groups 
"FriCAS - computer algebra system" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/fricas-devel.
For more options, visit https://groups.google.com/d/optout.
