I think we need a formal policy about including code which can potentially
give wrong results.

There can be two cases.

1. We don't know of a use case that leads to a wrong result, but we are not
   sure that there won't be any. As Sergey B Kirpichev said in the discussion
   on PR #2723, hack and pray is not a good strategy for maths related
   software. Other than compulsory test coverage and code review, which we
   are already doing, we can do two things.

   First, we might make it necessary to give a formal proof of the algorithm
   used. This way, given that the implementation is correct, we can be sure
   that the code will give correct output. But the con is that it can be hard
   to come up with a formal proof, especially if the developer has devised
   the algorithm himself.

   Second, we can ask for a formal description and justification of the
   algorithm. This way it will be easier to spot logical fallacies, and as a
   bonus we can add the formal description to the documentation.

2. This is the case where we know of certain use cases which give a wrong
   result, but allowing them helps us solve a larger set of use cases.

   For example, while working on evaluating ImageSet we had an algorithm that
   worked well, given that the input function is continuous and that solve
   returns all the solutions of the derivative. But we don't have any easy
   method to check for continuity, so ultimately we decided to restrict
   ourselves to rational functions.

   But there can be cases where the safe set of inputs is either too hard to
   isolate or becomes too small to be of any use. For example, the
   oscillating nature of trigonometric functions at infinity leads to wrong
   results in some cases of limit evaluation. But isolating the cases where
   it gives a wrong result might turn out to be a harder problem than
   computing the limit itself. And disallowing trigonometric functions, or
   disallowing limit computation at infinity, would reduce the usefulness of
   limit to a great extent.
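To make the trade-off concrete, here is a hedged sketch using present-day
SymPy (the exact behaviour may differ between versions, and these calls are
illustrative, not a record of what the code did at the time of writing):

```python
from sympy import Interval, Lambda, Symbol, imageset, limit, oo, sin

x = Symbol('x')

# Restricted-input case: images of intervals under rational functions
# can be computed reliably (critical points come from solving the
# derivative), so imageset evaluates here.
img = imageset(Lambda(x, x**2), Interval(-2, 2))
print(img)  # Interval(0, 4)

# Oscillation at infinity is the hard case; this particular limit is
# still computable because the oscillation is damped by 1/x.
print(limit(sin(x)/x, x, oo))  # 0
```

The point is that the first computation is safe precisely because the input
class was restricted, while for limits no comparably simple restriction
separates the safe inputs from the unsafe ones.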

Because I don't think we can avoid wrong results entirely, we should:

- Document what SymPy cannot do and tell users where it can silently go
  wrong.

- Create a subset of SymPy that is complete and reliable, backed by formal
  proofs of correctness. I speculate that the common use cases of SymPy are
  teaching and student homework. A reliable subset of SymPy might find more
  real world application, say in designing hybrid algorithms for computation.
  We might implement this by first figuring out which modules and algorithms
  are complete, and then have an environment variable
  `ONLY_COMPLETE_MODULES` which, when set to true, allows only complete
  modules to be used.
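A minimal sketch of how such a gate could work. Everything here is
hypothetical: `ONLY_COMPLETE_MODULES`, the whitelist, and the helper function
are not part of SymPy, just one possible shape for the proposal:

```python
import os

# Hypothetical whitelist of modules judged complete/proven.
COMPLETE_MODULES = {"sympy.core", "sympy.polys"}

def check_module_allowed(module_name):
    """Raise if the gate is enabled and the module is not whitelisted."""
    if os.environ.get("ONLY_COMPLETE_MODULES") == "true":
        if module_name not in COMPLETE_MODULES:
            raise RuntimeError(
                "%s is not in the complete/proven subset" % module_name)

# With the variable unset, everything is allowed:
check_module_allowed("sympy.series")

# With ONLY_COMPLETE_MODULES=true, only whitelisted modules pass:
os.environ["ONLY_COMPLETE_MODULES"] = "true"
check_module_allowed("sympy.polys")  # allowed
try:
    check_module_allowed("sympy.series")
except RuntimeError as exc:
    print(exc)  # blocked by the gate
```

In practice the check would live in each module's import machinery rather
than being called by hand, but the opt-in semantics would be the same.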

--
Harsh Gupta
