I just realised I was confusing how mpmath works (thinking it was decimal
rather than binary) so a few of the things I said were incorrect. When I'm
next at a computer I'll start a new thread and clarify what I mean.
On 7 Apr 2016 15:12, "Oscar Benjamin" <oscar.j.benja...@gmail.com> wrote:

> On 5 April 2016 at 18:08, Aaron Meurer <asmeu...@gmail.com> wrote:
> > On Tue, Apr 5, 2016 at 12:54 PM, Oscar Benjamin
> > <oscar.j.benja...@gmail.com> wrote:
> >>
> >>>
> >>> I don't know if it should be considered a bug, but it's worth noting
> >>> that if you want SymPy to give the right precision in general you have
> >>> to start with Float objects that are set with the precision you need.
> >>> To me it feels like a bug because it negates the purpose of the evalf
> >>> precision argument.
> >>
> >> Is there a coherent policy on float-handling in sympy?
> >>
> >> My ideal would be:
> >>
> >> 1) All float objects are created exact (having the exact value of the
> >> object passed in).
> >> 2) No inexact auto-evaluation.
> >> 3) .evalf() can be used to fully evaluate the expression with desired
> precision.
> >> 4) Ideally the precision argument to .evalf would be the precision of
> >> the *output* rather than the internal precision of the intermediate
> >> calculations
> >
> > Can you clarify what you mean by "exact" here?
> >
> > Note that there's no way to know what the input value of a float is.
> > That is, there's no way to write Float(0.2) (with no quotes) and have
> > it be treated as Float(2/10).  The 0.2 object is converted to a Python
> > floating point by the time that Float sees it, and it's not a decimal:
> >
> > In [49]: (0.2).as_integer_ratio()
> > Out[49]: (3602879701896397, 18014398509481984)
>
> Passing a Python float into a sympy expression (I know I accidentally
> did it earlier in the thread) is usually not going to do what is
> wanted; e.g. 0.1*x creates a number that is not truly equal to 0.1
> and passes it to x.__rmul__. The right fix for this, as you say, is
> to use a string.
> However there are times when it would be good to pass in a float that
> you have obtained from some other calculation and have sympy work with
> it.
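> To see concretely what any library receives by the time the literal
> 0.1 reaches it, here is a stdlib-only sketch (no sympy needed):

```python
from fractions import Fraction

# The literal 0.1 is converted to a binary double by the Python parser,
# so every consumer receives this exact ratio, not 1/10:
assert (0.1).as_integer_ratio() == (3602879701896397, 36028797018963968)
assert Fraction(0.1) != Fraction(1, 10)
```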
>
> Currently SymPy will round an input float to 15 significant digits,
> and as a result S(0.1) produces an mpf which really does have a true
> value of 0.1. This is useful for novices but IMO it just hides the
> binary float problem a bit. The right solution is for users to
> understand that they should be using S("0.1") or similar.
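> A stdlib analogue of that 15-digit rounding, assuming sympy behaves
> as described above:

```python
# Rounding the binary double 0.1 to 15 significant decimal digits
# recovers the decimal string the user probably typed; this is roughly
# the novice-friendly rounding described above:
assert format(0.1, ".15g") == "0.1"

# At 17 significant digits the binary approximation shows through:
assert format(0.1, ".17g") == "0.10000000000000001"
```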
>
> If OTOH I received my float from some other source (rather than trying
> to make a simple number like 0.1) then sympy is rounding the number
> rather than taking it in exactly. I would prefer it in this case if
> sympy would retain the exact value of the input number the same way
> that Fraction and Decimal do:
>
> In [19]: from fractions import Fraction
>
> In [20]: Fraction(0.1)
> Out[20]: Fraction(3602879701896397, 36028797018963968)
>
> In [21]: from decimal import Decimal
>
> In [22]: Decimal(0.1)
> Out[22]: Decimal('0.1000000000000000055511151231257827021181583404541015625')
> In [23]: from sympy import Float
>
> In [24]: Float(0.1)
> Out[24]: 0.100000000000000
>
> Both Fraction and Decimal always retain the exact input value. If you
> want to round it then you need to do that explicitly. This is useful
> because once your numbers are converted to, say, Decimal, you can use
> the calculation contexts to control exactly how the rounding occurs
> (or to prevent any rounding altogether). If the rounding happens
> straight away at input then you cannot.
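> For example, with the stdlib Decimal you can capture the float exactly
> and only round later, under whatever context you choose:

```python
from decimal import Decimal, localcontext

# Capture the float's exact binary value: no rounding at input time.
d = Decimal(0.1)
assert d == Decimal("0.1000000000000000055511151231257827021181583404541015625")

with localcontext() as ctx:
    ctx.prec = 5          # round only now, under an explicit context
    rounded = +d          # unary plus applies the context's rounding
assert str(rounded) == "0.10000"
```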
>
> > That's why Float allows string input (and it's the recommended way of
> > creating them).
> >
> > With that being said, I don't think the fact that
> > (1.4142).as_integer_ratio() isn't (7071, 5000) is the problem here.
> > Float(1.4142) is indeed inexact compared to Float('1.4142'), but the
> > wrong answers from x**6000%400 come from lack of computing precision,
> > not lack of input accuracy.
>
> It is a separate but related issue. In this particular case
> S(1.4142) does what a user probably intends it to do and creates
> mpf("1.4142"), so the issue only occurs later.
>
> >> Currently 1) already occurs for decimal strings but Float(float)
> >> rounds to 15 digits and you can explicitly force something impossible
> >> as a ratio string: Float("1/3"). I think Float should be more like
> >> decimal.Decimal here: all input arguments are treated as exact
> >> regardless of precision etc. (and I don't see any good reason for
> >> allowing Float("1/3"))
> >>
> >> Without 2) it is impossible to achieve 3). If approximate
> >> auto-evaluation can occur before calling .evalf then there's no way
> >> for evalf to set the precision to be used for the auto-evaluation.
> >>
> >> Obviously 4) is harder than current behaviour and perhaps impossible
> >> in general but it is achievable for simple cases like in this thread.
> >> From a user perspective it is definitely what is actually wanted and
> >> much easier to understand.
> >
> > I'm unclear how this works, because if you take my example above with
> > x = nsimplify("1.4142"), evalf() gave the right answer with the
> > default precision (15). That is, when everything in the expression is
> > a non-float, it gives the right answer. However, it seems that as soon
> > as an expression contains a Float, that Float must have whatever
> > precision you need set on it.
>
> I assume that you mean this:
>
> In [4]: ((nsimplify("1.4142") ** 6000) % 400).evalf()
> Out[4]: 271.048181008631
>
> So let's pull that apart:
>
> When you do nsimplify("1.4142") you get back a Rational.
> Exponentiating that by an integer gives another Rational. The
> expression is auto-evaluated to get the new rational but it is done
> exactly. Likewise the modulo 400 is auto-evaluated but again it gives
> an exact Rational.
>
> Print this to see the massive numerator/denominator that are created:
>     nsimplify("1.4142") ** 6000 % 400
>
> The final .evalf() is then called on a Rational which was itself
> computed exactly, so it can easily give as many correct digits as
> you desire.
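> The same exact-then-approximate pipeline can be reproduced with the
> stdlib Fraction type (14142/10000 reduces to 7071/5000, the exact
> value of nsimplify("1.4142")):

```python
from fractions import Fraction

x = Fraction(14142, 10000)      # exactly 1.4142
r = x ** 6000 % 400             # exact rational arithmetic throughout

# Only the final conversion is approximate, so it is correct to full
# double precision (the 271.048... value quoted above):
assert abs(float(r) - 271.048181008631) < 1e-9
```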
>
> However when you do this:
>
> In [13]: Float("1.4142") ** 6000 % 400
> Out[13]: 32.0000000000000
>
> It works out differently. So what happens here is the expression
> Float("1.4142") ** 6000 is auto-evaluated but only using the default
> precision that is attached to the mpf Float (or maybe the context):
>
> In [15]: Float("1.4142") ** 6000
> Out[15]: 1.16144178843571e+903
>
> So here we have auto-evaluation that has changed the numeric value of
> the expression. This is not the same as simplifying 2**3 -> 8 or some
> other simplification that is *exactly* correct, because here the
> auto-evaluation has introduced a numerical error. I think that this
> kind of auto-evaluation should not occur by default.
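> A stdlib sketch of why rounding the intermediate is fatal here: keep
> only ~15 significant digits of x**6000 (roughly what a fixed-precision
> float type does) and the mod-400 result becomes essentially arbitrary:

```python
from fractions import Fraction

x = Fraction(14142, 10000)          # exactly 1.4142
p = x ** 6000                       # exact, huge rational
exact = p % 400                     # correct residue, about 271.048...

# Now truncate the intermediate to 15 significant digits first, the way
# a fixed-precision float type effectively does:
big = p.numerator // p.denominator  # ~904-digit integer part
scale = 10 ** (len(str(big)) - 15)
lossy = (big // scale * scale) % 400

assert lossy != exact               # the mod-400 information was destroyed
```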
>
> So in the ideal scenario any auto-evaluation/simplification that would
> not be exactly correct should not be applied. Then at the end when
> calling .evalf it is possible to specify exactly the precision used
> for *all* of the approximate steps in the calculation.
>
> However, as I said earlier, a better general approach (although
> harder) would be to have an API that lets you specify the accuracy of
> the desired answer rather than the precision used for the
> calculations. So in this case it works backwards recursively. I have
> an expression:
>
>     s = Mod(Pow(Float("1.4142"), 6000), 400)
>
> and I call
>
>     s.evalf(correct_digits=10)
>
> and then this recursively calls evalf along the tree asking each time
> for the number of correct digits needed in order to ensure that the
> *end result* is correct to 10 digits.
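> A rough stdlib sketch of that accuracy-targeting idea (the function
> name and the retry strategy here are hypothetical, not an existing
> sympy API): re-evaluate the whole expression at increasing working
> precision until two successive results agree to the requested number
> of digits:

```python
from decimal import Decimal, localcontext

def evalf_to_digits(compute, digits, max_prec=1000):
    """Hypothetical sketch: 'compute' evaluates the expression under the
    current decimal context; retry at higher precision until two
    successive results agree to 'digits' significant figures."""
    prec = digits + 5
    with localcontext() as ctx:
        ctx.prec = prec
        prev = compute()
    while prec <= max_prec:
        prec *= 2
        with localcontext() as ctx:
            ctx.prec = prec
            cur = compute()
        if abs(cur - prev) <= abs(cur) * Decimal(10) ** -digits:
            with localcontext() as ctx:
                ctx.prec = digits
                return +cur      # round the stable answer to the target
        prev = cur
    raise ArithmeticError("did not converge")

# Example: sqrt(2) correct to 10 digits
print(evalf_to_digits(lambda: Decimal(2).sqrt(), 10))  # -> 1.414213562
```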
>
> --
> Oscar
>

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sympy/CAHVvXxQNqEcLXYcFcVn0bW_9BNmO0O1xqLDrHmgfJURzdwv_DA%40mail.gmail.com.