On Mon, Oct 2, 2017 at 3:30 PM, Fredrik Johansson
<[email protected]> wrote:
> On Mon, Oct 2, 2017 at 9:10 PM, Aaron Meurer <[email protected]> wrote:
>>
>> That's a great blog post, Fredrik. Since your blog doesn't seem to
>> have a comment section, I will add my comments here.
>
>
> Thanks!
>
>>
>> One thing I would add to the mpmath history is the SymPy 1.0 release
>> (March 2016), which officially made mpmath an external dependency of
>> SymPy. Prior to that, a copy of mpmath shipped as sympy.mpmath.
>>
>
> Good point. I've shamelessly copied this into the post.
>
>>
>> I've been using mpmath (via SymPy) quite a bit in my own recent
>> research (computing the CRAM approximation to exp(-t) on [0, oo) to
>> arbitrary precision). I'm always amazed at how stable mpmath is. It
>> always gives what seem to be correct answers, or fails gracefully if
>> it can't. I did find some minor holes in mpmath (I had to tweak the
>> maxsteps and tol parameters of findroot, via sympy.nsolve; see
>> https://github.com/fredrik-johansson/mpmath/issues/339), but they
>> were quite easy to work around.
>>
>> Regarding Arb, I would love to see Python bindings. I would suggest
>> writing some ArbPy wrapper library, so that people can use it in
>> Python on its own, and then we can use that to improve mpmath and
>> SymPy. There's been some interest in using something like Arb for code
>> generation. The idea is this: you can use SymPy to create a model for
>> something, and then use the codegen module to generate fast machine
>> code to compute it. But the problem is that you don't necessarily know
>> how precise that machine code is. What if there are numerical issues
>> that lead to highly inaccurate results? So the idea is to swap out the
>> backend for the code generator to something like Arb, and perform the
>> same computation with guaranteed bounds. This will obviously be
>> slower than the machine code, so you wouldn't use it in production;
>> instead, you'd use it to get some assurance about the accuracy of
>> your machine-float results. If the accuracy is bad, you might have to
>> modify the algorithm, or in the worst case fall back to a slower
>> arbitrary-precision library to get the precision you need. But
>> critically, since everything is code generated, switching backends
>> would (in theory at least) be as simple as changing a flag in the
>> code generator.
>
>
> As I wrote in the post, python-flint
> (https://github.com/fredrik-johansson/python-flint) already exists:
>
>>>> from flint import arb, good
>>>> arb(3).sqrt()
> [1.73205080756888 +/- 3.03e-15]
>>>> good(lambda: arb(1) + arb("1e-1000") - arb(1), maxprec=10000)
> [1.00000000000000e-1000 +/- 3e-1019]
>
> It still needs work on the setup code, documentation, tests, general code
> cleanup, and interface tweaks... volunteers are welcome.

Thanks. It wasn't clear to me that python-flint also wraps Arb.

I think getting FLINT and python-flint onto conda-forge would help a
lot, as it would make them much easier for most people to install.
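To make the backend-swapping idea above concrete, here is a toy sketch
in plain Python (the Interval class below is entirely made up for
illustration; it is a deliberately crude stand-in for the rigorous ball
arithmetic Arb actually provides). The same "generated" function is
evaluated once with machine floats and once with intervals that widen by
one ulp per operation, so the interval's width exposes how much accuracy
the float evaluation lost:

```python
import math

class Interval:
    """Toy interval type: widens its bounds outward by one ulp after
    every operation as a crude model of floating-point rounding error."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    @staticmethod
    def _widen(lo, hi):
        # Bump each endpoint outward by one ulp (needs Python 3.9+).
        return Interval(math.nextafter(lo, -math.inf),
                        math.nextafter(hi, math.inf))

    def __add__(self, other):
        return self._widen(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return self._widen(self.lo - other.hi, self.hi - other.lo)

    def width(self):
        return self.hi - self.lo

def f(one, tiny):
    # Stands in for generated code: identical in both backends;
    # only the numeric type passed in differs.
    return (one + tiny) - one

# Machine-float backend: catastrophic cancellation silently gives 0.0
# instead of 1e-17.
float_result = f(1.0, 1e-17)

# Interval backend: the result still encloses the true answer, but its
# width (~3e-16) dwarfs 1e-17, flagging that nearly all accuracy was
# lost in the float evaluation.
iv_result = f(Interval(1.0), Interval(1e-17))
print(float_result, iv_result.lo, iv_result.hi)
```

Arb's ball arithmetic does this properly, with tight rigorous enclosures
rather than a one-ulp fudge, which is exactly why it would make a
trustworthy verification backend for generated code.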

Aaron Meurer

>
> Fredrik
>

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/sympy.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sympy/CAKgW%3D6JBQKu8bieJJX35tuOEjEfQtso%2BoOT--9ruW%3D14vxHXcA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
