I think it might actually be easier for BigFloat since BigFloats are
fixed-size, whereas BigInts are variable-size.

Chris, there is a DoubleDouble package
<https://github.com/simonbyrne/DoubleDouble.jl>, which implements efficient
higher-precision floating-point arithmetic, albeit not IEEE 128-bit floats.
As soon as hardware and LLVM support 128-bit IEEE floats, Julia can easily
support them as well – as I'm sure you realize, much more easily than any
other system.

Nobody wants BigFloats to be inefficient; the compiler simply isn't yet as
good as it could be at reusing them and eliminating allocations. That
doesn't mean this won't be improved in the future – it will be, although
it's hard to say when, since there are a lot of competing priorities and a
limited number of people who can do the kind of compiler work needed.
Fortunately, the problem is closely related to a number of other
performance issues that we also need to address (strings, array views).
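For anyone curious what "efficient higher-precision arithmetic" looks like in this style, here's a minimal sketch of the error-free "two-sum" building block that double-double packages like DoubleDouble.jl are based on (the function name here is illustrative, not the package's API): two machine floats together represent a sum exactly, with the second float carrying the rounding error the first one lost.

```julia
# Knuth's TwoSum: hi + lo equals a + b exactly; lo is the rounding error
# that the Float64 addition a + b discards. No allocation, just registers.
function two_sum(a::Float64, b::Float64)
    hi = a + b
    v  = hi - a
    lo = (a - (hi - v)) + (b - v)
    return hi, lo
end

hi, lo = two_sum(1.0, 1e-17)
# plain Float64 addition rounds 1.0 + 1e-17 back to 1.0,
# but lo recovers the discarded 1e-17
```

Since everything stays in a pair of `Float64`s (effectively a tuple), the compiler can keep it all in registers – which is exactly why a pure-Julia approach like this can be so much faster than calling out to an allocating C library.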

On Sun, Jun 19, 2016 at 11:20 PM, Tim Holy <tim.h...@gmail.com> wrote:

> On Sunday, June 19, 2016 5:51:35 PM CDT Chris Rackauckas wrote:
> > But as a user, I do find it troubling that the only reliable
> > high-precision number type is clearly not aiming for performance.
>
> More like "it's hard to make it high performance" because the interaction
> with the C library means that memory has to be allocated & reclaimed.
> Presumably if someone sat down and implemented similar operations in
> pure-julia using, say, tuples, it would blow away what we have now.
> Naively, it doesn't seem so hard for BigInt; BigFloat could be a
> different matter entirely.
>
> Best,
> --Tim
>
>
