Nobody is currently working on optimizing BigInts.

It is definitely planned to do it eventually, but it's still unclear which
cases matter. Specifically, "BigInts that fit in 64 bits" sounds easy on a
high level, but in practice there's a crucial difference between signed and
unsigned 64-bit values (what C++ calls "int64_t" and "uint64_t"). If V8
could get by with supporting just one of those, that would make the
implementation reasonably straightforward, but would create unintuitive
performance cliffs (e.g. deopts when the first value exceeds the range of
the common subset, i.e. uint63) or differences (people would reasonably ask
"why is asIntN(64, ...) so much faster than asUintN(64, ...)?" or the other
way round). If V8 had to support both, however, that would make things
vastly more complicated: we would then probably need 4 internal
representations (uint63, uint64, int64, big) forming a diamond-shaped
lattice of transitions and an (up to?) 16-way dispatch in front of every
binary operation, with a bunch of both implementation complexity and
runtime cost. So going to these lengths requires some convincing data that
it's actually needed.
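To illustrate the signed/unsigned split concretely: today's BigInt API already exposes the point where the two interpretations diverge. Values up to 2**63 - 1 (the "uint63" common subset) mean the same thing either way; one bit higher and the same bit pattern reads as two different numbers:

```javascript
// Within the common subset (uint63), signed and unsigned agree:
const small = 1n << 62n;
console.log(BigInt.asIntN(64, small) === BigInt.asUintN(64, small));  // true

// One bit further, the same 64-bit pattern means different values:
const big = 1n << 63n;                 // fits in uint64, not in int64
console.log(BigInt.asUintN(64, big));  // 9223372036854775808n
console.log(BigInt.asIntN(64, big));   // -9223372036854775808n
```

An engine that internally committed to only one of the two interpretations would have to deopt (or fall back to the generic path) exactly at that boundary.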

Maybe we could/should try to start a discussion with the wider community
about which BigInt usage patterns are relevant for high performance.

In the meantime, if you want 64-bit integers, modeling them as a class
wrapping two 32-bit integers (like Scala.js does currently) is probably
going to give you pretty good performance, because the optimizing compiler
will be able to eliminate most or all of the allocations. On 32-bit systems
(of which there are still hundreds of millions on the internet), that might
well be as fast as 64-bit ranged BigInts ever will be.


On Wed, Jan 16, 2019 at 5:40 AM <[email protected]> wrote:

> Hi,
>
> just wanted to ask if anyone is currently working on optimizing BigInts
> (especially ones that fit in 64 bits), or if it is being considered in some
> prioritization efforts?
>
> The proposal [0] and the blog post [1] suggest that it is on the roadmap,
> I'm just wondering if anything is already happening.
>
> I think this would be particularly useful for compilers targeting JS (as
> changing the int backend is clearly much easier than changing the target to
> wasm), such as Scala.js [2], or js_of_ocaml (which is what I have a direct
> interest in).
>
> Could someone estimate how hard it would be to implement the basic
> optimizations (like BigInt.asIntN(64, b1 + b2) => native addition)?
>
> [0] https://github.com/tc39/proposal-bigint#dont-break-good-performance
> [1] https://v8.dev/blog/bigint
> [2] https://github.com/tc39/proposal-bigint/issues/117
>
> --
> --
> v8-dev mailing list
> [email protected]
> http://groups.google.com/group/v8-dev
> ---
> You received this message because you are subscribed to the Google Groups
> "v8-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
>
