As long as you're running unoptimized code on a 32-bit V8, this is
expected: 31-bit integers are stored directly as "small integers" (Smis),
with the remaining bit of the 32-bit word used to tag them as such, whereas
32-bit integers are converted to doubles and stored as objects on the heap,
which makes accessing them more expensive.
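
As a rough illustration (a sketch only, assuming a signed 31-bit payload;
the exact boundary is an implementation detail), a value can only be stored
directly if it fits in 31 bits, leaving one bit free for the tag:

  // Sketch (assumption: signed 31-bit payload, one bit reserved for the tag).
  function fitsInSmi(v) {
    return v >= -(1 << 30) && v <= (1 << 30) - 1;
  }
  console.log(fitsInSmi((1 << 30) - 1));    // true  -> stays a small integer
  console.log(fitsInSmi(0xFFFFFFFF >>> 0)); // false -> becomes a heap number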

When your code runs long enough for the optimizer to kick in, it should
recognize this situation and use untagged 32-bit integer values in the
optimized code, at which point the difference between 31-bit and 32-bit
values should go away. If it doesn't, please post a reduced test case that
exhibits the behavior so that we can investigate. (Running the code for a
second or so should be enough to get the full effect of optimization and
make the initial difference negligible.)
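
In case it helps, a reduced test case along these lines is roughly what
would be useful (a sketch only; the table layout, sizes, key, and timing
loop below are assumptions, not your actual code):

  // Sketch of a reduced test case: XOR-fold a 20-byte key against a
  // tabulation table, once with entries masked to 31 bits and once with
  // full unsigned 32-bit entries.
  function makeTable(mask) {
    var table = new Array(20 * 256);  // 20 byte positions x 256 byte values
    for (var i = 0; i < table.length; i++) {
      table[i] = (Math.floor(Math.random() * 0x100000000) & mask) >>> 0;
    }
    return table;
  }

  function hash(key, table) {
    var h = 0;
    for (var i = 0; i < key.length; i++) {
      h = (h ^ table[i * 256 + key[i]]) >>> 0;
    }
    return h;
  }

  function bench(label, table) {
    var key = [];
    for (var i = 0; i < 20; i++) key.push((i * 37) & 0xFF);
    var start = Date.now(), n = 0, h = 0;
    // Run for a couple of seconds so the optimizer has had time to kick in.
    while (Date.now() - start < 2000) {
      h ^= hash(key, table);
      n++;
    }
    console.log(label, (Date.now() - start) / n + ' ms/hash', h);
  }

  bench('31-bit', makeTable(0x7FFFFFFF));
  bench('32-bit', makeTable(0xFFFFFFFF));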


On Wed, May 30, 2012 at 4:31 PM, Joran Greef <[email protected]> wrote:

> I am implementing a table hash (
> http://en.wikipedia.org/wiki/Tabulation_hashing) and noticed that a table
> hash using a table of 31-bit unsigned integers is almost an order of
> magnitude faster than a table hash using a table of 32-bit unsigned
> integers.
>
> The former has an average hash time of 0.00007 ms per 20-byte key for a
> 31-bit hash, and the latter has an average hash time of 0.00034 ms per
> 20-byte key for a 32-bit hash.
>
> I figured that XOR would get progressively slower going from 8-bit to
> 16-bit to 24-bit to 32-bit integers, but did not anticipate such a
> difference between 31-bit and 32-bit integers.
>
> Is there something regarding XOR that I may be missing that could explain
> the difference?
>
>
>
