On 2012/01/10 08:57:52, Sven wrote:
I have a general remark about the approach in this CL: If I read the patch
correctly, we lose a lot of information when we convert the BitRange back to
a Range. This happens e.g. when several bitops are chained, which is probably
not an uncommon pattern.

How much this matters depends on how the information is used. The current use
is range checking against the Smi range. For that check, the bigger loss of
information is not going from BitRange to Range, but going from Range to
BitRange. BitRange->Range loses information about 'interior' bits, but often
that does not affect whether the range fits in a Smi.
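To make the interior-bits point concrete, here is a sketch (the names and
mask representation are my own for illustration, not V8's actual types):
flattening known bits to a [min, max] range sets the unknown bits to 0 for
the minimum and to 1 for the maximum, and the Smi check only consults those
two extremes.

```javascript
// Sketch only; bitRangeToRange and fitsSmi are illustrative names, not V8's.
// Known bits are a pair of masks: 'ones' (bits known to be 1) and 'zeros'
// (bits known to be 0); a bit set in neither mask is unknown.
const kSmiMax = (1 << 30) - 1;  // 31-bit Smis on 32-bit V8

function bitRangeToRange(ones, zeros) {
  const unknown = ~(ones | zeros) >>> 0;  // bits that could go either way
  return [ones >>> 0, (ones | unknown) >>> 0];
}

function fitsSmi([min, max]) {
  return min >= -kSmiMax - 1 && max <= kSmiMax;
}

// Bit 4 known 1, bits 1 and 3 unknown, everything else known 0: the range
// is [16, 26]. The fact that bit 2 is fixed at 0 is lost in the conversion,
// but that interior bit cannot change whether the value fits in a Smi.
const range = bitRangeToRange(16, ~26 >>> 0);
```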


I think the right approach would be that the computed range contains both
extreme values *and* the known bits/bit values, so the necessary information
can be carried between several operations. Note that memory consumption for
the additional fields could be a problem, but we'll have to measure that
instead of doing wild guesses.

If you write such an analysis, you will want to know whether it provides
sufficient optimization opportunities to be worth the extra memory, CPU and
development effort. It is not clear to me how you could judge that the new
analysis is better without first improving the current range analysis to
overcome the weakness of the bitwise transfer functions.

If you were to move forward with my suggestion, you would be in a good
position to do 'good science'. With it, *all* of the range transfer functions
achieve copy propagation and constant folding, so the analysis would no
longer produce terrible results just because a variable is used like a named
constant. The current transfer function on

  var a = 6, b = 10, c = a|b, d = a&b, e = a^b;

estimates c = [0, 15], d = [0, 7], e = [-2^31,2^31-1] where it should really be
c = [14, 14], d = [2, 2], e = [12, 12].
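Carrying known bits through the bitwise ops makes these exact. A minimal
sketch of what I have in mind (names invented for illustration, not V8 code),
with each value tracked as a pair of masks, 'ones' and 'zeros':

```javascript
// Sketch with invented names. A bit set in neither mask is unknown.
function constantBits(value) {
  return { ones: value >>> 0, zeros: ~value >>> 0 };  // every bit known
}

function transferOr(a, b) {
  return {
    ones: (a.ones | b.ones) >>> 0,     // a known 1 on either side gives a 1
    zeros: (a.zeros & b.zeros) >>> 0,  // both sides must be known 0
  };
}

function transferAnd(a, b) {
  return {
    ones: (a.ones & b.ones) >>> 0,     // both sides must be known 1
    zeros: (a.zeros | b.zeros) >>> 0,  // a known 0 on either side gives a 0
  };
}

function transferXor(a, b) {
  return {
    ones: ((a.ones & b.zeros) | (a.zeros & b.ones)) >>> 0,   // known unequal
    zeros: ((a.ones & b.ones) | (a.zeros & b.zeros)) >>> 0,  // known equal
  };
}

// Derive [min, max]: unknown bits become 0 for the minimum, 1 for the maximum.
function toRange(kb) {
  const unknown = ~(kb.ones | kb.zeros) >>> 0;
  return [kb.ones >>> 0, (kb.ones | unknown) >>> 0];
}

// With a = 6 and b = 10 every bit is known, so the ranges collapse to the
// exact constants: c = [14, 14], d = [2, 2], e = [12, 12].
const a = constantBits(6), b = constantBits(10);
```

Since a constant has every bit known, chained bitops on constants fold
exactly, which is precisely the named-constant case above.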

My motivation is the hash function http://jsperf.com/hashcode-string-x/2
(b29), which has three unnecessary overflow checks.
These persist even if I do manual constant folding (we are using that version in
the core libraries for Dart).


A related, but separate problem would be a better handling of loops in the
range analysis, which we should probably handle first to see the real impact
of a better per-instruction range analysis.

Yes! Please do that first.
Would that be sufficient to remove the redundant bound check?

We still need a better XOR transfer function to eliminate the overflow check
at the first addition in the loop of the hash function, and at the first
addition after the loop.
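Even without full known-bits tracking, the XOR transfer function can do much
better than [-2^31, 2^31-1] for non-negative operands, because a ^ b can
never set a bit above the highest set bit of either operand. A sketch of
that bound (illustrative, not V8's actual code):

```javascript
// For a in [0, maxA] and b in [0, maxB], a ^ b lies in [0, bound], where
// bound is the smallest value of the form 2^k - 1 covering the larger
// maximum. Sketch only; the name is not V8's.
function xorUpperBound(maxA, maxB) {
  const m = Math.max(maxA, maxB);
  let bound = 1;
  while (bound <= m) bound *= 2;  // smallest power of two greater than m
  return bound - 1;
}
```

With a bound like this, an expression such as hash ^ (hash >>> 6) stays
provably small whenever hash does, which is the kind of fact the analysis
needs to discharge the overflow checks on the surrounding additions.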


So in a nutshell: I like the general idea of the CL, but things should
probably be done in a different place.


http://codereview.chromium.org/9156001/

--
v8-dev mailing list
[email protected]
http://groups.google.com/group/v8-dev
