On Wed, 2005-11-23 at 15:05 +0100, Michael Matz wrote:
> > Spill Cost Engine [page(s) 26-29]:
> > * The register allocator should not be estimating the execution
> > frequency of a basic block as 10^nesting level. That information
> > should be coming from the cfg which comes from profile data or
> > from a good static profile. The problem with 10^loop nesting
> > level is that we can overestimate the spill costs for some
> > pseudos. For example:
> > while (...) {
> > <use of "a">
> > if (...)
> > <use of "b">
> > else
> > <use of "b">
> > }
> > In the code above, "b"'s spill cost will be twice that of "a",
> > when they really should have the same spill cost.
>
> Nearly. "b" _is_ more costly to spill, code size wise. All else being
> equal it's better to spill "a" in this case. But the cost is of course
> not twice as large, as you say. I.e. I agree with you that the metric
> should be based exclusively on the BB frequencies attached to the CFG, not
> any nesting level. Also like in new-ra ;)
The spill cost for a pseudo in a classic Chaitin/Briggs allocator does
not take the number of spill instructions inserted into account, so
"b"'s spill cost would be twice that of "a" if we were to use
10^nesting level. That said, I think we're all in agreement that using
basic block frequencies from the CFG is the correct thing to do, and
that taking static spill instruction counts into account is a good
idea, which Andrew's proposal does by using them as a tie breaker.
I assume it goes without saying that when using -Os, the
frequency-based spill cost will be used as the tie breaker when two
pseudos have the same static spill instruction count.
Peter