https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110852

--- Comment #6 from Jan Hubicka <hubicka at ucw dot cz> ---
> which fixes the ICE by preferring PRED_BUILTIN_EXPECT* over others.
> At least in this case when one operand is a constant and another one is
> __builtin_expect* result that seems like the right choice to me, the fact that
> one operand is constant doesn't mean the outcome of the binary operation is
> unconditionally constant when the other operand is __builtin_expect* based.

The code attempts to solve a problem with no good solution.  If you have
two probabilities that a certain thing happens, their combination is
neither correct (since we do not know that the two probabilities are
independent) nor does the resulting value correspond to either of the
two predictors contributing to the outcome.

One exception is when one value is 100% sure, which is the case of
PRED_UNCONDITIONAL.  So I would say we could simply special-case 0% and
100% probability and pick the other predictor in that case.
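To illustrate the idea (a Python sketch, not GCC code; the function and
predictor names are made up for the example):

```python
# Hypothetical sketch of combining two expected-value predictions.
# When one prediction is 0% or 100% certain (e.g. PRED_UNCONDITIONAL on a
# constant operand), its predictor adds no information, so the combination
# can simply keep the other predictor instead of inventing a merged one.

def combine_predictions(p1, pred1, p2, pred2):
    """Return (probability, predictor) for the combined value."""
    if p1 in (0.0, 1.0):
        # First value is certain: keep the other operand's predictor.
        return p2, pred2
    if p2 in (0.0, 1.0):
        return p1, pred1
    # Neither is certain: there is no single right answer.
    return None, None

# A constant operand (100% sure) combined with a __builtin_expect result
# keeps the __builtin_expect predictor:
print(combine_predictions(1.0, "PRED_UNCONDITIONAL",
                          0.9, "PRED_BUILTIN_EXPECT"))
# -> (0.9, 'PRED_BUILTIN_EXPECT')
```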

> But reading the
> "This incorrectly takes precedence over more reliable heuristics predicting
> that call
> to cold noreturn is likely not going to happen."
> in the description makes me wonder whether that is what we want always 
> (though,
> I must say I don't understand the cold noreturn argument because
> PRED_COLD_FUNCTION is never returned from expr_expected_value_1.  But

Once expr_expected_value finishes its job, it attaches a predictor to an
edge, and later all predictions sitting on an edge are combined.  If you
end up declaring the prediction as BUILTIN_EXPECT_WITH_PROBABILITY, the
logic combining predictions will believe that the value is very reliable
and will ignore other predictors.  This is the PRED_FLAG_FIRST_MATCH mode.

So computing an uncertain probability and declaring it to be
BUILTIN_EXPECT_WITH_PROBABILITY is worse than declaring it to be a
predictor with the PRED_DS_THEORY merging mode
(which assumes that the value is determined by possibly unreliable
heuristics).

So I would go with special-casing the 0% and 100% predictors (which can
simply be stripped and forgotten).  For the rest we could probably
introduce PRED_COMBINED_VALUE, which would be like
BUILTIN_EXPECT_WITH_PROBABILITY but with the DS_THEORY merging mode.  It
is probably better than nothing, but certainly cannot be trusted anymore.
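To show the difference between the two merging modes (a hedged Python
sketch; the formula is the usual Dempster-Shafer combination of two
branch probabilities, simplified here to floats rather than the
fixed-point values GCC uses internally):

```python
# First-match trusts the single most reliable predictor outright;
# Dempster-Shafer combination treats each prediction as independent
# evidence and merges the probabilities.

def first_match(predictions):
    # predictions: list of (probability, reliability).  Pick the most
    # reliable predictor and use its probability unchanged.
    return max(predictions, key=lambda pr: pr[1])[0]

def ds_combine(p1, p2):
    # Dempster-Shafer combination of two branch probabilities.
    return (p1 * p2) / (p1 * p2 + (1.0 - p1) * (1.0 - p2))

# Two moderately confident 70% predictions reinforce each other...
print(round(ds_combine(0.7, 0.7), 3))   # -> 0.845
# ...while a 70% and a 30% prediction cancel out to 50%.
print(round(ds_combine(0.7, 0.3), 3))   # -> 0.5
```

Under first-match, a BUILTIN_EXPECT_WITH_PROBABILITY prediction simply
wins; under DS theory, every contributing prediction moves the result.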
> 
> Oh, another thing is that before the patch there were two spots that merged 
> the
> predictors, one for the binary operands (which previously picked the weaker
> predictor and now picks stronger predictor), but also in the PHI handling
> (which still picks weaker predictor); shouldn't that be changed to match,
> including the != -1 handling?

PHI is even more fishy, since we have no idea of the probability of
entering the basic block from a given edge.  We can probably ignore
basic blocks that already have a profile_count of 0.

Otherwise we may try to do a DS theory combination of the incoming
values.  I can cook up a patch.
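A rough sketch of what DS combination over a PHI node's incoming values
could look like (hypothetical Python, not a patch; the zero-count
filtering matches the suggestion above):

```python
def ds_combine(p1, p2):
    # Dempster-Shafer combination of two probabilities.
    return (p1 * p2) / (p1 * p2 + (1.0 - p1) * (1.0 - p2))

def combine_phi(incoming):
    # incoming: list of (probability, predecessor_profile_count).
    # Skip predecessors whose profile_count is already 0; their value
    # can never flow into the PHI.
    probs = [p for p, count in incoming if count > 0]
    if not probs:
        return None          # no live predecessor carries a prediction
    result = probs[0]
    for p in probs[1:]:
        result = ds_combine(result, p)
    return result

# The dead edge (count 0) is ignored in the combination:
print(combine_phi([(0.9, 100), (0.6, 50), (0.8, 0)]))
```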

(also fixing other profile related issue is very high on my TODO now)
