On Tue, Feb 7, 2017 at 7:34 PM, Jeff Law <l...@redhat.com> wrote:
> On 02/07/2017 01:39 AM, Richard Biener wrote:
>>
>> On Mon, Feb 6, 2017 at 10:57 PM, Jeff Law <l...@redhat.com> wrote:
>>>
>>> On 02/06/2017 08:33 AM, Richard Biener wrote:
>>>
>>>> ah, indeed vr0type is VR_ANTI_RANGE and yes we have the case of a
>>>> range with an anti-range "inside".  This also covers [-1,1] v
>>>> ~[0,0] where you choose the much larger anti-range now.  So at
>>>> least we want to have some idea about the sizes of the ranges
>>>> (ideally we'd choose the smaller, though for most further
>>>> propagations anti-ranges often degenerate to varying...)
>>>
>>> vr0 as an anti-singleton range like ~[0,0] is the only one likely
>>> of any interest right now, and that's always going to have a range
>>> that is all but one value :-)
>>>
>>> vr1 is the tricky case.  We could do vr1.max - vr1.min and if that
>>> overflows or is some "large" value (say > 65536, just to throw out
>>> a value), then we conclude creating the singleton anti-range like
>>> ~[0,0] is more useful.
>>
>> Yes, but it's hard to tell.  As said, anti-ranges quickly degrade in
>> further propagation and I fear that without a better range
>> representation it's hard to do better in all cases here.
>
> All very true.  What I threw into testing overnight was the > 65536
> heuristic.  While I don't expect we're going to see any testing
> failures, I also suspect we don't have a lot of cases where wide
> ranges are all that useful, and even less testsuite coverage of those
> cases.
>
> One more targeted approach would be to *wipe* the old range when it
> is wide and we discover ~[0,0] as a new range.  We could do this just
> for EXACT_DIV_EXPR.  We'd do this much earlier than vrp_intersect so
> that we'd still have enough context to know we're dealing with
> EXACT_DIV_EXPR.  It's obviously much less likely to have side
> effects, but feels even more like a hack to me.
> *If* we come up with something useful, it might suggest a different
> approach to even discovering the ~[0,0] case for EXACT_DIV_EXPR.
> Part of the problem is we have a numerator of ~[0,0].  We convert
> that into [MININT,-1] U [1,MAXINT], then apply the denominator's
> range to both, generating [MININT/4,0] U [0,MAXINT/4], which we
> convert to [MININT/4,MAXINT/4].
>
> But the initial conversion of the numerator's range is imprecise for
> EXACT_DIV_EXPR.  If we mask out the low-order bits at conversion time
> we end up generating the (reasonable) [MININT/4,-1] U [1,MAXINT/4].
> Then we union those together into [MININT/4,MAXINT/4], but if we had
> a good heuristic we might be able to realize ~[0,0] is better.
>
>> The fact is we can't represent the result of the intersection and
>> thus we have to conservatively choose an approximation.
>
> Right.  There's no way a priori to absolutely know which range
> representation is better.  Though we have some sense that a wide
> range is not generally useful (for some definition of wide) and a
> singleton anti-range sometimes has value (because it can be used to
> propagate simple equivalences).  We could approximate the latter by
> looking at uses of the SSA_NAME and guess which of those would
> benefit from the propagation enabled by a singleton anti-range.  That
> sounds painful though.
>
>> Sometimes we have the other range on an SSA name and thus can use
>> equivalences (when coming from assert processing), but most of the
>> time not, and thus we can't use equivalences (which use SSA name
>> versions rather than an index into a ranges array - one possible
>> improvement to the range representation).
>
> Right.
>
>> Iff ~[0,0] is useful information, querying something for non-null
>> should also look at equivalences btw.
>
> ~[0,0] is primarily useful in pointer contexts -- null pointer check
> elimination and pointer arithmetic.  Outside those contexts I doubt
> it has all that much value.
> Sadly, I don't think we can rely on types either -- IIRC everything
> is cast to a generic integer type, so we can't key on ptrdiff_t.
Bah.  Add a nonnull_p flag to value_range ...

/me runs...

Honestly, the way to go is to ditch VR_ANTI_RANGE and make value_range be

// template <unsigned N>
struct GTY(()) value_range
{
  enum value_range_type type;

  /* The value range is the union of
     [ min[0], max[0] ], [ min[1], max[1] ], ...  */
  tree min[N];
  tree max[N];

  bitmap equiv;
};

or better, use a sub-struct for the individual sub-ranges (and maybe a
vec so that the actual N is variable and we just impose an upper limit
somewhere).  For the moment, fix N == 2.  Then we can represent
[MININT/4, -1] U [1, MAXINT/4] exactly.

Btw, there's always much talk about those mysterious VRP improvements
by Andrew.  I hope he doesn't mind if his work gets torn apart during
hypothetical review, which usually happens if you develop stuff
somewhere hidden in a dark corner...  And yes, I didn't want to
duplicate any work, and thus I refrained from doing the above kind of
refactorings at the end of stage1.

Richard.

> Jeff
>