David Rowley <david.row...@2ndquadrant.com> writes:
> It's certainly possible to do more here. I'm uncertain what needs to
> be done in regards to cached plan invalidation, but we'd certainly
> need to ensure cached plans are invalidated whenever the attnotnull is
> changed.
They already are; any change at all in a pg_class or pg_attribute row
results in a relcache inval for the associated table, cf
CacheInvalidateHeapTuple. As far as the invalidation machinery is
concerned, changing attnotnull is indistinguishable from changing
atttypid, which I'm sure you'll agree must force plan regeneration.

So this line of thought leads to the conclusion that we were too
conservative in not allowing unique constraints to be used along with
actual pkeys in remove_useless_groupby_columns. We do have the
additional requirement of checking attnotnull, but I think we can
assume that a plan inval will occur if that changes.

In fact, now that I think about it, I wonder why we're insisting on
tracking the constraint OID either. Don't index drops force relcache
invals regardless? If not, how do we ensure that an indexscan plan
relying on that index gets redone?

Part of this probably comes from looking at the parser's
check_ungrouped_columns(), but that is operating under different
constraints, since it is generating what might be a long-lived parse
tree, eg something that gets stored as a view. We do need to be able
to register an actual dependency on the uniqueness constraint in that
situation. But plans get tossed on the basis of relcache invals, so I
think they don't need it.

Anyway, that's clearly a future improvement rather than something I'd
insist on this patch tackling immediately. My gripe about this as it
stands is mainly that it seems like it's going to do a lot of
catalog-lookup work twice over, at least in cases that have both
DISTINCT and GROUP BY --- or is that too small a set to worry about?

			regards, tom lane
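For readers following along: the core idea being debated (extending
remove_useless_groupby_columns from primary keys to unique constraints,
provided every key column is attnotnull) can be sketched outside the
planner. This is a hypothetical Python illustration, not the actual
PostgreSQL C code; the function name and data shapes here are invented
for the example.

```python
def prune_groupby(groupby_cols, candidate_keys):
    """Drop GROUP BY columns that are functionally dependent on a key.

    groupby_cols: set of column names appearing in the GROUP BY clause.
    candidate_keys: list of (key_cols, all_not_null) pairs, one per
    pkey or unique index on the relation.
    """
    for key_cols, all_not_null in candidate_keys:
        # A plain UNIQUE constraint implies a functional dependency only
        # if no key column can be NULL: two NULLs don't conflict under
        # UNIQUE, so duplicate "keys" could otherwise exist. This is why
        # the attnotnull check matters.
        if all_not_null and key_cols <= groupby_cols:
            # The key determines every other column of this relation, so
            # grouping by the key columns alone gives the same groups.
            return set(key_cols)
    return set(groupby_cols)

# GROUP BY (id, a, b) with a unique NOT NULL column id reduces to (id).
print(prune_groupby({"id", "a", "b"}, [({"id"}, True)]))

# Without NOT NULL, the unique index proves nothing; keep all columns.
print(prune_groupby({"id", "a"}, [({"id"}, False)]))
```

The sketch also shows why no pg_depend entry is needed for a cached
plan: the pruning decision is recomputed at plan time, and (per the
discussion above) any change to the index or to attnotnull triggers a
relcache inval that throws the plan away.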