On Mon, Dec 5, 2016 at 9:54 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

>  The concern is that "scan every row" could be very expensive - though in
> writing this I'm thinking that you'd quickly find a non-match even in a
> large dataset - and so a less perfect but still valid solution is to simply
> discard the typemod if there is more than one row.

My folly here - and the actual question to ask - is this: if you are faced
with a large dataset that does have consistent typmods, is the benefit of
knowing what the typmod is and carrying it to the next layer worth the cost
of scanning every single row to prove it is consistent?
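To make the trade-off concrete, here is a minimal sketch (not PostgreSQL code; the function names are hypothetical, and -1 stands in for the "unspecified typmod" convention) contrasting the full-scan proof of consistency with the cheap "discard when more than one row" alternative:

```python
def resolve_typmod_full_scan(typmods):
    """O(n): scan every row and keep the typmod only if all rows agree.

    Proves consistency, but pays a full pass even when the data is
    consistent; returns -1 (unspecified) on any mismatch or empty input.
    """
    if not typmods:
        return -1
    first = typmods[0]
    for t in typmods[1:]:
        if t != first:
            return -1
    return first


def resolve_typmod_discard(typmods):
    """O(1): keep the typmod only when there is exactly one row.

    Less precise - a consistent multi-row dataset loses its typmod -
    but avoids scanning entirely.
    """
    return typmods[0] if len(typmods) == 1 else -1
```

Both agree for n = 1; for n > 1 the cheap version simply gives up the typmod rather than prove it, which is the behavior being voted on above.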

My vote would be no - and the only remaining question is whether the n = 1
and n > 1 behaviors should differ - to which I'd say no as well, at least
in master.  In the back branches the current behavior would be retained if
the n = 1 behavior is kept different from the n > 1 behavior, which is a
worthy compromise.

David J.
