Pavel Stehule <pavel.steh...@gmail.com> writes:
> 2018-02-09 12:02 GMT+01:00 Marko Tiikkaja <ma...@joh.to>:
>> This is quite short-sighted. The better way to do this is to complain if
>> the number of expressions is different from the number of target variables
>> (and the target variable is not a record-ish type). There's been at least
>> two patches for this earlier (one by me, and one by, I think, Pavel
>> Stehule). I urge you to dig around in the archives to avoid wasting your
>> time.
> This issue can be detected by plpgsql_check, and there is a patch in the
> commitfest pipeline that raises a warning or error in this case.
I think the issue basically arises from this concern in exec_move_row:
* Row is a bit more complicated in that we assign the individual
* attributes of the tuple to the variables the row points to.
* NOTE: this code used to demand row->nfields ==
* HeapTupleHeaderGetNatts(tup->t_data), but that's wrong. The tuple
* might have more fields than we expected if it's from an
* inheritance-child table of the current table, or it might have fewer if
* the table has had columns added by ALTER TABLE. Ignore extra columns
* and assume NULL for missing columns, the same as heap_getattr would do.
* We also have to skip over dropped columns in either the source or
* destination.
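One of the cases that comment is protecting can be sketched like this (table
and trigger names are illustrative): after ALTER TABLE ... DROP COLUMN, an
on-disk tuple still physically carries a slot for the dropped attribute, so
when such a tuple (e.g. a trigger's NEW row) is assigned to row-style targets,
the code must skip the dropped slot rather than complain about a field-count
mismatch.

```sql
CREATE TABLE t (a int, b int, c int);
ALTER TABLE t DROP COLUMN b;

CREATE FUNCTION t_trig() RETURNS trigger AS $$
BEGIN
    -- NEW comes from a physical tuple that still has a slot for the
    -- dropped column b; the assignment code must skip over it.
    RAISE NOTICE 'a=%, c=%', NEW.a, NEW.c;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg BEFORE INSERT ON t
    FOR EACH ROW EXECUTE PROCEDURE t_trig();

INSERT INTO t VALUES (1, 3);
```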
As things stand today, we would have a hard time tightening that up
without producing unwanted complaints about the cases mentioned in
this comment, because the DTYPE_ROW logic is used for both "INTO a,b,c"
and composite-type variables. However, my pending patch at
gets rid of the use of DTYPE_ROW for composite types, and once that
is in it might well be reasonable to just throw a flat-out error for
wrong number of source values for a DTYPE_ROW target. I can't
immediately think of any good reason why you'd want to allow for
the number of INTO items not matching what the query produces.
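For contrast, the "record-ish type" exception Marko mentions would presumably
be unaffected: a record variable adapts to however many columns the query
produces, so only a plain target list (DTYPE_ROW) would be a candidate for a
strict count check. A hedged sketch:

```sql
DO $$
DECLARE
    r record;
BEGIN
    -- A record target takes whatever columns arrive; no count to enforce.
    SELECT 1 AS a, 2 AS b, 3 AS c INTO r;
    RAISE NOTICE 'c=%', r.c;
END;
$$;
```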
regards, tom lane