(2018/04/27 14:40), Ashutosh Bapat wrote:
Here's updated patch set.

Thanks for updating the patch! Here are my review comments on patch 0003-postgres_fdw-child-join-with-ConvertRowtypeExprs-cau.patch:

* This assertion in deparseConvertRowtypeExpr wouldn't always hold, because of cases like ALTER FOREIGN TABLE DROP COLUMN followed by ALTER FOREIGN TABLE ADD COLUMN:

+   Assert(parent_desc->natts == child_desc->natts);

Here is such an example:

postgres=# create table fprt1 (a int, b int, c varchar) partition by range (a);
postgres=# create table fprt1_p1 (like fprt1);
postgres=# create table fprt1_p2 (like fprt1);
postgres=# create foreign table ftprt1_p1 (a int, b int, c varchar) server loopback options (table_name 'fprt1_p1', use_remote_estimate 'true');
postgres=# alter foreign table ftprt1_p1 drop column c;
postgres=# alter foreign table ftprt1_p1 add column c varchar;
postgres=# alter table fprt1 attach partition ftprt1_p1 for values from (0) to (250);
postgres=# create foreign table ftprt1_p2 partition of fprt1 for values from (250) to (500) server loopback options (table_name 'fprt1_p2', use_remote_estimate 'true');
postgres=# insert into fprt1 select i, i, to_char(i/50, 'FM0000') from generate_series(0, 499, 2) i;
postgres=# analyze fprt1;
postgres=# analyze fprt1_p1;
postgres=# analyze fprt1_p2;
postgres=# create table fprt2 (a int, b int, c varchar) partition by range (b);
postgres=# create table fprt2_p1 (like fprt2);
postgres=# create table fprt2_p2 (like fprt2);
postgres=# create foreign table ftprt2_p1 partition of fprt2 for values from (0) to (250) server loopback options (table_name 'fprt2_p1', use_remote_estimate 'true');
postgres=# create foreign table ftprt2_p2 partition of fprt2 for values from (250) to (500) server loopback options (table_name 'fprt2_p2', use_remote_estimate 'true');
postgres=# insert into fprt2 select i, i, to_char(i/50, 'FM0000') from generate_series(0, 499, 3) i;
postgres=# analyze fprt2;
postgres=# analyze fprt2_p1;
postgres=# analyze fprt2_p2;
postgres=# set enable_partitionwise_join to true;
postgres=# select t1, t2 from fprt1 t1 join fprt2 t2 on (t1.a = t2.b) where t1.a % 25 = 0;
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

I think we should remove that assertion.  (After the DROP COLUMN/ADD COLUMN above, the foreign table's tuple descriptor has four attributes, counting the dropped one, while the parent's has three, so the two natts don't match.)

* But I don't think that change alone is enough.  Here is another oddity, seen with that assertion removed:

postgres=# alter table fprt1 detach partition ftprt1_p1;
postgres=# alter table fprt1 detach partition ftprt1_p2;
postgres=# alter table fprt1 drop column c;
postgres=# alter table fprt1 add column c varchar;
postgres=# alter table fprt1 attach partition ftprt1_p1 for values from (0) to (250);
postgres=# alter table fprt1 attach partition ftprt1_p2 for values from (250) to (500);
postgres=# set enable_partitionwise_join to true;
postgres=# select t1, t2 from fprt1 t1 join fprt2 t2 on (t1.a = t2.b) where t1.a % 25 = 0;
ERROR:  malformed record literal: "(300,300,"(300,300,0006)",0006)"
DETAIL:  Too many columns.
CONTEXT:  processing expression at position 1 in select list

The reason would be that the following code doesn't take into account columns dropped from the parent table (ie, columns for which attrMap[cnt] = 0):

+   /* Construct ROW expression according to the conversion map. */
+   appendStringInfoString(buf, "ROW(");
+   for (cnt = 0; cnt < natts; cnt++)
+   {
+       if (cnt > 0)
+           appendStringInfoString(buf, ", ");
+       deparseColumnRef(buf, child_var->varno, attrMap[cnt], root,
+                        qualify_col);
+   }
+   appendStringInfoChar(buf, ')');

To fix this, I think we should skip the deparseColumnRef processing for columns dropped from the parent table; see the sketch below.
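
Untested, but maybe something like this, reusing the variables from the hunk quoted above (note that the cnt > 0 test no longer works for placing the commas once columns can be skipped, hence the separate flag):

    /* Construct ROW expression according to the conversion map. */
    bool        first = true;

    appendStringInfoString(buf, "ROW(");
    for (cnt = 0; cnt < natts; cnt++)
    {
        /* Skip columns dropped from the parent table. */
        if (attrMap[cnt] == 0)
            continue;

        if (!first)
            appendStringInfoString(buf, ", ");
        first = false;

        deparseColumnRef(buf, child_var->varno, attrMap[cnt], root,
                         qualify_col);
    }
    appendStringInfoChar(buf, ')');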

* I think it would be better to run EXPLAIN with the VERBOSE option on the queries this patch added to the regression tests, to check that ConvertRowtypeExprs are correctly deparsed in the remote queries.  For example, with the tables from above:
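
postgres=# explain (verbose, costs off) select t1, t2 from fprt1 t1 join fprt2 t2 on (t1.a = t2.b) where t1.a % 25 = 0;

The Remote SQL shown for each foreign scan/join would make it easy to verify how the whole-row references get deparsed.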

That's all I have for now.

Best regards,
Etsuro Fujita
