Re: Column Filtering in Logical Replication

2022-09-07 Thread Peter Smith
On Wed, Sep 7, 2022 at 8:49 PM Amit Kapila  wrote:
>
> On Tue, Sep 6, 2022 at 2:15 PM Amit Kapila  wrote:
> >
> > On Tue, Sep 6, 2022 at 5:08 AM Peter Smith  wrote:
> > >
> > > You are right - that REFRESH PUBLICATION was not necessary for this
> > > example. The patch is modified to use your suggested text.
> > >
> > > PSA v8
> > >
> >
> > LGTM. I'll push this once the tag appears for v15.
> >
>
> Pushed!

Thanks for pushing.

--
Kind Regards,
Peter Smith.
Fujitsu Australia.




Re: Column Filtering in Logical Replication

2022-09-07 Thread Amit Kapila
On Tue, Sep 6, 2022 at 2:15 PM Amit Kapila  wrote:
>
> On Tue, Sep 6, 2022 at 5:08 AM Peter Smith  wrote:
> >
> > You are right - that REFRESH PUBLICATION was not necessary for this
> > example. The patch is modified to use your suggested text.
> >
> > PSA v8
> >
>
> LGTM. I'll push this once the tag appears for v15.
>

Pushed!

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-09-06 Thread Amit Kapila
On Tue, Sep 6, 2022 at 5:08 AM Peter Smith  wrote:
>
> You are right - that REFRESH PUBLICATION was not necessary for this
> example. The patch is modified to use your suggested text.
>
> PSA v8
>

LGTM. I'll push this once the tag appears for v15.

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-09-05 Thread Peter Smith
On Mon, Sep 5, 2022 at 8:46 PM Amit Kapila  wrote:
>
> On Mon, Sep 5, 2022 at 3:46 PM Peter Smith  wrote:
> >
> >
> > PSA v7.
> >
>
> For example, if additional columns are added to the table, then
> +(after a REFRESH PUBLICATION) if there was a column 
> list
> +only those named columns will continue to be replicated.
>
> This looks a bit unclear to me w.r.t. the refresh publication step. Why
> exactly have you used REFRESH PUBLICATION in the above para? It is
> used to add new tables if any are added to the publication, so it is not
> clear to me how it helps in this case. If that is not required then we can
> change it to: "For example, if additional columns are added to the
> table then only those named columns mentioned in the column list will
> continue to be replicated."
>

You are right - that REFRESH PUBLICATION was not necessary for this
example. The patch is modified to use your suggested text.

PSA v8

--
Kind Regards,
Peter Smith.
Fujitsu Australia


v8-0001-Column-List-new-pgdocs-section.patch
Description: Binary data


Re: Column Filtering in Logical Replication

2022-09-05 Thread Amit Kapila
On Mon, Sep 5, 2022 at 3:46 PM Peter Smith  wrote:
>
>
> PSA v7.
>

For example, if additional columns are added to the table, then
+(after a REFRESH PUBLICATION) if there was a column list
+only those named columns will continue to be replicated.

This looks a bit unclear to me w.r.t. the refresh publication step. Why
exactly have you used REFRESH PUBLICATION in the above para? It is
used to add new tables if any are added to the publication, so it is not
clear to me how it helps in this case. If that is not required then we can
change it to: "For example, if additional columns are added to the
table then only those named columns mentioned in the column list will
continue to be replicated."
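
For reference, the suggested wording matches behavior like the following
(hypothetical session; table and publication names are illustrative):

test_pub=# CREATE TABLE t1(id int PRIMARY KEY, a text, b text);
CREATE TABLE
test_pub=# CREATE PUBLICATION p1 FOR TABLE t1 (id, a);
CREATE PUBLICATION
test_pub=# ALTER TABLE t1 ADD COLUMN c text;
ALTER TABLE
-- Only "id" and "a" continue to be replicated; the newly added column
-- "c" is not, because it is not named in the column list. No REFRESH
-- PUBLICATION is involved.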

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-09-05 Thread Peter Smith
On Mon, Sep 5, 2022 at 1:42 PM shiy.f...@fujitsu.com
 wrote:
>
> On Mon, Sep 5, 2022 8:28 AM Peter Smith  wrote:
> >
> > I have rebased the remaining patch (v6-0001 is the same as v5-0002)
> >
>
> Thanks for updating the patch. Here are some comments.
>
> 1.
> + the  will be successful but later
> + the WalSender on the publisher, or the subscriber may throw an error. In
> + this scenario, the user needs to recreate the subscription after 
> adjusting
>
> Should "WalSender" be changed to "walsender"? I saw "walsender" is used in 
> other
> places in the documentation.

Modified.

>
> 2.
> +test_pub=# CREATE TABLE t1(id int, a text, b text, c text, d text, e text, 
> PRIMARY KEY(id));
> +CREATE TABLE
> +test_pub=#
>
> +test_pub=# CREATE PUBLICATION p1 FOR TABLE t1 (id, b, a, d);
> +CREATE PUBLICATION
> +test_pub=#
>
> I think the redundant "test_pub=#" can be removed.
>

Modified.

>
> Besides, I tested the examples in the patch; there's no problem.
>

Thanks for the review comments, and testing.

I made both fixes as suggested.

PSA v7.

--
Kind Regards,
Peter Smith.
Fujitsu Australia


v7-0001-Column-List-new-pgdocs-section.patch
Description: Binary data


RE: Column Filtering in Logical Replication

2022-09-04 Thread shiy.f...@fujitsu.com
On Mon, Sep 5, 2022 8:28 AM Peter Smith  wrote:
> 
> I have rebased the remaining patch (v6-0001 is the same as v5-0002)
> 

Thanks for updating the patch. Here are some comments.

1.
+ the  will be successful but later
+ the WalSender on the publisher, or the subscriber may throw an error. In
+ this scenario, the user needs to recreate the subscription after adjusting

Should "WalSender" be changed to "walsender"? I saw "walsender" is used in other
places in the documentation.

2.
+test_pub=# CREATE TABLE t1(id int, a text, b text, c text, d text, e text, 
PRIMARY KEY(id));
+CREATE TABLE
+test_pub=#

+test_pub=# CREATE PUBLICATION p1 FOR TABLE t1 (id, b, a, d);
+CREATE PUBLICATION
+test_pub=#

I think the redundant "test_pub=#" can be removed.


Besides, I tested the examples in the patch; there's no problem.

Regards,
Shi yu


Re: Column Filtering in Logical Replication

2022-09-04 Thread Peter Smith
On Fri, Sep 2, 2022 at 11:40 PM Amit Kapila  wrote:
>
> On Fri, Sep 2, 2022 at 8:45 AM Peter Smith  wrote:
> >
> > On Thu, Sep 1, 2022 at 7:53 PM Amit Kapila  wrote:
> > >
> > > On Fri, Aug 26, 2022 at 7:33 AM Peter Smith  wrote:
> > > >
> > >
> > > Few comments on both the patches:
> > > v4-0001*
> > > =
> > > 1.
> > > Furthermore, if the table uses
> > > +   REPLICA IDENTITY FULL, specifying a column list is 
> > > not
> > > +   allowed (it will cause publication errors for 
> > > UPDATE or
> > > +   DELETE operations).
> > >
> > > This line sounds a bit unclear to me. From this line it appears that
> > > the following operation is not allowed:
> > >
> > > postgres=# create table t1(c1 int, c2 int, c3 int);
> > > CREATE TABLE
> > > postgres=# Alter Table t1 replica identity full;
> > > ALTER TABLE
> > > postgres=# create publication pub1 for table t1(c1);
> > > CREATE PUBLICATION
> > >
> > > However, what is not allowed is the following:
> > > postgres=# delete from t1;
> > > ERROR:  cannot delete from table "t1"
> > > DETAIL:  Column list used by the publication does not cover the
> > > replica identity.
> > >
> > > I am not sure if we really need this line but if so then please try to
> > > make it more clear. I think the similar text is present in 0002 patch
> > > which should be modified accordingly.
> > >
> >
> > The "Furthermore…" sentence came from the commit message [1]. But I
> > agree it seems redundant/ambiguous, so I have removed it (and removed
> > the same in patch 0002).
> >
>
> Thanks, pushed your first patch.
>

Thanks for the push.

I have rebased the remaining patch (v6-0001 is the same as v5-0002)

--
Kind Regards,
Peter Smith.
Fujitsu Australia


v6-0001-Column-List-new-pgdocs-section.patch
Description: Binary data


Re: Column Filtering in Logical Replication

2022-09-02 Thread Amit Kapila
On Fri, Sep 2, 2022 at 8:45 AM Peter Smith  wrote:
>
> On Thu, Sep 1, 2022 at 7:53 PM Amit Kapila  wrote:
> >
> > On Fri, Aug 26, 2022 at 7:33 AM Peter Smith  wrote:
> > >
> >
> > Few comments on both the patches:
> > v4-0001*
> > =
> > 1.
> > Furthermore, if the table uses
> > +   REPLICA IDENTITY FULL, specifying a column list is 
> > not
> > +   allowed (it will cause publication errors for UPDATE 
> > or
> > +   DELETE operations).
> >
> > This line sounds a bit unclear to me. From this line it appears that
> > the following operation is not allowed:
> >
> > postgres=# create table t1(c1 int, c2 int, c3 int);
> > CREATE TABLE
> > postgres=# Alter Table t1 replica identity full;
> > ALTER TABLE
> > postgres=# create publication pub1 for table t1(c1);
> > CREATE PUBLICATION
> >
> > However, what is not allowed is the following:
> > postgres=# delete from t1;
> > ERROR:  cannot delete from table "t1"
> > DETAIL:  Column list used by the publication does not cover the
> > replica identity.
> >
> > I am not sure if we really need this line but if so then please try to
> > make it more clear. I think the similar text is present in 0002 patch
> > which should be modified accordingly.
> >
>
> The "Furthermore…" sentence came from the commit message [1]. But I
> agree it seems redundant/ambiguous, so I have removed it (and removed
> the same in patch 0002).
>

Thanks, pushed your first patch.

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-09-01 Thread Peter Smith
On Thu, Sep 1, 2022 at 7:53 PM Amit Kapila  wrote:
>
> On Fri, Aug 26, 2022 at 7:33 AM Peter Smith  wrote:
> >
>
> Few comments on both the patches:
> v4-0001*
> =
> 1.
> Furthermore, if the table uses
> +   REPLICA IDENTITY FULL, specifying a column list is not
> +   allowed (it will cause publication errors for UPDATE or
> +   DELETE operations).
>
> This line sounds a bit unclear to me. From this line it appears that
> the following operation is not allowed:
>
> postgres=# create table t1(c1 int, c2 int, c3 int);
> CREATE TABLE
> postgres=# Alter Table t1 replica identity full;
> ALTER TABLE
> postgres=# create publication pub1 for table t1(c1);
> CREATE PUBLICATION
>
> However, what is not allowed is the following:
> postgres=# delete from t1;
> ERROR:  cannot delete from table "t1"
> DETAIL:  Column list used by the publication does not cover the
> replica identity.
>
> I am not sure if we really need this line but if so then please try to
> make it more clear. I think the similar text is present in 0002 patch
> which should be modified accordingly.
>

The "Furthermore…" sentence came from the commit message [1]. But I
agree it seems redundant/ambiguous, so I have removed it (and removed
the same in patch 0002).


> V4-0002*
> =
> 2.
> However, if a
> +   column list is specified then only the columns 
> named
> +   in the list will be replicated. This means the subscriber-side table only
> +   needs to have those columns named by the column list. A user might choose 
> to
> +   use column lists for behavioral, security or performance reasons.
> +  
> +
> +  
> +   Column List Rules
> +
> +   
> +A column list is specified per table following the table name, and 
> enclosed
> +by parentheses. See  for details.
> +   
> +
> +   
> +When a column list is specified, only the named columns are replicated.
> +The list order is not important.
>
> It seems like "When a column list is specified, only the named columns
> are replicated." is almost a duplicate of the line in the first para.
> So, I think we can remove it. And if we do so then the second line
> could be changed to something like: "While specifying a column list, the
> order of columns is not important."
>

Modified as suggested.

> 3. It seems information about initial table synchronization is
> missing. We copy only columns specified in the column list. Also, it
> would be good to add a Note similar to Row Filter to indicate that
> this list won't be used by pre-15 publishers.
>

Done as suggested. Added a new "Initial Data Synchronization" section
with content similar to that of the Row Filters section.

~~~

Thanks for your review comments.

PSA v5* patches where all the above have been addressed.

--
[1] 
https://github.com/postgres/postgres/commit/923def9a533a7d986acfb524139d8b9e5466d0a5

Kind Regards,
Peter Smith.
Fujitsu Australia


v5-0002-Column-List-new-pgdocs-section.patch
Description: Binary data


v5-0001-Column-List-replica-identity-rules.patch
Description: Binary data


Re: Column Filtering in Logical Replication

2022-09-01 Thread Amit Kapila
On Fri, Aug 26, 2022 at 7:33 AM Peter Smith  wrote:
>

Few comments on both the patches:
v4-0001*
=
1.
Furthermore, if the table uses
+   REPLICA IDENTITY FULL, specifying a column list is not
+   allowed (it will cause publication errors for UPDATE or
+   DELETE operations).

This line sounds a bit unclear to me. From this line it appears that
the following operation is not allowed:

postgres=# create table t1(c1 int, c2 int, c3 int);
CREATE TABLE
postgres=# Alter Table t1 replica identity full;
ALTER TABLE
postgres=# create publication pub1 for table t1(c1);
CREATE PUBLICATION

However, what is not allowed is the following:
postgres=# delete from t1;
ERROR:  cannot delete from table "t1"
DETAIL:  Column list used by the publication does not cover the
replica identity.

I am not sure if we really need this line but if so then please try to
make it more clear. I think the similar text is present in 0002 patch
which should be modified accordingly.

V4-0002*
=
2.
However, if a
+   column list is specified then only the columns named
+   in the list will be replicated. This means the subscriber-side table only
+   needs to have those columns named by the column list. A user might choose to
+   use column lists for behavioral, security or performance reasons.
+  
+
+  
+   Column List Rules
+
+   
+A column list is specified per table following the table name, and enclosed
+by parentheses. See  for details.
+   
+
+   
+When a column list is specified, only the named columns are replicated.
+The list order is not important.

It seems like "When a column list is specified, only the named columns
are replicated." is almost a duplicate of the line in the first para.
So, I think we can remove it. And if we do so then the second line
could be changed to something like: "While specifying a column list, the
order of columns is not important."

3. It seems information about initial table synchronization is
missing. We copy only columns specified in the column list. Also, it
would be good to add a Note similar to Row Filter to indicate that
this list won't be used by pre-15 publishers.
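
Sketched, the initial synchronization behavior described above might look
like this (hypothetical session; the subscriber table is assumed to contain
only the listed columns):

test_pub=# CREATE TABLE t1(id int PRIMARY KEY, a text, b text);
CREATE TABLE
test_pub=# CREATE PUBLICATION p1 FOR TABLE t1 (id, a);
CREATE PUBLICATION
test_pub=# INSERT INTO t1 VALUES (1, 'a-1', 'b-1');
INSERT 0 1
-- After CREATE SUBSCRIPTION on the subscriber, the initial table copy
-- contains only the columns named in the column list:
test_sub=# SELECT * FROM t1;
 id |  a
----+-----
  1 | a-1
(1 row)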

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-08-25 Thread Peter Smith
On Thu, Aug 25, 2022 at 7:38 PM vignesh C  wrote:
>
...
> > PSA the v3* patch set.
>
> Thanks for the updated patch.
> Few comments:
> 1) We can shuffle the columns in the publisher and subscriber tables to
> show that the order of the columns does not matter
> +   
> +Create a publication p1. A column list is defined for
> +table t1 to reduce the number of columns that will be
> +replicated.
> +
> +test_pub=# CREATE PUBLICATION p1 FOR TABLE t1 (id, a, b, c);
> +CREATE PUBLICATION
> +test_pub=#
> +
>

OK. I made the following changes to the example.
- now the subscriber table defines cols in a different order than that
of the publisher table
- now the publisher column list defines col names in a different order
than that of the table
- now the column list avoids using only adjacent column names
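
Sketched, the reworked example looks roughly like this (the subscriber-side
definition is illustrative):

test_pub=# CREATE TABLE t1(id int, a text, b text, c text, d text, e text, PRIMARY KEY(id));
CREATE TABLE
test_pub=# CREATE PUBLICATION p1 FOR TABLE t1 (id, b, a, d);
CREATE PUBLICATION
-- Subscriber-side table: columns in a different order than the publisher,
-- and only the columns named in the column list are needed.
test_sub=# CREATE TABLE t1(id int, b text, a text, d text, PRIMARY KEY(id));
CREATE TABLE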

> 2) We can try to keep the line content to less than 80 chars
> +   
> +A column list is specified per table following the tablename, and
> enclosed by
> +parenthesis. See  for details.
> +   
>

OK. Modified to use < 80 chars

> 3) tablename should be table name like it is used in other places
> +   
> +A column list is specified per table following the tablename, and
> enclosed by
> +parenthesis. See  for details.
> +   
>

Sorry, I don't see this problem. AFAIK this same issue was already
fixed in the v3* patches.  Notice in the cited fragment that
'parenthesis' is misspelt but that was also fixed in v3. Maybe you are
looking at an old patch file (??)

> 4a) In the below, you could include mentioning "Only the column list
> data of publication p1 are replicated."
> +
> + Insert some rows to table t1.
> +
> +test_pub=# INSERT INTO t1 VALUES(1, 'a-1', 'b-1', 'c-1', 'd-1', 'e-1');
> +INSERT 0 1
>

OK. Modified to say this.

> 4b) In the above, we could mention that the insert should be done on
> the "publisher side" as the previous statements are executed on the
> subscriber side.

OK. Modified to say this.

~~~

Thanks for the feedback.

PSA patch set v4* where all of the above comments are now addressed.

--
Kind Regards,
Peter Smith.
Fujitsu Australia


v4-0001-Column-List-replica-identity-rules.patch
Description: Binary data


v4-0002-Column-List-new-pgdocs-section.patch
Description: Binary data


Re: Column Filtering in Logical Replication

2022-08-25 Thread vignesh C
On Tue, Aug 23, 2022 at 7:52 AM Peter Smith  wrote:
>
> On Mon, Aug 22, 2022 at 9:25 PM vignesh C  wrote:
> >
> ...
>
> > Few comments:
> > 1) I felt no expressions are allowed in the case of column filters; only
> > column names can be specified. The second part of the sentence
> > confuses what is allowed with what is not allowed. Wouldn't it be better
> > to remove the second sentence and mention that only column names can
> > be specified?
> > +   
> > +Column list can contain only simple column references. Complex
> > +expressions, function calls etc. are not allowed.
> > +   
> >
>
> This wording was lifted verbatim from the commit message [1]. But I
> see your point that it just seems to be overcomplicating a simple
> rule. Modified as suggested.
>
> > 2) tablename should be table name.
> > +   
> > +A column list is specified per table following the tablename, and
> > enclosed by
> > +parenthesis. See  for details.
> > +   
> >
> > We have used table name in the same page in other instances like:
> > a) The row filter is defined per table. Use a WHERE clause after the
> > table name for each published table that requires data to be filtered
> > out. The WHERE clause must be enclosed by parentheses.
> > b) The tables are matched between the publisher and the subscriber
> > using the fully qualified table name.
> >
>
> Fixed as suggested.
>
> > 3) One small whitespace issue:
> > git am v2-0001-Column-List-replica-identity-rules.patch
> > Applying: Column List replica identity rules.
> > .git/rebase-apply/patch:30: trailing whitespace.
> >if the publication publishes only INSERT operations.
> > warning: 1 line adds whitespace errors.
> >
>
> Fixed.
>
> ~~~
>
> PSA the v3* patch set.

Thanks for the updated patch.
Few comments:
1) We can shuffle the columns in the publisher and subscriber tables to
show that the order of the columns does not matter
+   
+Create a publication p1. A column list is defined for
+table t1 to reduce the number of columns that will be
+replicated.
+
+test_pub=# CREATE PUBLICATION p1 FOR TABLE t1 (id, a, b, c);
+CREATE PUBLICATION
+test_pub=#
+

2) We can try to keep the line content to less than 80 chars
+   
+A column list is specified per table following the tablename, and
enclosed by
+parenthesis. See  for details.
+   

3) tablename should be table name like it is used in other places
+   
+A column list is specified per table following the tablename, and
enclosed by
+parenthesis. See  for details.
+   

4a) In the below, you could include mentioning "Only the column list
data of publication p1 are replicated."
+
+ Insert some rows to table t1.
+
+test_pub=# INSERT INTO t1 VALUES(1, 'a-1', 'b-1', 'c-1', 'd-1', 'e-1');
+INSERT 0 1

4b) In the above, we could mention that the insert should be done on
the "publisher side" as the previous statements are executed on the
subscriber side.

Regards,
Vignesh




Re: Column Filtering in Logical Replication

2022-08-22 Thread Peter Smith
On Mon, Aug 22, 2022 at 7:11 PM Erik Rijkers  wrote:
>
> Op 22-08-2022 om 10:27 schreef Peter Smith:
> >
> > PSA new set of v2* patches.
>
> Hi,
>
> In the second file a small typo, I think:
>
> "enclosed by parenthesis"  should be
> "enclosed by parentheses"
>

Thanks for your feedback.

Fixed in the v3* patches [1].

--
[1] 
https://www.postgresql.org/message-id/CAHut%2BPtHgQbFs9DDeOoqqQLZmMBD8FQPK2WOXJpR1nyDQy8AGA%40mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia




Re: Column Filtering in Logical Replication

2022-08-22 Thread Peter Smith
On Mon, Aug 22, 2022 at 9:25 PM vignesh C  wrote:
>
...

> Few comments:
> 1) I felt no expressions are allowed in the case of column filters; only
> column names can be specified. The second part of the sentence
> confuses what is allowed with what is not allowed. Wouldn't it be better
> to remove the second sentence and mention that only column names can
> be specified?
> +   
> +Column list can contain only simple column references. Complex
> +expressions, function calls etc. are not allowed.
> +   
>

This wording was lifted verbatim from the commit message [1]. But I
see your point that it just seems to be overcomplicating a simple
rule. Modified as suggested.

> 2) tablename should be table name.
> +   
> +A column list is specified per table following the tablename, and
> enclosed by
> +parenthesis. See  for details.
> +   
>
> We have used table name in the same page in other instances like:
> a) The row filter is defined per table. Use a WHERE clause after the
> table name for each published table that requires data to be filtered
> out. The WHERE clause must be enclosed by parentheses.
> b) The tables are matched between the publisher and the subscriber
> using the fully qualified table name.
>

Fixed as suggested.

> 3) One small whitespace issue:
> git am v2-0001-Column-List-replica-identity-rules.patch
> Applying: Column List replica identity rules.
> .git/rebase-apply/patch:30: trailing whitespace.
>if the publication publishes only INSERT operations.
> warning: 1 line adds whitespace errors.
>

Fixed.

~~~

PSA the v3* patch set.

--
[1] 
https://github.com/postgres/postgres/commit/923def9a533a7d986acfb524139d8b9e5466d0a5

Kind Regards,
Peter Smith.
Fujitsu Australia


v3-0001-Column-List-replica-identity-rules.patch
Description: Binary data


v3-0002-Column-Lists-new-pgdocs-section.patch
Description: Binary data


Re: Column Filtering in Logical Replication

2022-08-22 Thread vignesh C
On Mon, Aug 22, 2022 at 1:58 PM Peter Smith  wrote:
>
> Thanks for the view of v1-0001.
>
> On Wed, Aug 17, 2022 at 3:04 AM vignesh C  wrote:
> ...
> > 1) Row filters mention that "It has no effect on TRUNCATE commands.";
> > the same is not present in the case of column filters. We should keep
> > the changes similar for consistency.
> > --- a/doc/src/sgml/ref/create_publication.sgml
> > +++ b/doc/src/sgml/ref/create_publication.sgml
> > @@ -90,8 +90,7 @@ CREATE PUBLICATION  > class="parameter">name
> >   
> >When a column list is specified, only the named columns are 
> > replicated.
> >If no column list is specified, all columns of the table are 
> > replicated
> > -  through this publication, including any columns added later.  If a 
> > column
> > -  list is specified, it must include the replica identity columns.
> > +  through this publication, including any columns added later.
>
> Modified as suggested.
>
> >
> > 2) The document says that "if the table uses REPLICA IDENTITY FULL,
> > specifying a column list is not allowed.":
> > +   publishes only INSERT operations. Furthermore, if the
> > +   table uses REPLICA IDENTITY FULL, specifying a column
> > +   list is not allowed.
> > +  
> >
> > Did you mean specifying a column list during CREATE PUBLICATION for a
> > REPLICA IDENTITY FULL table, like the scenario below:
> > postgres=# create table t2(c1 int, c2 int, c3 int);
> > CREATE TABLE
> > postgres=# alter table t2 replica identity full ;
> > ALTER TABLE
> > postgres=# create publication pub1 for table t2(c1,c2);
> > CREATE PUBLICATION
> >
> > If so, the document says specifying a column list is not allowed, but
> > creating a publication with a column list on a replica identity full
> > table was successful.
>
> That patch v1-0001 was using the same wording from the github commit
> message [1]. I agree it was a bit vague.
>
> In fact, the replica identity validation is done at DML execution time,
> so your example will fail as expected when you attempt to do an UPDATE
> operation.
>
> e.g.
> test_pub=# update t2 set c2=23 where c1=1;
> ERROR:  cannot update table "t2"
> DETAIL:  Column list used by the publication does not cover the
> replica identity.
>
> I modified the wording for this part of the docs.

Few comments:
1) I felt no expressions are allowed in the case of column filters; only
column names can be specified. The second part of the sentence
confuses what is allowed with what is not allowed. Wouldn't it be better
to remove the second sentence and mention that only column names can
be specified?
+   
+Column list can contain only simple column references. Complex
+expressions, function calls etc. are not allowed.
+   

2) tablename should be table name.
+   
+A column list is specified per table following the tablename, and
enclosed by
+parenthesis. See  for details.
+   

We have used table name in the same page in other instances like:
a) The row filter is defined per table. Use a WHERE clause after the
table name for each published table that requires data to be filtered
out. The WHERE clause must be enclosed by parentheses.
b) The tables are matched between the publisher and the subscriber
using the fully qualified table name.

3) One small whitespace issue:
git am v2-0001-Column-List-replica-identity-rules.patch
Applying: Column List replica identity rules.
.git/rebase-apply/patch:30: trailing whitespace.
   if the publication publishes only INSERT operations.
warning: 1 line adds whitespace errors.

Regards,
Vignesh




Re: Column Filtering in Logical Replication

2022-08-22 Thread Erik Rijkers

Op 22-08-2022 om 10:27 schreef Peter Smith:


PSA new set of v2* patches.


Hi,

In the second file a small typo, I think:

"enclosed by parenthesis"  should be
"enclosed by parentheses"

thanks,
Erik





Re: Column Filtering in Logical Replication

2022-08-22 Thread Peter Smith
Thanks for the view of v1-0001.

On Wed, Aug 17, 2022 at 3:04 AM vignesh C  wrote:
...
> 1) Row filters mention that "It has no effect on TRUNCATE commands.";
> the same is not present in the case of column filters. We should keep
> the changes similar for consistency.
> --- a/doc/src/sgml/ref/create_publication.sgml
> +++ b/doc/src/sgml/ref/create_publication.sgml
> @@ -90,8 +90,7 @@ CREATE PUBLICATION  class="parameter">name
>   
>When a column list is specified, only the named columns are replicated.
>If no column list is specified, all columns of the table are replicated
> -  through this publication, including any columns added later.  If a 
> column
> -  list is specified, it must include the replica identity columns.
> +  through this publication, including any columns added later.

Modified as suggested.

>
> 2) The document says that "if the table uses REPLICA IDENTITY FULL,
> specifying a column list is not allowed.":
> +   publishes only INSERT operations. Furthermore, if the
> +   table uses REPLICA IDENTITY FULL, specifying a column
> +   list is not allowed.
> +  
>
> Did you mean specifying a column list during CREATE PUBLICATION for a
> REPLICA IDENTITY FULL table, like the scenario below:
> postgres=# create table t2(c1 int, c2 int, c3 int);
> CREATE TABLE
> postgres=# alter table t2 replica identity full ;
> ALTER TABLE
> postgres=# create publication pub1 for table t2(c1,c2);
> CREATE PUBLICATION
>
> If so, the document says specifying column list is not allowed, but
> creating a publication with column list on replica identity full was
> successful.

That patch v1-0001 was using the same wording from the github commit
message [1]. I agree it was a bit vague.

In fact, the replica identity validation is done at DML execution time,
so your example will fail as expected when you attempt to do an UPDATE
operation.

e.g.
test_pub=# update t2 set c2=23 where c1=1;
ERROR:  cannot update table "t2"
DETAIL:  Column list used by the publication does not cover the
replica identity.

I modified the wording for this part of the docs.
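
As an aside, which columns a publication actually covers can be checked on
the publisher; in PG15 the pg_publication_tables view exposes an attnames
column (output shape illustrative):

test_pub=# SELECT pubname, tablename, attnames
test_pub-# FROM pg_publication_tables WHERE tablename = 't2';
 pubname | tablename | attnames
---------+-----------+----------
 pub1    | t2        | {c1,c2}
(1 row)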

~~~

PSA new set of v2* patches.

--
[1] - 
https://github.com/postgres/postgres/commit/923def9a533a7d986acfb524139d8b9e5466d0a5

Kind Regards,
Peter Smith
Fujitsu Australia


v2-0001-Column-List-replica-identity-rules.patch
Description: Binary data


v2-0002-Column-Lists-new-pgdocs-section.patch
Description: Binary data


Re: Column Filtering in Logical Replication

2022-08-16 Thread vignesh C
On Mon, Aug 8, 2022 at 2:08 PM Peter Smith  wrote:
>
> PSA patch version v1* for a new "Column Lists" pgdocs section
>
> This is just a first draft, but I wanted to post it as-is, with the
> hope that I can get some feedback while continuing to work on it.

Few comments:
1) Row filters mention that "It has no effect on TRUNCATE commands.";
the same is not present in the case of column filters. We should keep
the changes similar for consistency.
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -90,8 +90,7 @@ CREATE PUBLICATION name
  
   When a column list is specified, only the named columns are replicated.
   If no column list is specified, all columns of the table are replicated
-  through this publication, including any columns added later.  If a column
-  list is specified, it must include the replica identity columns.
+  through this publication, including any columns added later.

2) The document says that "if the table uses REPLICA IDENTITY FULL,
specifying a column list is not allowed.":
+   publishes only INSERT operations. Furthermore, if the
+   table uses REPLICA IDENTITY FULL, specifying a column
+   list is not allowed.
+  

Did you mean specifying a column list during CREATE PUBLICATION for a
REPLICA IDENTITY FULL table, like the scenario below:
postgres=# create table t2(c1 int, c2 int, c3 int);
CREATE TABLE
postgres=# alter table t2 replica identity full ;
ALTER TABLE
postgres=# create publication pub1 for table t2(c1,c2);
CREATE PUBLICATION

If so, the document says specifying a column list is not allowed, but
creating a publication with a column list on a replica identity full
table was successful.

Regards,
Vignesh




Re: Column Filtering in Logical Replication

2022-08-08 Thread Peter Smith
PSA patch version v1* for a new "Column Lists" pgdocs section

This is just a first draft, but I wanted to post it as-is, with the
hope that I can get some feedback while continuing to work on it.

--
Kind Regards,
Peter Smith.
Fujitsu Australia


v1-0001-Column-List-replica-identity-rules.patch
Description: Binary data


v1-0002-Column-Lists-new-pgdocs-section.patch
Description: Binary data


Re: Column Filtering in Logical Replication

2022-08-02 Thread Peter Smith
On Tue, Aug 2, 2022 at 6:57 PM Amit Kapila  wrote:
>
> On Mon, Jul 25, 2022 at 1:27 PM Peter Smith  wrote:
> >
> > On Sat, Dec 11, 2021 at 12:24 AM Alvaro Herrera  
> > wrote:
> > >
> > > On 2021-Dec-10, Peter Eisentraut wrote:
> > >
> > ...
> > >
> > > > There was no documentation, so I wrote a bit (patch 0001).  It only 
> > > > touches
> > > > the CREATE PUBLICATION and ALTER PUBLICATION pages at the moment.  
> > > > There was
> > > > no mention in the Logical Replication chapter that warranted updating.
> > > > Perhaps we should revisit that chapter at the end of the release cycle.
> > >
> > > Thanks.  I hadn't looked at the docs yet, so I'll definitely take this.
> > >
> >
> > Was this documentation ever written?
> >
> > My assumption was that for PG15 there might be a whole new section
> > added to Chapter 31 [1] for describing "Column Lists" (i.e. the Column
> > List equivalent of the "Row Filters" section)
> >
>
> +1. I think it makes sense to give more description about this feature
> similar to Row Filters. Note that apart from the main feature commit
> [1], we have prohibited certain cases in commit [2]. So, one might
> want to cover that as well.
>
> [1] - 
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=923def9a533a7d986acfb524139d8b9e5466d0a5
> [2] - 
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=fd0b9dcebda7b931a41ce5c8e86d13f2efd0af2e
>

OK. Unless somebody else has already started this work then I can do
this. I will post a draft patch in a few days.

--
Kind Regards,
Peter Smith.
Fujitsu Australia




Re: Column Filtering in Logical Replication

2022-08-02 Thread Amit Kapila
On Mon, Jul 25, 2022 at 1:27 PM Peter Smith  wrote:
>
> On Sat, Dec 11, 2021 at 12:24 AM Alvaro Herrera  
> wrote:
> >
> > On 2021-Dec-10, Peter Eisentraut wrote:
> >
> ...
> >
> > > There was no documentation, so I wrote a bit (patch 0001).  It only 
> > > touches
> > > the CREATE PUBLICATION and ALTER PUBLICATION pages at the moment.  There 
> > > was
> > > no mention in the Logical Replication chapter that warranted updating.
> > > Perhaps we should revisit that chapter at the end of the release cycle.
> >
> > Thanks.  I hadn't looked at the docs yet, so I'll definitely take this.
> >
>
> Was this documentation ever written?
>
> My assumption was that for PG15 there might be a whole new section
> added to Chapter 31 [1] for describing "Column Lists" (i.e. the Column
> List equivalent of the "Row Filters" section)
>

+1. I think it makes sense to give more description about this feature
similar to Row Filters. Note that apart from the main feature commit
[1], we have prohibited certain cases in commit [2]. So, one might
want to cover that as well.

[1] - 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=923def9a533a7d986acfb524139d8b9e5466d0a5
[2] - 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=fd0b9dcebda7b931a41ce5c8e86d13f2efd0af2e
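To make [2] concrete: as I understand that commit, the prohibited case is a subscription combining publications that give the same table different column lists, roughly (hypothetical names):

```sql
-- On the publisher: two publications, same table, different column lists.
CREATE TABLE t1 (id int PRIMARY KEY, a text, b text);
CREATE PUBLICATION pub_a FOR TABLE t1 (id, a);
CREATE PUBLICATION pub_b FOR TABLE t1 (id, b);

-- On the subscriber: subscribing to both is rejected, because the
-- combined publications disagree about which columns of t1 to send.
CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=postgres'
    PUBLICATION pub_a, pub_b;  -- fails
```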

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-07-25 Thread Peter Smith
On Sat, Dec 11, 2021 at 12:24 AM Alvaro Herrera  wrote:
>
> On 2021-Dec-10, Peter Eisentraut wrote:
>
...
>
> > There was no documentation, so I wrote a bit (patch 0001).  It only touches
> > the CREATE PUBLICATION and ALTER PUBLICATION pages at the moment.  There was
> > no mention in the Logical Replication chapter that warranted updating.
> > Perhaps we should revisit that chapter at the end of the release cycle.
>
> Thanks.  I hadn't looked at the docs yet, so I'll definitely take this.
>

Was this documentation ever written?

My assumption was that for PG15 there might be a whole new section
added to Chapter 31 [1] for describing "Column Lists" (i.e. the Column
List equivalent of the "Row Filters" section)

--
[1] https://www.postgresql.org/docs/15/logical-replication.html

Kind Regards,
Peter Smith.
Fujitsu Australia




Re: Column Filtering in Logical Replication

2022-05-10 Thread Amit Kapila
On Tue, May 10, 2022 at 7:25 PM Jonathan S. Katz  wrote:
>
> On 4/19/22 12:53 AM, Amit Kapila wrote:
> > On Tue, Apr 19, 2022 at 6:58 AM Masahiko Sawada  
> > wrote:
> >>
> >> +1. I also think it's a bug so back-patching makes sense to me.
> >>
> >
> > Pushed. Thanks Tomas and Sawada-San.
>
> This is still on the PG15 open items list[1] though marked as with a fix.
>
> Did dd4ab6fd resolve the issue, or does this need more work?
>

Commit dd4ab6fd resolved this issue. I haven't noticed it since that commit.

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-05-10 Thread Jonathan S. Katz

On 5/10/22 3:17 PM, Tomas Vondra wrote:

> On 5/10/22 15:55, Jonathan S. Katz wrote:
>> Hi,
>>
>> On 4/19/22 12:53 AM, Amit Kapila wrote:
>>> On Tue, Apr 19, 2022 at 6:58 AM Masahiko Sawada
>>>  wrote:
>>>>
>>>> +1. I also think it's a bug so back-patching makes sense to me.
>>>>
>>>
>>> Pushed. Thanks Tomas and Sawada-San.
>>
>> This is still on the PG15 open items list[1] though marked as with a fix.
>>
>> Did dd4ab6fd resolve the issue, or does this need more work?
>>
> 
> I believe that's fixed, the buildfarm does not seem to show any relevant
> failures in subscriptionCheck since dd4ab6fd got committed.

Great. I'm moving it off of open items.

Thanks for confirming!

Jonathan


OpenPGP_signature
Description: OpenPGP digital signature


Re: Column Filtering in Logical Replication

2022-05-10 Thread Tomas Vondra
On 5/10/22 15:55, Jonathan S. Katz wrote:
> Hi,
> 
> On 4/19/22 12:53 AM, Amit Kapila wrote:
>> On Tue, Apr 19, 2022 at 6:58 AM Masahiko Sawada
>>  wrote:
>>>
>>> +1. I also think it's a bug so back-patching makes sense to me.
>>>
>>
>> Pushed. Thanks Tomas and Sawada-San.
> 
> This is still on the PG15 open items list[1] though marked as with a fix.
> 
> Did dd4ab6fd resolve the issue, or does this need more work?
> 

I believe that's fixed, the buildfarm does not seem to show any relevant
failures in subscriptionCheck since dd4ab6fd got committed.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-05-10 Thread Jonathan S. Katz

Hi,

On 4/19/22 12:53 AM, Amit Kapila wrote:
> On Tue, Apr 19, 2022 at 6:58 AM Masahiko Sawada  wrote:
>>
>> +1. I also think it's a bug so back-patching makes sense to me.
>>
> 
> Pushed. Thanks Tomas and Sawada-San.

This is still on the PG15 open items list[1] though marked as with a fix.

Did dd4ab6fd resolve the issue, or does this need more work?

Thanks,

Jonathan

[1] https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items


OpenPGP_signature
Description: OpenPGP digital signature


Re: Column Filtering in Logical Replication

2022-04-18 Thread Amit Kapila
On Tue, Apr 19, 2022 at 6:58 AM Masahiko Sawada  wrote:
>
> On Mon, Apr 18, 2022 at 8:04 PM Amit Kapila  wrote:
> >
> > On Thu, Apr 14, 2022 at 9:09 AM Amit Kapila  wrote:
> > >
> > > On Thu, Apr 14, 2022 at 8:32 AM Masahiko Sawada  
> > > wrote:
> > > > >
> > > > > The other part of the puzzle is the below check in the code:
> > > > > /*
> > > > > * If we reached the sync worker limit per subscription, just exit
> > > > > * silently as we might get here because of an otherwise harmless race
> > > > > * condition.
> > > > > */
> > > > > if (nsyncworkers >= max_sync_workers_per_subscription)
> > > > >
> > > > > It is not clear to me why this check is there, if this wouldn't be
> > > > > there, the user would have got either a WARNING to increase the
> > > > > max_logical_replication_workers or the apply worker would have been
> > > > > restarted. Do you have any idea about this?
> > > >
> > > > Yeah, I'm also puzzled with this check. It seems that this function
> > > > doesn't work well when the apply worker is not running and some
> > > > tablesync workers are running. I initially thought that the apply
> > > > worker calls to this function as many as tables that needs to be
> > > > synced, but it checks the max_sync_workers_per_subscription limit
> > > > before calling to logicalrep_worker_launch(). So I'm not really sure
> > > > we need this check.
> > > >
> > >
> > > I just hope that the original author Petr J. responds to this point. I
> > > have added him to this email. This will help us to find the best
> > > solution for this problem.
> > >
> >
> > I did some more investigation for this code. It is added by commit [1]
> > and the patch that led to this commit is first time posted on -hackers
> > in email [2]. Now, neither the commit message nor the patch (comments)
> > gives much idea as to why this part of code is added but I think there
> > is some hint in the email [2]. In particular, read the paragraph in
> > the email [2] that has the lines: " and limiting sync workers per
> > subscription theoretically wasn't either (although I don't think it
> > could happen in practice).".
> >
> > It seems that this check has been added to theoretically limit the
> > sync workers even though that can't happen because apply worker
> > ensures that before trying to launch the sync worker. Does this theory
> > make sense to me? If so, I think we can change the check as: "if
> > (OidIsValid(relid) && nsyncworkers >=
> > max_sync_workers_per_subscription)" in launcher.c. This will serve the
> > purpose of the original code and will solve the issue being discussed
> > here. I think we can even backpatch this. What do you think?
>
> +1. I also think it's a bug so back-patching makes sense to me.
>

Pushed. Thanks Tomas and Sawada-San.

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-04-18 Thread Masahiko Sawada
On Mon, Apr 18, 2022 at 8:04 PM Amit Kapila  wrote:
>
> On Thu, Apr 14, 2022 at 9:09 AM Amit Kapila  wrote:
> >
> > On Thu, Apr 14, 2022 at 8:32 AM Masahiko Sawada  
> > wrote:
> > > >
> > > > The other part of the puzzle is the below check in the code:
> > > > /*
> > > > * If we reached the sync worker limit per subscription, just exit
> > > > * silently as we might get here because of an otherwise harmless race
> > > > * condition.
> > > > */
> > > > if (nsyncworkers >= max_sync_workers_per_subscription)
> > > >
> > > > It is not clear to me why this check is there, if this wouldn't be
> > > > there, the user would have got either a WARNING to increase the
> > > > max_logical_replication_workers or the apply worker would have been
> > > > restarted. Do you have any idea about this?
> > >
> > > Yeah, I'm also puzzled with this check. It seems that this function
> > > doesn't work well when the apply worker is not running and some
> > > tablesync workers are running. I initially thought that the apply
> > > worker calls to this function as many as tables that needs to be
> > > synced, but it checks the max_sync_workers_per_subscription limit
> > > before calling to logicalrep_worker_launch(). So I'm not really sure
> > > we need this check.
> > >
> >
> > I just hope that the original author Petr J. responds to this point. I
> > have added him to this email. This will help us to find the best
> > solution for this problem.
> >
>
> I did some more investigation for this code. It is added by commit [1]
> and the patch that led to this commit is first time posted on -hackers
> in email [2]. Now, neither the commit message nor the patch (comments)
> gives much idea as to why this part of code is added but I think there
> is some hint in the email [2]. In particular, read the paragraph in
> the email [2] that has the lines: " and limiting sync workers per
> subscription theoretically wasn't either (although I don't think it
> could happen in practice).".
>
> It seems that this check has been added to theoretically limit the
> sync workers even though that can't happen because apply worker
> ensures that before trying to launch the sync worker. Does this theory
> make sense to you? If so, I think we can change the check as: "if
> (OidIsValid(relid) && nsyncworkers >=
> max_sync_workers_per_subscription)" in launcher.c. This will serve the
> purpose of the original code and will solve the issue being discussed
> here. I think we can even backpatch this. What do you think?

+1. I also think it's a bug so back-patching makes sense to me.

Regards,

-- 
Masahiko Sawada
EDB:  https://www.enterprisedb.com/




Re: Column Filtering in Logical Replication

2022-04-18 Thread Tomas Vondra
On 4/18/22 13:04, Amit Kapila wrote:
> On Thu, Apr 14, 2022 at 9:09 AM Amit Kapila  wrote:
>>
>> On Thu, Apr 14, 2022 at 8:32 AM Masahiko Sawada  
>> wrote:

>>>> The other part of the puzzle is the below check in the code:
>>>> /*
>>>> * If we reached the sync worker limit per subscription, just exit
>>>> * silently as we might get here because of an otherwise harmless race
>>>> * condition.
>>>> */
>>>> if (nsyncworkers >= max_sync_workers_per_subscription)
>>>>
>>>> It is not clear to me why this check is there, if this wouldn't be
>>>> there, the user would have got either a WARNING to increase the
>>>> max_logical_replication_workers or the apply worker would have been
>>>> restarted. Do you have any idea about this?
>>>
>>> Yeah, I'm also puzzled with this check. It seems that this function
>>> doesn't work well when the apply worker is not running and some
>>> tablesync workers are running. I initially thought that the apply
>>> worker calls to this function as many as tables that needs to be
>>> synced, but it checks the max_sync_workers_per_subscription limit
>>> before calling to logicalrep_worker_launch(). So I'm not really sure
>>> we need this check.
>>>
>>
>> I just hope that the original author Petr J. responds to this point. I
>> have added him to this email. This will help us to find the best
>> solution for this problem.
>>
> 
> I did some more investigation for this code. It is added by commit [1]
> and the patch that led to this commit is first time posted on -hackers
> in email [2]. Now, neither the commit message nor the patch (comments)
> gives much idea as to why this part of code is added but I think there
> is some hint in the email [2]. In particular, read the paragraph in
> the email [2] that has the lines: " and limiting sync workers per
> subscription theoretically wasn't either (although I don't think it
> could happen in practice).".
> 
> It seems that this check has been added to theoretically limit the
> sync workers even though that can't happen because apply worker
> ensures that before trying to launch the sync worker. Does this theory
> make sense to you? If so, I think we can change the check as: "if
> (OidIsValid(relid) && nsyncworkers >=
> max_sync_workers_per_subscription)" in launcher.c. This will serve the
> purpose of the original code and will solve the issue being discussed
> here. I think we can even backpatch this. What do you think?
> 

Sounds reasonable to me. It's unfortunate the commit message doesn't
explain what exactly it was fixing (and why), but I doubt anyone will
remember the details after 5 years.

+1 to backpatching; I consider this to be a bug.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-04-18 Thread Amit Kapila
On Thu, Apr 14, 2022 at 9:09 AM Amit Kapila  wrote:
>
> On Thu, Apr 14, 2022 at 8:32 AM Masahiko Sawada  wrote:
> > >
> > > The other part of the puzzle is the below check in the code:
> > > /*
> > > * If we reached the sync worker limit per subscription, just exit
> > > * silently as we might get here because of an otherwise harmless race
> > > * condition.
> > > */
> > > if (nsyncworkers >= max_sync_workers_per_subscription)
> > >
> > > It is not clear to me why this check is there, if this wouldn't be
> > > there, the user would have got either a WARNING to increase the
> > > max_logical_replication_workers or the apply worker would have been
> > > restarted. Do you have any idea about this?
> >
> > Yeah, I'm also puzzled with this check. It seems that this function
> > doesn't work well when the apply worker is not running and some
> > tablesync workers are running. I initially thought that the apply
> > worker calls to this function as many as tables that needs to be
> > synced, but it checks the max_sync_workers_per_subscription limit
> > before calling to logicalrep_worker_launch(). So I'm not really sure
> > we need this check.
> >
>
> I just hope that the original author Petr J. responds to this point. I
> have added him to this email. This will help us to find the best
> solution for this problem.
>

I did some more investigation for this code. It is added by commit [1]
and the patch that led to this commit is first time posted on -hackers
in email [2]. Now, neither the commit message nor the patch (comments)
gives much idea as to why this part of code is added but I think there
is some hint in the email [2]. In particular, read the paragraph in
the email [2] that has the lines: " and limiting sync workers per
subscription theoretically wasn't either (although I don't think it
could happen in practice).".

It seems that this check has been added to theoretically limit the
sync workers even though that can't happen because apply worker
ensures that before trying to launch the sync worker. Does this theory
make sense to you? If so, I think we can change the check as: "if
(OidIsValid(relid) && nsyncworkers >=
max_sync_workers_per_subscription)" in launcher.c. This will serve the
purpose of the original code and will solve the issue being discussed
here. I think we can even backpatch this. What do you think?
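To make the proposal concrete, here is a stand-alone sketch of the changed guard (not the actual launcher.c code; Oid and OidIsValid below are minimal stand-ins for the PostgreSQL definitions):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-ins for PostgreSQL's Oid machinery (sketch only). */
typedef unsigned int Oid;
#define InvalidOid ((Oid) 0)
#define OidIsValid(objectId) ((objectId) != InvalidOid)

/*
 * Proposed guard: skip the launch silently only when we are launching
 * a tablesync worker (relid is valid) and the per-subscription
 * sync-worker limit is already reached.  An apply-worker launch
 * (relid is invalid) is never skipped silently, so the user still
 * gets a WARNING or a worker restart instead of a silent no-op.
 */
bool
should_skip_sync_launch(Oid relid, int nsyncworkers,
                        int max_sync_workers_per_subscription)
{
    return OidIsValid(relid) &&
        nsyncworkers >= max_sync_workers_per_subscription;
}
```

With a guard of this shape, an apply-worker launch that hits the limit still falls through to the normal warning/restart path rather than exiting silently.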

[1]
commit de4389712206d2686e09ad8d6dd112dc4b6c6d42
Author: Peter Eisentraut 
Date:   Wed Apr 26 10:43:04 2017 -0400

Fix various concurrency issues in logical replication worker launching

[2] - 
https://www.postgresql.org/message-id/fa387e24-0e26-c02d-ef16-7e46ada200dd%402ndquadrant.com

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-04-13 Thread Amit Kapila
On Thu, Apr 14, 2022 at 8:32 AM Masahiko Sawada  wrote:
>
> On Wed, Apr 13, 2022 at 6:45 PM Amit Kapila  wrote:
> >
> > On Wed, Apr 13, 2022 at 1:41 PM Masahiko Sawada  
> > wrote:
> > >
> > > I've looked at this issue and had the same analysis. Also, I could
> > > reproduce this issue with the steps shared by Amit.
> > >
> > > As I mentioned in another thread[1], the fact that the tablesync
> > > worker doesn't check the return value from
> > > wait_for_worker_state_change() seems a bug to me. So my initial
> > > thought of the solution is that we can have the tablesync worker check
> > > the return value and exit if it's false. That way, the apply worker
> > > can restart and request to launch the tablesync worker again. What do
> > > you think?
> > >
> >
> > I think that will fix this symptom but I am not sure if that would be
> > the best way to deal with this because we have a mechanism where the
> > sync worker can continue even if we don't do anything as a result of
> > wait_for_worker_state_change() provided apply worker restarts.
>
> I think we can think this is a separate issue. That is, if tablesync
> worker can start streaming changes even without waiting for the apply
> worker to set SUBREL_STATE_CATCHUP, do we really need the wait? I'm
> not sure it's really safe. If it's safe, the tablesync worker will no
> longer need to wait there.
>

As per my understanding, it is safe, whatever is streamed by tablesync
worker will be skipped later by apply worker. The wait here avoids
streaming the same data both by the apply worker and table sync worker
which I think is good even if it is not a must.

> >
> > The other part of the puzzle is the below check in the code:
> > /*
> > * If we reached the sync worker limit per subscription, just exit
> > * silently as we might get here because of an otherwise harmless race
> > * condition.
> > */
> > if (nsyncworkers >= max_sync_workers_per_subscription)
> >
> > It is not clear to me why this check is there, if this wouldn't be
> > there, the user would have got either a WARNING to increase the
> > max_logical_replication_workers or the apply worker would have been
> > restarted. Do you have any idea about this?
>
> Yeah, I'm also puzzled with this check. It seems that this function
> doesn't work well when the apply worker is not running and some
> tablesync workers are running. I initially thought that the apply
> worker calls to this function as many as tables that needs to be
> synced, but it checks the max_sync_workers_per_subscription limit
> before calling to logicalrep_worker_launch(). So I'm not really sure
> we need this check.
>

I just hope that the original author Petr J. responds to this point. I
have added him to this email. This will help us to find the best
solution for this problem.

Note: I'll be away for the remaining week, so will join the discussion
next week unless we reached the conclusion by that time.

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-04-13 Thread Masahiko Sawada

On Wed, Apr 13, 2022 at 6:45 PM Amit Kapila  wrote:
>
> On Wed, Apr 13, 2022 at 1:41 PM Masahiko Sawada  wrote:
> >
> > I've looked at this issue and had the same analysis. Also, I could
> > reproduce this issue with the steps shared by Amit.
> >
> > As I mentioned in another thread[1], the fact that the tablesync
> > worker doesn't check the return value from
> > wait_for_worker_state_change() seems a bug to me. So my initial
> > thought of the solution is that we can have the tablesync worker check
> > the return value and exit if it's false. That way, the apply worker
> > can restart and request to launch the tablesync worker again. What do
> > you think?
> >
>
> I think that will fix this symptom but I am not sure if that would be
> the best way to deal with this because we have a mechanism where the
> sync worker can continue even if we don't do anything as a result of
> wait_for_worker_state_change() provided apply worker restarts.

I think we can think this is a separate issue. That is, if tablesync
worker can start streaming changes even without waiting for the apply
worker to set SUBREL_STATE_CATCHUP, do we really need the wait? I'm
not sure it's really safe. If it's safe, the tablesync worker will no
longer need to wait there.

>
> The other part of the puzzle is the below check in the code:
> /*
> * If we reached the sync worker limit per subscription, just exit
> * silently as we might get here because of an otherwise harmless race
> * condition.
> */
> if (nsyncworkers >= max_sync_workers_per_subscription)
>
> It is not clear to me why this check is there, if this wouldn't be
> there, the user would have got either a WARNING to increase the
> max_logical_replication_workers or the apply worker would have been
> restarted. Do you have any idea about this?

Yeah, I'm also puzzled by this check. It seems that this function
doesn't work well when the apply worker is not running and some
tablesync workers are running. I initially thought that the apply
worker calls this function once for each table that needs to be
synced, but it checks the max_sync_workers_per_subscription limit
before calling logicalrep_worker_launch(). So I'm not really sure
we need this check.

>
> Yet another option is that we ensure that before launching sync
> workers (say in process_syncing_tables_for_apply->FetchTableStates,
> when we have to start a new transaction) we again call
> maybe_reread_subscription(), which should also fix this symptom. But
> again, I am not sure why it should be compulsory to call
> maybe_reread_subscription() in such a situation, there are no comments
> which suggest it,

Yes, it will fix this issue.

>
> Now, the reason why it appeared recently in commit c91f71b9dc is that
> I think we have increased the number of initial table syncs in that
> test, and probably increasing
> max_sync_workers_per_subscription/max_logical_replication_workers
> should fix that test.

I think so too.

Regards,

-- 
Masahiko Sawada
EDB:  https://www.enterprisedb.com/




Re: Column Filtering in Logical Replication

2022-04-13 Thread Amit Kapila
On Wed, Apr 13, 2022 at 1:41 PM Masahiko Sawada  wrote:
>
> I've looked at this issue and had the same analysis. Also, I could
> reproduce this issue with the steps shared by Amit.
>
> As I mentioned in another thread[1], the fact that the tablesync
> worker doesn't check the return value from
> wait_for_worker_state_change() seems a bug to me. So my initial
> thought of the solution is that we can have the tablesync worker check
> the return value and exit if it's false. That way, the apply worker
> can restart and request to launch the tablesync worker again. What do
> you think?
>

I think that will fix this symptom but I am not sure if that would be
the best way to deal with this because we have a mechanism where the
sync worker can continue even if we don't do anything as a result of
wait_for_worker_state_change() provided apply worker restarts.

The other part of the puzzle is the below check in the code:
/*
* If we reached the sync worker limit per subscription, just exit
* silently as we might get here because of an otherwise harmless race
* condition.
*/
if (nsyncworkers >= max_sync_workers_per_subscription)

It is not clear to me why this check is there; if it weren't there, the
user would have got either a WARNING to increase
max_logical_replication_workers or the apply worker would have been
restarted. Do you have any idea about this?

Yet another option is that we ensure that before launching sync
workers (say in process_syncing_tables_for_apply->FetchTableStates,
when we have to start a new transaction) we again call
maybe_reread_subscription(), which should also fix this symptom. But
again, I am not sure why it should be compulsory to call
maybe_reread_subscription() in such a situation, there are no comments
which suggest it,

Now, the reason why it appeared recently in commit c91f71b9dc is that
I think we have increased the number of initial table syncs in that
test, and probably increasing
max_sync_workers_per_subscription/max_logical_replication_workers
should fix that test.
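Both parameters mentioned above are subscriber-side settings in postgresql.conf (defaults 4 and 2 respectively); a sketch of the kind of increase being suggested, with illustrative values:

```ini
# postgresql.conf on the subscriber (illustrative values only):
# allow more workers so that many simultaneous initial table syncs
# cannot starve the worker pool
max_logical_replication_workers = 8
max_sync_workers_per_subscription = 4
```

Note that changing max_logical_replication_workers requires a server restart, IIRC.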

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-04-13 Thread Masahiko Sawada
On Sun, Mar 20, 2022 at 3:23 PM Amit Kapila  wrote:
>
> On Sun, Mar 20, 2022 at 8:41 AM Amit Kapila  wrote:
> >
> > On Fri, Mar 18, 2022 at 10:42 PM Tomas Vondra
> >  wrote:
> >
> > > So the question is why those two sync workers never complete - I guess
> > > there's some sort of lock wait (deadlock?) or infinite loop.
> > >
> >
> > It would be a bit tricky to reproduce this even if the above theory is
> > correct but I'll try it today or tomorrow.
> >
>
> I am able to reproduce it with the help of a debugger. Firstly, I have
> added the LOG message and some While (true) loops to debug sync and
> apply workers. Test setup
>
> Node-1:
> create table t1(c1 int);
> create table t2(c1 int);
> insert into t1 values(1);
> create publication pub1 for table t1;
> create publication pub2;
>
> Node-2:
> change max_sync_workers_per_subscription to 1 in postgresql.conf
> create table t1(c1 int);
> create table t2(c1 int);
> create subscription sub1 connection 'dbname = postgres' publication pub1;
>
> Till this point, just allow debuggers in both workers just continue.
>
> Node-1:
> alter publication pub1 add table t2;
> insert into t1 values(2);
>
> Here, we have to debug the apply worker such that when it tries to
> apply the insert, stop the debugger in function apply_handle_insert()
> after doing begin_replication_step().
>
> Node-2:
> alter subscription sub1 set pub1, pub2;
>
> Now, continue the debugger of apply worker, it should first start the
> sync worker and then exit because of parameter change. All of these
> debugging steps are to just ensure the point that it should first
> start the sync worker and then exit. After this point, table sync
> worker never finishes and log is filled with messages: "reached
> max_sync_workers_per_subscription limit" (a newly added message by me
> in the attached debug patch).
>
> Now, it is not completely clear to me how exactly '013_partition.pl'
> leads to this situation but there is a possibility based on the LOGs

I've looked at this issue and had the same analysis. Also, I could
reproduce this issue with the steps shared by Amit.

As I mentioned in another thread[1], the fact that the tablesync
worker doesn't check the return value from
wait_for_worker_state_change() seems a bug to me. So my initial
thought of the solution is that we can have the tablesync worker check
the return value and exit if it's false. That way, the apply worker
can restart and request to launch the tablesync worker again. What do
you think?

Regards,

-- 
Masahiko Sawada
EDB:  https://www.enterprisedb.com/




Re: Column Filtering in Logical Replication

2022-03-30 Thread Tomas Vondra



On 3/30/22 04:46, Amit Kapila wrote:
> On Tue, Mar 29, 2022 at 6:09 PM Tomas Vondra
>  wrote:
>>
>> On 3/29/22 13:47, Amit Kapila wrote:
>>> On Tue, Mar 29, 2022 at 4:33 PM Tomas Vondra
>>>  wrote:

>>>> On 3/29/22 12:00, Amit Kapila wrote:
>>>>>>
>>>>>> Thanks, I'll take a look later.
>>>>>>
>>>>>
>>>>> This is still failing [1][2].
>>>>>
>>>>> [1] - 
>>>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican=2022-03-28%2005%3A16%3A53
>>>>> [2] - 
>>>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris=2022-03-24%2013%3A13%3A08
>>>>>
>>>>
>>>> AFAICS we've concluded this is a pre-existing issue, not something
>>>> introduced by a recently committed patch, and I don't think there's any
>>>> proposal how to fix that.
>>>>
>>>
>>> I have suggested in email [1] that increasing values
>>> max_sync_workers_per_subscription/max_logical_replication_workers
>>> should solve this issue. Now, whether this is a previous issue or
>>> behavior can be debatable but I think it happens for the new test case
>>> added by commit c91f71b9dc.
>>>
>>
>> IMHO that'd be just hiding the actual issue, which is the failure to
>> sync the subscription in some circumstances. We should fix that, not
>> just make sure the tests don't trigger it.
>>
> 
> I am in favor of fixing/changing some existing behavior to make it
> better and would be ready to help in that investigation as well but
> was just not sure if it is a good idea to let some of the buildfarm
> member(s) fail for a number of days. Anyway, I leave this judgment to
> you.
> 

OK. If it affected more animals, and/or if they were failing more often,
it'd definitely warrant a more active approach. But AFAICS it affects
only a tiny fraction, and even there it fails maybe 1 in 20 runs ...

Plus the symptoms are pretty clear, so it's unlikely to cause enigmatic
failures that force people to spend time investigating them.

Of course, that's my assessment and it feels weird as it goes directly
against my instincts to keep all tests working :-/


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-29 Thread Amit Kapila
On Tue, Mar 29, 2022 at 6:09 PM Tomas Vondra
 wrote:
>
> On 3/29/22 13:47, Amit Kapila wrote:
> > On Tue, Mar 29, 2022 at 4:33 PM Tomas Vondra
> >  wrote:
> >>
> >> On 3/29/22 12:00, Amit Kapila wrote:
> >>>>
> >>>> Thanks, I'll take a look later.
> >>>>
> >>>
> >>> This is still failing [1][2].
> >>>
> >>> [1] - 
> >>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican=2022-03-28%2005%3A16%3A53
> >>> [2] - 
> >>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris=2022-03-24%2013%3A13%3A08
> >>>
> >>
> >> AFAICS we've concluded this is a pre-existing issue, not something
> >> introduced by a recently committed patch, and I don't think there's any
> >> proposal how to fix that.
> >>
> >
> > I have suggested in email [1] that increasing values
> > max_sync_workers_per_subscription/max_logical_replication_workers
> > should solve this issue. Now, whether this is a previous issue or
> > behavior can be debatable but I think it happens for the new test case
> > added by commit c91f71b9dc.
> >
>
> IMHO that'd be just hiding the actual issue, which is the failure to
> sync the subscription in some circumstances. We should fix that, not
> just make sure the tests don't trigger it.
>

I am in favor of fixing/changing some existing behavior to make it
better and would be ready to help in that investigation as well but
was just not sure if it is a good idea to let some of the buildfarm
member(s) fail for a number of days. Anyway, I leave this judgment to
you.

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-29 Thread Tomas Vondra



On 3/29/22 13:47, Amit Kapila wrote:
> On Tue, Mar 29, 2022 at 4:33 PM Tomas Vondra
>  wrote:
>>
>> On 3/29/22 12:00, Amit Kapila wrote:

>>>> Thanks, I'll take a look later.

>>>
>>> This is still failing [1][2].
>>>
>>> [1] - 
>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican=2022-03-28%2005%3A16%3A53
>>> [2] - 
>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris=2022-03-24%2013%3A13%3A08
>>>
>>
>> AFAICS we've concluded this is a pre-existing issue, not something
>> introduced by a recently committed patch, and I don't think there's any
>> proposal how to fix that.
>>
> 
> I have suggested in email [1] that increasing values
> max_sync_workers_per_subscription/max_logical_replication_workers
> should solve this issue. Now, whether this is a previous issue or
> behavior can be debatable but I think it happens for the new test case
> added by commit c91f71b9dc.
> 

IMHO that'd be just hiding the actual issue, which is the failure to
sync the subscription in some circumstances. We should fix that, not
just make sure the tests don't trigger it.

>> So I've put that on the back burner until
>> after the current CF.
>>
> 
> Okay, last time you didn't mention that you want to look at it after
> CF. I just assumed that you want to take a look after pushing the main
> column list patch, so thought of sending a reminder but I am fine if
> you want to look at it after CF.
> 

OK, sorry for not being clearer in my response.

regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-29 Thread Amit Kapila
On Tue, Mar 29, 2022 at 4:33 PM Tomas Vondra
 wrote:
>
> On 3/29/22 12:00, Amit Kapila wrote:
> >>
> >> Thanks, I'll take a look later.
> >>
> >
> > This is still failing [1][2].
> >
> > [1] - 
> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2022-03-28%2005%3A16%3A53
> > [2] - 
> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2022-03-24%2013%3A13%3A08
> >
>
> AFAICS we've concluded this is a pre-existing issue, not something
> introduced by a recently committed patch, and I don't think there's any
> proposal how to fix that.
>

I have suggested in email [1] that increasing values
max_sync_workers_per_subscription/max_logical_replication_workers
should solve this issue. Now, whether this is a previous issue or
behavior can be debatable but I think it happens for the new test case
added by commit c91f71b9dc.

> So I've put that on the back burner until
> after the current CF.
>

Okay, last time you didn't mention that you want to look at it after
CF. I just assumed that you want to take a look after pushing the main
column list patch, so thought of sending a reminder but I am fine if
you want to look at it after CF.

[1] - 
https://www.postgresql.org/message-id/CAA4eK1LpBFU49Ohbnk%3Ddv_v9YP%2BKqh1%2BSf8i%2B%2B_s-QhD1Gy4Qw%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
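
The workaround described above amounts to raising two subscriber-side GUCs; a sketch with illustrative values (the shipped defaults are 2 and 4 respectively):

```
# postgresql.conf on the subscriber -- values are hypothetical; the point
# is to allow more than one table sync worker to run at a time
max_sync_workers_per_subscription = 4   # default 2
max_logical_replication_workers = 8     # default 4; also bounded by max_worker_processes
```

Both settings require a server restart only for max_logical_replication_workers; max_sync_workers_per_subscription can be changed with a reload.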




Re: Column Filtering in Logical Replication

2022-03-29 Thread Tomas Vondra



On 3/29/22 12:00, Amit Kapila wrote:
> On Sun, Mar 20, 2022 at 4:53 PM Tomas Vondra
>  wrote:
>>
>> On 3/20/22 07:23, Amit Kapila wrote:
>>> On Sun, Mar 20, 2022 at 8:41 AM Amit Kapila  wrote:

 On Fri, Mar 18, 2022 at 10:42 PM Tomas Vondra
  wrote:

> So the question is why those two sync workers never complete - I guess
> there's some sort of lock wait (deadlock?) or infinite loop.
>

 It would be a bit tricky to reproduce this even if the above theory is
 correct but I'll try it today or tomorrow.

>>>
>>> I am able to reproduce it with the help of a debugger. Firstly, I have
>>> added the LOG message and some While (true) loops to debug sync and
>>> apply workers. Test setup
>>>
>>> Node-1:
>>> create table t1(c1 int);
>>> create table t2(c1 int);
>>> insert into t1 values(1);
>>> create publication pub1 for table t1;
>>> create publication pub2;
>>>
>>> Node-2:
>>> change max_sync_workers_per_subscription to 1 in postgresql.conf
>>> create table t1(c1 int);
>>> create table t2(c1 int);
>>> create subscription sub1 connection 'dbname = postgres' publication pub1;
>>>
>>> Till this point, just allow debuggers in both workers just continue.
>>>
>>> Node-1:
>>> alter publication pub1 add table t2;
>>> insert into t1 values(2);
>>>
>>> Here, we have to debug the apply worker such that when it tries to
>>> apply the insert, stop the debugger in function apply_handle_insert()
>>> after doing begin_replication_step().
>>>
>>> Node-2:
>>> alter subscription sub1 set pub1, pub2;
>>>
>>> Now, continue the debugger of apply worker, it should first start the
>>> sync worker and then exit because of parameter change. All of these
>>> debugging steps are to just ensure the point that it should first
>>> start the sync worker and then exit. After this point, table sync
>>> worker never finishes and log is filled with messages: "reached
>>> max_sync_workers_per_subscription limit" (a newly added message by me
>>> in the attached debug patch).
>>>
>>> Now, it is not completely clear to me how exactly '013_partition.pl'
>>> leads to this situation but there is a possibility based on the LOGs
>>> it shows.
>>>
>>
>> Thanks, I'll take a look later.
>>
> 
> This is still failing [1][2].
> 
> [1] - 
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2022-03-28%2005%3A16%3A53
> [2] - 
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2022-03-24%2013%3A13%3A08
> 

AFAICS we've concluded this is a pre-existing issue, not something
introduced by a recently committed patch, and I don't think there's any
proposal how to fix that. So I've put that on the back burner until
after the current CF.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-29 Thread Amit Kapila
On Sun, Mar 20, 2022 at 4:53 PM Tomas Vondra
 wrote:
>
> On 3/20/22 07:23, Amit Kapila wrote:
> > On Sun, Mar 20, 2022 at 8:41 AM Amit Kapila  wrote:
> >>
> >> On Fri, Mar 18, 2022 at 10:42 PM Tomas Vondra
> >>  wrote:
> >>
> >>> So the question is why those two sync workers never complete - I guess
> >>> there's some sort of lock wait (deadlock?) or infinite loop.
> >>>
> >>
> >> It would be a bit tricky to reproduce this even if the above theory is
> >> correct but I'll try it today or tomorrow.
> >>
> >
> > I am able to reproduce it with the help of a debugger. Firstly, I have
> > added the LOG message and some While (true) loops to debug sync and
> > apply workers. Test setup
> >
> > Node-1:
> > create table t1(c1 int);
> > create table t2(c1 int);
> > insert into t1 values(1);
> > create publication pub1 for table t1;
> > create publication pub2;
> >
> > Node-2:
> > change max_sync_workers_per_subscription to 1 in postgresql.conf
> > create table t1(c1 int);
> > create table t2(c1 int);
> > create subscription sub1 connection 'dbname = postgres' publication pub1;
> >
> > Till this point, just allow debuggers in both workers just continue.
> >
> > Node-1:
> > alter publication pub1 add table t2;
> > insert into t1 values(2);
> >
> > Here, we have to debug the apply worker such that when it tries to
> > apply the insert, stop the debugger in function apply_handle_insert()
> > after doing begin_replication_step().
> >
> > Node-2:
> > alter subscription sub1 set pub1, pub2;
> >
> > Now, continue the debugger of apply worker, it should first start the
> > sync worker and then exit because of parameter change. All of these
> > debugging steps are to just ensure the point that it should first
> > start the sync worker and then exit. After this point, table sync
> > worker never finishes and log is filled with messages: "reached
> > max_sync_workers_per_subscription limit" (a newly added message by me
> > in the attached debug patch).
> >
> > Now, it is not completely clear to me how exactly '013_partition.pl'
> > leads to this situation but there is a possibility based on the LOGs
> > it shows.
> >
>
> Thanks, I'll take a look later.
>

This is still failing [1][2].

[1] - 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2022-03-28%2005%3A16%3A53
[2] - 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2022-03-24%2013%3A13%3A08

-- 
With Regards,
Amit Kapila.
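
For anyone trying to follow the reproduction steps quoted above outside a debugger, the two-node setup boils down to roughly this (the column types are assumptions — the original steps omit them):

```sql
-- Node-1 (publisher)
CREATE TABLE t1 (c1 int);
CREATE TABLE t2 (c1 int);
INSERT INTO t1 VALUES (1);
CREATE PUBLICATION pub1 FOR TABLE t1;
CREATE PUBLICATION pub2;

-- Node-2 (subscriber), with max_sync_workers_per_subscription = 1
CREATE TABLE t1 (c1 int);
CREATE TABLE t2 (c1 int);
CREATE SUBSCRIPTION sub1 CONNECTION 'dbname = postgres' PUBLICATION pub1;

-- Node-1: add a second table, generate some traffic
ALTER PUBLICATION pub1 ADD TABLE t2;
INSERT INTO t1 VALUES (2);

-- Node-2: the parameter change makes the apply worker restart while a
-- tablesync worker for t2 is being launched, which is the race at issue
ALTER SUBSCRIPTION sub1 SET PUBLICATION pub1, pub2;
```

The debugger breakpoints described in the steps are what make the race deterministic; without them the hang is only intermittent.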




Re: Column Filtering in Logical Replication

2022-03-26 Thread Tomas Vondra
On 3/26/22 22:58, Tomas Vondra wrote:
> On 3/26/22 22:55, Tom Lane wrote:
>> Tomas Vondra  writes:
>>> On 3/26/22 22:37, Tom Lane wrote:
 This smells like an uninitialized-variable problem, but I've had
 no luck finding any problem under valgrind.  Not sure how to progress
 from here.
>>
>>> I think I see the problem - there's a CREATE SUBSCRIPTION but the test
>>> is not waiting for the tablesync to complete, so sometimes it finishes
>>> in time and sometimes not. That'd explain the flaky behavior, and it's
>>> just this one test that misses the sync AFAICS.
>>
>> Ah, that would also fit the symptoms.
>>
> 
> I'll go over the test to check if some other test misses that, and
> perhaps do a bit of testing, and then push a fix.
> 

Pushed. I checked the other tests in 031_column_list.pl and AFAICS all
of them are waiting for the sync correctly.


[rolls eyes] I just noticed I listed the file as .sql in the commit
message. Not great.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-26 Thread Tomas Vondra
On 3/26/22 22:55, Tom Lane wrote:
> Tomas Vondra  writes:
>> On 3/26/22 22:37, Tom Lane wrote:
>>> This smells like an uninitialized-variable problem, but I've had
>>> no luck finding any problem under valgrind.  Not sure how to progress
>>> from here.
> 
>> I think I see the problem - there's a CREATE SUBSCRIPTION but the test
>> is not waiting for the tablesync to complete, so sometimes it finishes
>> in time and sometimes not. That'd explain the flaky behavior, and it's
>> just this one test that misses the sync AFAICS.
> 
> Ah, that would also fit the symptoms.
> 

I'll go over the test to check if some other test misses that, and
perhaps do a bit of testing, and then push a fix.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-26 Thread Tom Lane
Tomas Vondra  writes:
> On 3/26/22 22:37, Tom Lane wrote:
>> This smells like an uninitialized-variable problem, but I've had
>> no luck finding any problem under valgrind.  Not sure how to progress
>> from here.

> I think I see the problem - there's a CREATE SUBSCRIPTION but the test
> is not waiting for the tablesync to complete, so sometimes it finishes
> in time and sometimes not. That'd explain the flaky behavior, and it's
> just this one test that misses the sync AFAICS.

Ah, that would also fit the symptoms.

regards, tom lane




Re: Column Filtering in Logical Replication

2022-03-26 Thread Tomas Vondra
On 3/26/22 22:37, Tom Lane wrote:
> Tomas Vondra  writes:
>> I went over the patch again, polished the commit message a bit, and
>> pushed. May the buildfarm be merciful!
> 
> Initial results aren't that great.  komodoensis[1], petalura[2],
> and snapper[3] have all shown variants of
> 
> #   Failed test 'partitions with different replica identities not replicated 
> correctly'
> #   at t/031_column_list.pl line 734.
> #  got: '2|4|
> # 4|9|'
> # expected: '1||5
> # 2|4|
> # 3||8
> # 4|9|'
> # Looks like you failed 1 test of 34.
> [18:19:36] t/031_column_list.pl ... 
> Dubious, test returned 1 (wstat 256, 0x100)
> Failed 1/34 subtests 
> 
> snapper reported different actual output than the other two:
> #  got: '1||5
> # 3||8'
> 
> The failure seems intermittent, as both komodoensis and petalura
> have also passed cleanly since the commit (snapper's only run once).
> 
> This smells like an uninitialized-variable problem, but I've had
> no luck finding any problem under valgrind.  Not sure how to progress
> from here.
> 

I think I see the problem - there's a CREATE SUBSCRIPTION but the test
is not waiting for the tablesync to complete, so sometimes it finishes
in time and sometimes not. That'd explain the flaky behavior, and it's
just this one test that misses the sync AFAICS.

FWIW I did run this under valgrind a number of times, and also on
various ARM machines that tend to trip over memory issues.
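
For context, the usual way the subscription TAP tests waited for tablesync at the time was a poll on pg_subscription_rel; a sketch of the missing step (node variable and database name are placeholders):

```perl
# Wait until every table of the subscription reaches the 'synchronized'
# ('s') or 'ready' ('r') state before comparing replicated rows.
my $synced_query =
  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');";
$node_subscriber->poll_query_until('postgres', $synced_query)
  or die "timed out waiting for subscriber to synchronize data";
```

Without such a wait after CREATE SUBSCRIPTION, the subsequent data comparison races against the sync workers, which matches the intermittent failures reported above.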


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-26 Thread Tom Lane
Tomas Vondra  writes:
> I went over the patch again, polished the commit message a bit, and
> pushed. May the buildfarm be merciful!

Initial results aren't that great.  komodoensis[1], petalura[2],
and snapper[3] have all shown variants of

#   Failed test 'partitions with different replica identities not replicated 
correctly'
#   at t/031_column_list.pl line 734.
#  got: '2|4|
# 4|9|'
# expected: '1||5
# 2|4|
# 3||8
# 4|9|'
# Looks like you failed 1 test of 34.
[18:19:36] t/031_column_list.pl ... 
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/34 subtests 

snapper reported different actual output than the other two:
#  got: '1||5
# 3||8'

The failure seems intermittent, as both komodoensis and petalura
have also passed cleanly since the commit (snapper's only run once).

This smells like an uninitialized-variable problem, but I've had
no luck finding any problem under valgrind.  Not sure how to progress
from here.

regards, tom lane

[1] 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=komodoensis&dt=2022-03-26%2015%3A54%3A04
[2] 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2022-03-26%2004%3A20%3A04
[3] 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snapper&dt=2022-03-26%2018%3A46%3A28




Re: Column Filtering in Logical Replication

2022-03-26 Thread Tomas Vondra
On 3/26/22 10:58, Tomas Vondra wrote:
> On 3/26/22 05:09, Shinoda, Noriyoshi (PN Japan FSIP) wrote:
>> Hello, 
>>
>> The 'prattrs' column has been added to the pg_publication_rel catalog, 
>> but the current commit to catalog.sgml seems to have added it to 
>> pg_publication_namespace. 
>> The attached patch fixes this.
>>
> 
> Thanks, I'll get this pushed.
> 

Pushed. Thanks for noticing this!


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-26 Thread Tomas Vondra
On 3/26/22 05:09, Shinoda, Noriyoshi (PN Japan FSIP) wrote:
> Hello, 
> 
> The 'prattrs' column has been added to the pg_publication_rel catalog, 
> but the current commit to catalog.sgml seems to have added it to 
> pg_publication_namespace. 
> The attached patch fixes this.
> 

Thanks, I'll get this pushed.

Sadly, while looking at the catalog docs I realized I forgot to bump the
catversion :-(
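
For anyone checking the catalog side of this, the new column can be inspected directly; a NULL prattrs means the whole row is published:

```sql
-- List per-table column lists stored for publications; prattrs holds the
-- published attribute numbers (int2vector), NULL meaning "all columns".
SELECT prrelid::regclass AS tablename, prattrs
FROM pg_publication_rel;
```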


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




RE: Column Filtering in Logical Replication

2022-03-25 Thread Shinoda, Noriyoshi (PN Japan FSIP)
Hello, 

The 'prattrs' column has been added to the pg_publication_rel catalog, 
but the current commit to catalog.sgml seems to have added it to 
pg_publication_namespace. 
The attached patch fixes this.

Regards,
Noriyoshi Shinoda
-Original Message-
From: Tomas Vondra  
Sent: Saturday, March 26, 2022 9:35 AM
To: Amit Kapila 
Cc: Peter Eisentraut ; 
houzj.f...@fujitsu.com; Alvaro Herrera ; Justin Pryzby 
; Rahila Syed ; Peter Smith 
; pgsql-hackers ; 
shiy.f...@fujitsu.com
Subject: Re: Column Filtering in Logical Replication

On 3/26/22 01:18, Tomas Vondra wrote:
>
> ...
> 
> I went over the patch again, polished the commit message a bit, and 
> pushed. May the buildfarm be merciful!
> 

There's a couple failures immediately after the push, which caused me a minor 
heart attack. But it seems all of those are strange failures related to 
configure (which the patch did not touch at all), on animals managed by Andres. 
And a couple animals succeeded since then.

So I guess the animals were reconfigured, or something ...


regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




prattrs_column_fix_v1.diff
Description: prattrs_column_fix_v1.diff


Re: Column Filtering in Logical Replication

2022-03-25 Thread Tomas Vondra
On 3/26/22 01:18, Tomas Vondra wrote:
>
> ...
> 
> I went over the patch again, polished the commit message a bit, and
> pushed. May the buildfarm be merciful!
> 

There's a couple failures immediately after the push, which caused me a
minor heart attack. But it seems all of those are strange failures
related to configure (which the patch did not touch at all), on animals
managed by Andres. And a couple animals succeeded since then.

So I guess the animals were reconfigured, or something ...


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-25 Thread Tomas Vondra
On 3/21/22 15:12, Amit Kapila wrote:
> On Sat, Mar 19, 2022 at 11:11 PM Tomas Vondra
>  wrote:
>>
>> On 3/19/22 18:11, Tomas Vondra wrote:
>>> Fix a compiler warning reported by cfbot.
>>
>> Apologies, I failed to actually commit the fix. So here we go again.
>>
> 
> Few comments:
> ===
> 1.
> +/*
> + * Gets a list of OIDs of all partial-column publications of the given
> + * relation, that is, those that specify a column list.
> + */
> +List *
> +GetRelationColumnPartialPublications(Oid relid)
> {
> ...
> }
> 
> ...
> +/*
> + * For a relation in a publication that is known to have a non-null column
> + * list, return the list of attribute numbers that are in it.
> + */
> +List *
> +GetRelationColumnListInPublication(Oid relid, Oid pubid)
> {
> ...
> }
> 
> Both these functions are not required now. So, we can remove them.
> 

Good catch, removed.

> 2.
> @@ -464,11 +478,11 @@ logicalrep_write_update(StringInfo out,
> TransactionId xid, Relation rel,
>   pq_sendbyte(out, 'O'); /* old tuple follows */
>   else
>   pq_sendbyte(out, 'K'); /* old key follows */
> - logicalrep_write_tuple(out, rel, oldslot, binary);
> + logicalrep_write_tuple(out, rel, oldslot, binary, columns);
>   }
> 
> As mentioned previously, here, we should pass NULL similar to
> logicalrep_write_delete as we don't need to use column list for old
> tuples.
> 

Fixed.

> 3.
> + * XXX The name is a bit misleading, because we don't really transform
> + * anything here - we merely check the column list is compatible with the
> + * definition of the publication (with publish_via_partition_root=false)
> + * we only allow column lists on the leaf relations. So maybe rename it?
> + */
> +static void
> +TransformPubColumnList(List *tables, const char *queryString,
> +bool pubviaroot)
> 
> The second parameter is not used in this function. As noted in the
> comments, I also think it is better to rename this. How about
> ValidatePubColumnList?
> 
> 4.
> @@ -821,6 +942,9 @@ fetch_remote_table_info(char *nspname, char *relname,
>   *
>   * 3) one of the subscribed publications is declared as ALL TABLES IN
>   * SCHEMA that includes this relation
> + *
> + * XXX Does this actually handle puballtables and schema publications
> + * correctly?
>   */
>   if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 15)
> 
> Why is this comment added in the row filter code? Now, both row filter
> and column list are fetched in the same way, so not sure what exactly
> this comment is referring to.
> 

I added that comment as a note to myself while learning about how the
code works, forgot to remove that.

> 5.
> +/* qsort comparator for attnums */
> +static int
> +compare_int16(const void *a, const void *b)
> +{
> + int av = *(const int16 *) a;
> + int bv = *(const int16 *) b;
> +
> + /* this can't overflow if int is wider than int16 */
> + return (av - bv);
> +}
> 
> The exact same code exists in statscmds.c. Do we need a second copy of the 
> same?
> 

Yeah, I thought about moving it to some common header, but I think it's
not really worth it at this point.

> 6.
>  static void pgoutput_row_filter_init(PGOutputData *data,
>   List *publications,
>   RelationSyncEntry *entry);
> +
>  static bool pgoutput_row_filter_exec_expr(ExprState *state,
> 
> Spurious line addition.
> 

Fixed.

> 7. The tests in 030_column_list.pl take a long time as compared to all
> other similar individual tests in the subscription folder. I haven't
> checked whether there is any need to reduce some tests but it seems
> worth checking.
> 

On my machine, 'make check' in src/test/subscription takes ~150 seconds
(with asserts and -O0), and the new script takes ~14 seconds, while most
other tests take 3-6 seconds.

AFAICS that's simply due to the number of tests in the script, and I
don't think there are any unnecessary ones. I was actually adding them
in response to issues reported during development, or to test various
important cases. So I don't think we can remove some of them easily :-(

And it's not like the tests are using massive amounts of data either.

We could split the test, but that obviously won't reduce the duration,
of course.

So I decided to keep the test as is, for now, and maybe we can try
reducing the test after a couple buildfarm runs.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-25 Thread Tomas Vondra



On 3/25/22 04:10, Amit Kapila wrote:
> On Fri, Mar 25, 2022 at 5:44 AM Tomas Vondra
>  wrote:
>>
>> Attached is a patch, rebased on top of the sequence decoding stuff I
>> pushed earlier today, also including the comments rewording, and
>> renaming the "transform" function.
>>
>> I'll go over it again and get it pushed soon, unless someone objects.
>>
> 
> You haven't addressed the comments given by me earlier this week. See
> https://www.postgresql.org/message-id/CAA4eK1LY_JGL7LvdT64ujEiEAVaADuhdej1QNnwxvO_-KPzeEg%40mail.gmail.com.
> 

Thanks for noticing that! Thunderbird did not include that message into
the patch thread for some reason, so I did not notice that!

> *
> + * XXX The name is a bit misleading, because we don't really transform
> + * anything here - we merely check the column list is compatible with the
> + * definition of the publication (with publish_via_partition_root=false)
> + * we only allow column lists on the leaf relations. So maybe rename it?
> + */
> +static void
> +CheckPubRelationColumnList(List *tables, const char *queryString,
> +bool pubviaroot)
> 
> After changing this function name, the comment above is not required.
> 

Thanks, comment updated.

I went over the patch again, polished the commit message a bit, and
pushed. May the buildfarm be merciful!

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-24 Thread Amit Kapila
On Fri, Mar 25, 2022 at 5:44 AM Tomas Vondra
 wrote:
>
> Attached is a patch, rebased on top of the sequence decoding stuff I
> pushed earlier today, also including the comments rewording, and
> renaming the "transform" function.
>
> I'll go over it again and get it pushed soon, unless someone objects.
>

You haven't addressed the comments given by me earlier this week. See
https://www.postgresql.org/message-id/CAA4eK1LY_JGL7LvdT64ujEiEAVaADuhdej1QNnwxvO_-KPzeEg%40mail.gmail.com.

*
+ * XXX The name is a bit misleading, because we don't really transform
+ * anything here - we merely check the column list is compatible with the
+ * definition of the publication (with publish_via_partition_root=false)
+ * we only allow column lists on the leaf relations. So maybe rename it?
+ */
+static void
+CheckPubRelationColumnList(List *tables, const char *queryString,
+bool pubviaroot)

After changing this function name, the comment above is not required.

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-24 Thread Peter Eisentraut


On 17.03.22 20:11, Tomas Vondra wrote:

But the comment describes the error for the whole block, which looks
like this:

-- error: replica identity "a" not included in the column list
ALTER PUBLICATION testpub_fortable ADD TABLE testpub_tbl5 (b, c);
UPDATE testpub_tbl5 SET a = 1;
ERROR:  cannot update table "testpub_tbl5"
DETAIL:  Column list used by the publication does not cover the replica
identity.

So IMHO the comment is correct.


Ok, that makes sense.  I read all the comments in the test file again. 
There were a couple that I think could use tweaking; see attached file. 
The ones with "???" didn't make sense to me:  The first one is before a 
command that doesn't seem to change anything, the second one I didn't 
understand the meaning.  Please take a look.


(The patch is actually based on your 20220318c patch, but I'm adding it 
here since we have the discussion here.)

From 2e6352791e5418bb0726a051660d44311046fc28 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Thu, 24 Mar 2022 17:30:32 +0100
Subject: [PATCH] fixup! Allow specifying column lists for logical replication

---
 src/test/regress/sql/publication.sql | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/src/test/regress/sql/publication.sql 
b/src/test/regress/sql/publication.sql
index aeb1b572af..d50052ef9d 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -399,14 +399,14 @@ CREATE TABLE testpub_tbl5 (a int PRIMARY KEY, b text, c 
text,
 -- ok
 ALTER PUBLICATION testpub_fortable ADD TABLE testpub_tbl5 (a, c);
 ALTER TABLE testpub_tbl5 DROP COLUMN c;-- no dice
--- ok: for insert-only publication, the column list is arbitrary
+-- ok: for insert-only publication, any column list is acceptable
 ALTER PUBLICATION testpub_fortable_insert ADD TABLE testpub_tbl5 (b, c);
 
 /* not all replica identities are good enough */
 CREATE UNIQUE INDEX testpub_tbl5_b_key ON testpub_tbl5 (b, c);
 ALTER TABLE testpub_tbl5 ALTER b SET NOT NULL, ALTER c SET NOT NULL;
 ALTER TABLE testpub_tbl5 REPLICA IDENTITY USING INDEX testpub_tbl5_b_key;
--- error: replica identity (b,c) is covered by column list (a, c)
+-- error: replica identity (b,c) is not covered by column list (a, c)
 UPDATE testpub_tbl5 SET a = 1;
 ALTER PUBLICATION testpub_fortable DROP TABLE testpub_tbl5;
 
@@ -423,7 +423,7 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 
'insert, truncate');
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);
-- ok
 \dRp+ testpub_table_ins
 
--- with REPLICA IDENTITY FULL, column lists are not allowed
+-- tests with REPLICA IDENTITY FULL
 CREATE TABLE testpub_tbl6 (a int, b text, c text);
 ALTER TABLE testpub_tbl6 REPLICA IDENTITY FULL;
 
@@ -434,11 +434,11 @@ CREATE TABLE testpub_tbl6 (a int, b text, c text);
 ALTER PUBLICATION testpub_fortable ADD TABLE testpub_tbl6; -- ok
 UPDATE testpub_tbl6 SET a = 1;
 
--- make sure changing the column list is updated in SET TABLE
+-- make sure changing the column list is updated in SET TABLE ???
 CREATE TABLE testpub_tbl7 (a int primary key, b text, c text);
 ALTER PUBLICATION testpub_fortable ADD TABLE testpub_tbl7 (a, b);
 \d+ testpub_tbl7
--- ok: we'll skip this table
+-- ok: we'll skip this table ???
 ALTER PUBLICATION testpub_fortable SET TABLE testpub_tbl7 (a, b);
 \d+ testpub_tbl7
 -- ok: update the column list
-- 
2.35.1



Re: Column Filtering in Logical Replication

2022-03-23 Thread Amit Kapila
On Thu, Mar 24, 2022 at 4:11 AM Tomas Vondra
 wrote:
>
> On 3/21/22 12:28, Amit Kapila wrote:
> > On Fri, Mar 18, 2022 at 8:13 PM Tomas Vondra
> >  wrote:
> >>
> >> Ah, thanks for reminding me - it's hard to keep track of all the issues
> >> in threads as long as this one.
> >>
> >> BTW do you have any opinion on the SET COLUMNS syntax? Peter Smith
> >> proposed to get rid of it in [1] but I'm not sure that's a good idea.
> >> Because if we ditch it, then removing the column list would look like this:
> >>
> >> ALTER PUBLICATION pub ALTER TABLE tab;
> >>
> >> And if we happen to add other per-table options, this would become
> >> pretty ambiguous.
> >>
> >> Actually, do we even want to allow resetting column lists like this? We
> >> don't allow this for row filters, so if you want to change a row filter
> >> you have to re-add the table, right?
> >>
> >
> > We can use syntax like: "alter publication pub1 set table t1 where
> > (c2 > 10);" to reset the existing row filter. It seems a similar thing works
> > for column list as well ("alter publication pub1 set table t1 (c2)
> > where (c2 > 10)"). If I am not missing anything, I don't think we need
> > additional Alter Table syntax.
> >
> >> So maybe we should just ditch ALTER
> >> TABLE entirely.
> >>
> >
> > Yeah, I agree especially if my above understanding is correct.
> >
>
> I think there's a gotcha that
>
>ALTER PUBLICATION pub SET TABLE t ...
>
> also removes all other relations from the publication, and it removes
> and re-adds the table anyway. So I'm not sure what's the advantage?
>

I think it could be used when the user has fewer tables and she wants
to change the list of published tables or their row/column filters. I
am not sure of the value of this to users but this was a pre-existing
syntax.

> Anyway, I don't see why we would need such ALTER TABLE only for column
> filters and not for row filters - either we need to allow this for both
> options or none of them.
>

+1. I think for now we can leave this new ALTER TABLE syntax and do it
for both column and row filters together.


-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-23 Thread Tomas Vondra



On 3/21/22 12:28, Amit Kapila wrote:
> On Fri, Mar 18, 2022 at 8:13 PM Tomas Vondra
>  wrote:
>>
>> Ah, thanks for reminding me - it's hard to keep track of all the issues
>> in threads as long as this one.
>>
>> BTW do you have any opinion on the SET COLUMNS syntax? Peter Smith
>> proposed to get rid of it in [1] but I'm not sure that's a good idea.
>> Because if we ditch it, then removing the column list would look like this:
>>
>> ALTER PUBLICATION pub ALTER TABLE tab;
>>
>> And if we happen to add other per-table options, this would become
>> pretty ambiguous.
>>
>> Actually, do we even want to allow resetting column lists like this? We
>> don't allow this for row filters, so if you want to change a row filter
>> you have to re-add the table, right?
>>
> 
> We can use syntax like: "alter publication pub1 set table t1 where
> (c2 > 10);" to reset the existing row filter. It seems a similar thing works
> for column list as well ("alter publication pub1 set table t1 (c2)
> where (c2 > 10)"). If I am not missing anything, I don't think we need
> additional Alter Table syntax.
> 
>> So maybe we should just ditch ALTER
>> TABLE entirely.
>>
> 
> Yeah, I agree especially if my above understanding is correct.
> 

I think there's a gotcha that

   ALTER PUBLICATION pub SET TABLE t ...

also removes all other relations from the publication, and it removes
and re-adds the table anyway. So I'm not sure what's the advantage?

Anyway, I don't see why we would need such ALTER TABLE only for column
filters and not for row filters - either we need to allow this for both
options or none of them.
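
To spell out the gotcha (a sketch; table names are made up):

```sql
CREATE TABLE t1 (a int PRIMARY KEY, b text);
CREATE TABLE t2 (a int PRIMARY KEY, b text);
CREATE PUBLICATION pub FOR TABLE t1, t2;

-- SET TABLE replaces the entire table list: t2 silently drops out of pub,
-- and t1 itself is removed and re-added with the new column list
ALTER PUBLICATION pub SET TABLE t1 (a, b);
```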


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-23 Thread Tomas Vondra
On 3/21/22 12:55, Amit Kapila wrote:
> On Sat, Mar 19, 2022 at 3:56 AM Tomas Vondra
>  wrote:
>>
>> ...
>>
>> However, while looking at how pgoutput, I realized one thing - for row
>> filters we track them "per operation", depending on which operations are
>> defined for a given publication. Shouldn't we do the same thing for
>> column lists, really?
>>
>> I mean, if there are two publications with different column lists, one
>> for inserts and the other one for updates, isn't it wrong to merge these
>> two column lists?
>>
> 
> The reason we can't combine row filters for inserts with
> updates/deletes is that if inserts have some column that is not
> present in RI then during update filtering (for old tuple) it will
> give an error as the column won't be present in WAL log.
> 
> OTOH, the same problem won't be there for the column list/filter patch
> because all the RI columns are there in the column list (for
> update/delete) and we don't need to apply a column filter for old
> tuples in either update or delete.
> 
> Basically, the filter rules are slightly different for row filters and
> column lists, so we need them (combine of filters) for one but not for
> the other. Now, for the sake of consistency with row filters, we can
> do it but as such there won't be any problem or maybe we can just add
> a comment for the same in code.
> 

OK, thanks for the explanation. I'll add a comment explaining this to
the function initializing the column filter.
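
The rule being described — for publications that publish update/delete, the column list must cover the replica identity — looks like this in practice (a sketch; names invented):

```sql
CREATE TABLE t (a int PRIMARY KEY, b text, c text);

-- ok: the list covers the replica identity (the primary key "a")
CREATE PUBLICATION p_ok FOR TABLE t (a, b);

-- ok: insert-only publications may use any column list
CREATE PUBLICATION p_ins FOR TABLE t (b, c) WITH (publish = 'insert');

-- a publication of updates whose list omitted "a" would instead make
-- UPDATEs on t fail with "cannot update table ... Column list used by
-- the publication does not cover the replica identity."
```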

regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-23 Thread Alvaro Herrera
On 2022-Mar-23, Amit Kapila wrote:

> On Wed, Mar 23, 2022 at 12:54 AM Alvaro Herrera  
> wrote:
> >
> > On 2022-Mar-19, Tomas Vondra wrote:
> >
> > > @@ -174,7 +182,13 @@ ALTER PUBLICATION noinsert SET (publish = 'update, 
> > > delete');
> > >
> > > Add some tables to the publication:
> > >  
> > > -ALTER PUBLICATION mypublication ADD TABLE users, departments;
> > > +ALTER PUBLICATION mypublication ADD TABLE users (user_id, firstname), 
> > > departments;
> > > +
> > > +
> > > +  
> > > +   Change the set of columns published for a table:
> > > +
> > > +ALTER PUBLICATION mypublication SET TABLE users (user_id, firstname, 
> > > lastname), TABLE departments;
> > >  
> > >
> > >
> >
> > Hmm, it seems to me that if you've removed the feature to change the set
> > of columns published for a table, then the second example should be
> > removed as well.
> 
> As per my understanding, the removed feature is "Alter Publication ...
> Alter Table ...". The example here "Alter Publication ... Set Table
> .." should still work as mentioned in my email[1].

Ah, I see.  Yeah, that makes sense.  In that case, the leading text
seems a bit confusing.  I would suggest "Change the set of tables in the
publication, specifying a different set of columns for one of them:"

I think it would make the example more useful if the table for which the
columns are changing is a different one.  Maybe do this:

Add some tables to the publication:
 
-ALTER PUBLICATION mypublication ADD TABLE users, departments;
+ALTER PUBLICATION mypublication ADD TABLE users (user_id, firstname), 
departments;
+
+
+  
+   Change the set of tables in the publication, keeping the column list
+   in the users table and specifying a different column list for the
+   departments table. Note that previously published tables not mentioned
+   in this command are removed from the publication:
+
+
+ALTER PUBLICATION mypublication SET TABLE users (user_id, firstname), TABLE 
departments (dept_id, deptname);
 

so that it is clear that if you want to keep the column list unchanged
in one table, you are forced to specify it again.

(Frankly, this ALTER PUBLICATION SET command seems pretty useless from a
user PoV.)

-- 
Álvaro Herrera PostgreSQL Developer  —  https://www.EnterpriseDB.com/
"Investigación es lo que hago cuando no sé lo que estoy haciendo"
(Wernher von Braun)




Re: Column Filtering in Logical Replication

2022-03-22 Thread Amit Kapila
On Wed, Mar 23, 2022 at 12:54 AM Alvaro Herrera  wrote:
>
> On 2022-Mar-19, Tomas Vondra wrote:
>
> > @@ -174,7 +182,13 @@ ALTER PUBLICATION noinsert SET (publish = 'update, 
> > delete');
> >
> > Add some tables to the publication:
> >  
> > -ALTER PUBLICATION mypublication ADD TABLE users, departments;
> > +ALTER PUBLICATION mypublication ADD TABLE users (user_id, firstname), 
> > departments;
> > +
> > +
> > +  
> > +   Change the set of columns published for a table:
> > +
> > +ALTER PUBLICATION mypublication SET TABLE users (user_id, firstname, 
> > lastname), TABLE departments;
> >  
> >
> >
>
> Hmm, it seems to me that if you've removed the feature to change the set
> of columns published for a table, then the second example should be
> removed as well.
>

As per my understanding, the removed feature is "Alter Publication ...
Alter Table ...". The example here "Alter Publication ... Set Table
.." should still work as mentioned in my email[1].

[1] - 
https://www.postgresql.org/message-id/CAA4eK1L6YTcx%3DyJfdudr-y98Wcn4rWX4puHGAa2nxSCRb3fzQw%40mail.gmail.com

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-22 Thread Alvaro Herrera
On 2022-Mar-19, Tomas Vondra wrote:

> @@ -174,7 +182,13 @@ ALTER PUBLICATION noinsert SET (publish = 'update, 
> delete');
>
> Add some tables to the publication:
>  
> -ALTER PUBLICATION mypublication ADD TABLE users, departments;
> +ALTER PUBLICATION mypublication ADD TABLE users (user_id, firstname), 
> departments;
> +
> +
> +  
> +   Change the set of columns published for a table:
> +
> +ALTER PUBLICATION mypublication SET TABLE users (user_id, firstname, 
> lastname), TABLE departments;
>  
>  
>

Hmm, it seems to me that if you've removed the feature to change the set
of columns published for a table, then the second example should be
removed as well.

> +/*
> + * Transform the publication column lists expression for all the relations
> + * in the list.
> + *
> + * XXX The name is a bit misleading, because we don't really transform
> + * anything here - we merely check the column list is compatible with the
> + * definition of the publication (with publish_via_partition_root=false)
> + * we only allow column lists on the leaf relations. So maybe rename it?
> + */
> +static void
> +TransformPubColumnList(List *tables, const char *queryString,
> +bool pubviaroot)
> +{

I agree with renaming this function.  Maybe CheckPubRelationColumnList() ?

-- 
Álvaro Herrera PostgreSQL Developer  —  https://www.EnterpriseDB.com/
"This is a foot just waiting to be shot"(Andrew Dunstan)




Re: Column Filtering in Logical Replication

2022-03-21 Thread Amit Kapila
On Sat, Mar 19, 2022 at 11:11 PM Tomas Vondra
 wrote:
>
> On 3/19/22 18:11, Tomas Vondra wrote:
> > Fix a compiler warning reported by cfbot.
>
> Apologies, I failed to actually commit the fix. So here we go again.
>

Few comments:
===
1.
+/*
+ * Gets a list of OIDs of all partial-column publications of the given
+ * relation, that is, those that specify a column list.
+ */
+List *
+GetRelationColumnPartialPublications(Oid relid)
{
...
}

...
+/*
+ * For a relation in a publication that is known to have a non-null column
+ * list, return the list of attribute numbers that are in it.
+ */
+List *
+GetRelationColumnListInPublication(Oid relid, Oid pubid)
{
...
}

Both these functions are not required now. So, we can remove them.

2.
@@ -464,11 +478,11 @@ logicalrep_write_update(StringInfo out,
TransactionId xid, Relation rel,
  pq_sendbyte(out, 'O'); /* old tuple follows */
  else
  pq_sendbyte(out, 'K'); /* old key follows */
- logicalrep_write_tuple(out, rel, oldslot, binary);
+ logicalrep_write_tuple(out, rel, oldslot, binary, columns);
  }

As mentioned previously, here, we should pass NULL similar to
logicalrep_write_delete as we don't need to use column list for old
tuples.

3.
+ * XXX The name is a bit misleading, because we don't really transform
+ * anything here - we merely check the column list is compatible with the
+ * definition of the publication (with publish_via_partition_root=false)
+ * we only allow column lists on the leaf relations. So maybe rename it?
+ */
+static void
+TransformPubColumnList(List *tables, const char *queryString,
+bool pubviaroot)

The second parameter is not used in this function. As noted in the
comments, I also think it is better to rename this. How about
ValidatePubColumnList?

4.
@@ -821,6 +942,9 @@ fetch_remote_table_info(char *nspname, char *relname,
  *
  * 3) one of the subscribed publications is declared as ALL TABLES IN
  * SCHEMA that includes this relation
+ *
+ * XXX Does this actually handle puballtables and schema publications
+ * correctly?
  */
  if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 15)

Why is this comment added in the row filter code? Now, both row filter
and column list are fetched in the same way, so not sure what exactly
this comment is referring to.

5.
+/* qsort comparator for attnums */
+static int
+compare_int16(const void *a, const void *b)
+{
+ int av = *(const int16 *) a;
+ int bv = *(const int16 *) b;
+
+ /* this can't overflow if int is wider than int16 */
+ return (av - bv);
+}

The exact same code exists in statscmds.c. Do we need a second copy of the same?

6.
 static void pgoutput_row_filter_init(PGOutputData *data,
  List *publications,
  RelationSyncEntry *entry);
+
 static bool pgoutput_row_filter_exec_expr(ExprState *state,

Spurious line addition.

7. The tests in 030_column_list.pl take a long time as compared to all
other similar individual tests in the subscription folder. I haven't
checked whether there is any need to reduce some tests but it seems
worth checking.

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-21 Thread Alvaro Herrera
Hello,

Please add me to the list of authors of this patch.  I made a large
number of nontrivial changes to it early on.  Thanks.  I have modified
the entry in the CF app (which sorts alphabetically, it was not my
intention to put my name first.)

-- 
Álvaro Herrera PostgreSQL Developer  —  https://www.EnterpriseDB.com/




Re: Column Filtering in Logical Replication

2022-03-21 Thread Amit Kapila
On Sat, Mar 19, 2022 at 3:56 AM Tomas Vondra
 wrote:
>
> On 3/18/22 15:43, Tomas Vondra wrote:
> >
>
> As for the issue reported by Shi-San about replica identity full and
> column filters, presumably you're referring to this:
>
>   create table tbl (a int, b int, c int);
>   create publication pub for table tbl (a, b, c);
>   alter table tbl replica identity full;
>
>   postgres=# delete from tbl;
>   ERROR:  cannot delete from table "tbl"
>   DETAIL:  Column list used by the publication does not cover the
>replica identity.
>
> I believe not allowing column lists with REPLICA IDENTITY FULL is
> expected / correct behavior. I mean, for that to work the column list
> has to always include all columns anyway, so it's pretty pointless. Of
> course, we might check that the column list contains everything, but
> considering the list always has to contain all columns, and it would
> break as soon as you add any columns, it seems reasonable (cheaper) to
> just require no column lists.
>

Fair point. We can leave this as it is.

> I also went through the patch and made the naming more consistent. The
> comments used both "column filter" and "column list" randomly, and I
> think the agreement is to use "list" so I adopted that wording.
>
>
> However, while looking at how pgoutput works, I realized one thing - for row
> filters we track them "per operation", depending on which operations are
> defined for a given publication. Shouldn't we do the same thing for
> column lists, really?
>
> I mean, if there are two publications with different column lists, one
> for inserts and the other one for updates, isn't it wrong to merge these
> two column lists?
>

The reason we can't combine row filters for inserts with
updates/deletes is that if inserts have some column that is not
present in RI then during update filtering (for old tuple) it will
give an error as the column won't be present in WAL log.

OTOH, the same problem won't be there for the column list/filter patch
because all the RI columns are there in the column list (for
update/delete) and we don't need to apply a column filter for old
tuples in either update or delete.

Basically, the filter rules are slightly different for row filters and
column lists, so we need them (combine of filters) for one but not for
the other. Now, for the sake of consistency with row filters, we can
do it but as such there won't be any problem or maybe we can just add
a comment for the same in code.
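To make the distinction concrete, here is a hedged SQL sketch (table,
publication, and subscription names are made up) of two publications with
different column lists being combined by one subscription - safe precisely
because the replica identity column must be in any column list used for
updates/deletes:

```sql
-- illustrative sketch only; all names are hypothetical
CREATE TABLE tbl (id int PRIMARY KEY, a int, b int);

-- different column lists for different operations
CREATE PUBLICATION pub_ins FOR TABLE tbl (id, a) WITH (publish = 'insert');
CREATE PUBLICATION pub_upd FOR TABLE tbl (id, b) WITH (publish = 'update');

-- a single subscription to both publications can merge the lists to
-- (id, a, b); unlike row filters, the old tuple of an UPDATE only
-- needs the replica identity column (id), which both lists contain
-- CREATE SUBSCRIPTION sub CONNECTION '...' PUBLICATION pub_ins, pub_upd;
```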

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-21 Thread Amit Kapila
On Fri, Mar 18, 2022 at 8:13 PM Tomas Vondra
 wrote:
>
> Ah, thanks for reminding me - it's hard to keep track of all the issues
> in threads as long as this one.
>
> BTW do you have any opinion on the SET COLUMNS syntax? Peter Smith
> proposed to get rid of it in [1] but I'm not sure that's a good idea.
> Because if we ditch it, then removing the column list would look like this:
>
> ALTER PUBLICATION pub ALTER TABLE tab;
>
> And if we happen to add other per-table options, this would become
> pretty ambiguous.
>
> Actually, do we even want to allow resetting column lists like this? We
> don't allow this for row filters, so if you want to change a row filter
> you have to re-add the table, right?
>

We can use syntax like "alter publication pub1 set table t1 where
(c2 > 10);" to reset the existing row filter. It seems a similar thing
works for column lists as well ("alter publication pub1 set table t1
(c2) where (c2 > 10)"). If I am not missing anything, I don't think we
need additional Alter Table syntax.
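If that understanding is right, resetting a column list (or row filter)
could be sketched like this, with hypothetical names - re-specifying the
table via SET TABLE replaces its previous entry, filters included:

```sql
-- hypothetical sketch: SET TABLE replaces the table's entry
CREATE TABLE t1 (c1 int PRIMARY KEY, c2 int);
CREATE PUBLICATION pub1 FOR TABLE t1 (c1, c2) WHERE (c2 > 10);

-- listing t1 again with no column list and no WHERE clause
-- effectively resets both the column list and the row filter
ALTER PUBLICATION pub1 SET TABLE t1;
```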

> So maybe we should just ditch ALTER
> TABLE entirely.
>

Yeah, I agree especially if my above understanding is correct.

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-21 Thread Amit Kapila
On Sun, Mar 20, 2022 at 4:53 PM Tomas Vondra
 wrote:
>
> On 3/20/22 07:23, Amit Kapila wrote:
> > On Sun, Mar 20, 2022 at 8:41 AM Amit Kapila  wrote:
> >>
> >> On Fri, Mar 18, 2022 at 10:42 PM Tomas Vondra
> >>  wrote:
> >>
> >>> So the question is why those two sync workers never complete - I guess
> >>> there's some sort of lock wait (deadlock?) or infinite loop.
> >>>
> >>
> >> It would be a bit tricky to reproduce this even if the above theory is
> >> correct but I'll try it today or tomorrow.
> >>
> >
> > I am able to reproduce it with the help of a debugger. Firstly, I have
> > added the LOG message and some While (true) loops to debug sync and
> > apply workers. Test setup
> >
> > Node-1:
> > create table t1(c1 int);
> > create table t2(c1 int);
> > insert into t1 values(1);
> > create publication pub1 for table t1;
> > create publication pub2;
> >
> > Node-2:
> > change max_sync_workers_per_subscription to 1 in postgresql.conf
> > create table t1(c1 int);
> > create table t2(c1 int);
> > create subscription sub1 connection 'dbname = postgres' publication pub1;
> >
> > Till this point, just allow debuggers in both workers just continue.
> >
> > Node-1:
> > alter publication pub1 add table t2;
> > insert into t1 values(2);
> >
> > Here, we have to debug the apply worker such that when it tries to
> > apply the insert, stop the debugger in function apply_handle_insert()
> > after doing begin_replication_step().
> >
> > Node-2:
> > alter subscription sub1 set pub1, pub2;
> >
> > Now, continue the debugger of apply worker, it should first start the
> > sync worker and then exit because of parameter change. All of these
> > debugging steps are to just ensure the point that it should first
> > start the sync worker and then exit. After this point, table sync
> > worker never finishes and log is filled with messages: "reached
> > max_sync_workers_per_subscription limit" (a newly added message by me
> > in the attached debug patch).
> >
> > Now, it is not completely clear to me how exactly '013_partition.pl'
> > leads to this situation but there is a possibility based on the LOGs
> > it shows.
> >
>
> Thanks, I'll take a look later. From the description it seems this is an
> issue that existed before any of the patches, right? It might be more
> likely to hit due to some test changes, but the root cause is older.
>

Yes, your understanding is correct. If my understanding is correct,
then we probably just need some changes in the new test to make it
behave as per the current code.

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-20 Thread Tomas Vondra



On 3/20/22 07:23, Amit Kapila wrote:
> On Sun, Mar 20, 2022 at 8:41 AM Amit Kapila  wrote:
>>
>> On Fri, Mar 18, 2022 at 10:42 PM Tomas Vondra
>>  wrote:
>>
>>> So the question is why those two sync workers never complete - I guess
>>> there's some sort of lock wait (deadlock?) or infinite loop.
>>>
>>
>> It would be a bit tricky to reproduce this even if the above theory is
>> correct but I'll try it today or tomorrow.
>>
> 
> I am able to reproduce it with the help of a debugger. Firstly, I have
> added the LOG message and some While (true) loops to debug sync and
> apply workers. Test setup
> 
> Node-1:
> create table t1(c1 int);
> create table t2(c1 int);
> insert into t1 values(1);
> create publication pub1 for table t1;
> create publication pub2;
>
> Node-2:
> change max_sync_workers_per_subscription to 1 in postgresql.conf
> create table t1(c1 int);
> create table t2(c1 int);
> create subscription sub1 connection 'dbname = postgres' publication pub1;
> 
> Till this point, just allow debuggers in both workers just continue.
> 
> Node-1:
> alter publication pub1 add table t2;
> insert into t1 values(2);
> 
> Here, we have to debug the apply worker such that when it tries to
> apply the insert, stop the debugger in function apply_handle_insert()
> after doing begin_replication_step().
> 
> Node-2:
> alter subscription sub1 set pub1, pub2;
> 
> Now, continue the debugger of apply worker, it should first start the
> sync worker and then exit because of parameter change. All of these
> debugging steps are to just ensure the point that it should first
> start the sync worker and then exit. After this point, table sync
> worker never finishes and log is filled with messages: "reached
> max_sync_workers_per_subscription limit" (a newly added message by me
> in the attached debug patch).
> 
> Now, it is not completely clear to me how exactly '013_partition.pl'
> leads to this situation but there is a possibility based on the LOGs
> it shows.
> 

Thanks, I'll take a look later. From the description it seems this is an
issue that existed before any of the patches, right? It might be more
likely to hit due to some test changes, but the root cause is older.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-20 Thread Amit Kapila
On Sun, Mar 20, 2022 at 8:41 AM Amit Kapila  wrote:
>
> On Fri, Mar 18, 2022 at 10:42 PM Tomas Vondra
>  wrote:
>
> > So the question is why those two sync workers never complete - I guess
> > there's some sort of lock wait (deadlock?) or infinite loop.
> >
>
> It would be a bit tricky to reproduce this even if the above theory is
> correct but I'll try it today or tomorrow.
>

I am able to reproduce it with the help of a debugger. Firstly, I have
added the LOG message and some While (true) loops to debug sync and
apply workers. Test setup

Node-1:
create table t1(c1 int);
create table t2(c1 int);
insert into t1 values(1);
create publication pub1 for table t1;
create publication pub2;

Node-2:
change max_sync_workers_per_subscription to 1 in postgresql.conf
create table t1(c1 int);
create table t2(c1 int);
create subscription sub1 connection 'dbname = postgres' publication pub1;

Till this point, just allow debuggers in both workers just continue.

Node-1:
alter publication pub1 add table t2;
insert into t1 values(2);

Here, we have to debug the apply worker such that when it tries to
apply the insert, stop the debugger in function apply_handle_insert()
after doing begin_replication_step().

Node-2:
alter subscription sub1 set pub1, pub2;

Now, continue the debugger of apply worker, it should first start the
sync worker and then exit because of parameter change. All of these
debugging steps are to just ensure the point that it should first
start the sync worker and then exit. After this point, table sync
worker never finishes and log is filled with messages: "reached
max_sync_workers_per_subscription limit" (a newly added message by me
in the attached debug patch).

Now, it is not completely clear to me how exactly '013_partition.pl'
leads to this situation but there is a possibility based on the LOGs
it shows.

-- 
With Regards,
Amit Kapila.


debug_sub_workers_1.patch
Description: Binary data


Re: Column Filtering in Logical Replication

2022-03-19 Thread Amit Kapila
On Fri, Mar 18, 2022 at 10:42 PM Tomas Vondra
 wrote:
>
> On 3/18/22 15:43, Tomas Vondra wrote:
> >>
> >
> > Hmmm. So the theory is that in most runs we manage to sync the tables
> > faster than starting the workers, so we don't hit the limit. But on some
> > machines the sync worker takes a bit longer, we hit the limit. Seems
> > possible, yes. Unfortunately we don't seem to log anything when we hit
> > the limit, so hard to say for sure :-( I suggest we add a WARNING
> > message to logicalrep_worker_launch or something. Not just because of
> > this test, it seems useful in general.
> >
> > However, how come we don't retry the sync? Surely we don't just give up
> > forever, that'd be a pretty annoying behavior. Presumably we just end up
> > sleeping for a long time before restarting the sync worker, somewhere.
> >
>
> I tried lowering the max_sync_workers_per_subscription to 1 and making
> the workers run for a couple of seconds (doing some CPU intensive
> stuff), but everything still works just fine.
>

Did the apply worker restart during that time? If not, you can try
changing some subscription parameters, which leads to its restart. This
has to happen before copy_table has finished. In the LOGS, you should
see the message: "logical replication apply worker for subscription
"" will restart because of a parameter change".
IIUC, the code which doesn't allow the apply worker to restart after
max_sync_workers_per_subscription is reached is as below:
logicalrep_worker_launch()
{
...
if (nsyncworkers >= max_sync_workers_per_subscription)
{
LWLockRelease(LogicalRepWorkerLock);
return;
}
...
}

This happens before we allocate a worker to apply. So, it can happen
only during the restart of the apply worker because we always start
the apply worker first; in that case, it will never restart.

> Looking a bit closer at the logs (from pogona and other), I doubt this
> is about hitting the max_sync_workers_per_subscription limit. Notice we
> start two sync workers, but neither of them ever completes. So we never
> update the sync status or start syncing the remaining tables.
>

I think they never complete because they are in a sort of infinite
loop. If you look at process_syncing_tables_for_sync(), it will
never mark the status as SUBREL_STATE_SYNCDONE unless apply worker has
set it to SUBREL_STATE_CATCHUP. In LogicalRepSyncTableStart(), we do
wait for a state change to catchup via wait_for_worker_state_change(),
but we bail out in that function if the apply worker has died. After
that tablesync worker won't be able to complete because in our case
apply worker won't be able to restart.

> So the question is why those two sync workers never complete - I guess
> there's some sort of lock wait (deadlock?) or infinite loop.
>

It would be a bit tricky to reproduce this even if the above theory is
correct but I'll try it today or tomorrow.

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-18 Thread Tomas Vondra



On 3/18/22 15:43, Tomas Vondra wrote:
> 
> 
> On 3/18/22 06:52, Amit Kapila wrote:
>> On Fri, Mar 18, 2022 at 12:47 AM Tomas Vondra
>>  wrote:
>>>
>>> I pushed the second fix. Interestingly enough, wrasse failed in the
>>> 013_partition test. I don't see how that could be caused by this
>>> particular commit, though - see the pgsql-committers thread [1].
>>>
>>
>> I have a theory about what's going on here. I think this is due to a
>> test added in your previous commit c91f71b9dc. The newly added test
>> hangs in tablesync because there was no apply worker to set the
>> state to SUBREL_STATE_CATCHUP, which blocked tablesync workers from
>> proceeding.
>>
>> See below logs from pogona [1].
>> 2022-03-18 01:33:15.190 CET [2551176][client
>> backend][3/74:0][013_partition.pl] LOG:  statement: ALTER SUBSCRIPTION
>> sub2 SET PUBLICATION pub_lower_level, pub_all
>> 2022-03-18 01:33:15.354 CET [2551193][logical replication
>> worker][4/57:0][] LOG:  logical replication apply worker for
>> subscription "sub2" has started
>> 2022-03-18 01:33:15.605 CET [2551176][client
>> backend][:0][013_partition.pl] LOG:  disconnection: session time:
>> 0:00:00.415 user=bf database=postgres host=[local]
>> 2022-03-18 01:33:15.607 CET [2551209][logical replication
>> worker][3/76:0][] LOG:  logical replication table synchronization
>> worker for subscription "sub2", table "tab4_1" has started
>> 2022-03-18 01:33:15.609 CET [2551211][logical replication
>> worker][5/11:0][] LOG:  logical replication table synchronization
>> worker for subscription "sub2", table "tab3" has started
>> 2022-03-18 01:33:15.617 CET [2551193][logical replication
>> worker][4/62:0][] LOG:  logical replication apply worker for
>> subscription "sub2" will restart because of a parameter change
>>
>> You will notice that the apply worker is never restarted after a
>> parameter change. The reason was that the particular subscription
>> reaches the limit of max_sync_workers_per_subscription after which we
>> don't allow to restart the apply worker. I think you might want to
>> increase the values of
>> max_sync_workers_per_subscription/max_logical_replication_workers to
>> make it work.
>>
> 
> Hmmm. So the theory is that in most runs we manage to sync the tables
> faster than starting the workers, so we don't hit the limit. But on some
> machines the sync worker takes a bit longer, we hit the limit. Seems
> possible, yes. Unfortunately we don't seem to log anything when we hit
> the limit, so hard to say for sure :-( I suggest we add a WARNING
> message to logicalrep_worker_launch or something. Not just because of
> this test, it seems useful in general.
> 
> However, how come we don't retry the sync? Surely we don't just give up
> forever, that'd be a pretty annoying behavior. Presumably we just end up
> sleeping for a long time before restarting the sync worker, somewhere.
> 

I tried lowering the max_sync_workers_per_subscription to 1 and making
the workers run for a couple of seconds (doing some CPU intensive
stuff), but everything still works just fine.

Looking a bit closer at the logs (from pogona and other), I doubt this
is about hitting the max_sync_workers_per_subscription limit. Notice we
start two sync workers, but neither of them ever completes. So we never
update the sync status or start syncing the remaining tables.

So the question is why those two sync workers never complete - I guess
there's some sort of lock wait (deadlock?) or infinite loop.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-18 Thread Tomas Vondra



On 3/18/22 06:52, Amit Kapila wrote:
> On Fri, Mar 18, 2022 at 12:47 AM Tomas Vondra
>  wrote:
>>
>> I pushed the second fix. Interestingly enough, wrasse failed in the
>> 013_partition test. I don't see how that could be caused by this
>> particular commit, though - see the pgsql-committers thread [1].
>>
> 
> I have a theory about what's going on here. I think this is due to a
> test added in your previous commit c91f71b9dc. The newly added test
> hangs in tablesync because there was no apply worker to set the
> state to SUBREL_STATE_CATCHUP, which blocked tablesync workers from
> proceeding.
> 
> See below logs from pogona [1].
> 2022-03-18 01:33:15.190 CET [2551176][client
> backend][3/74:0][013_partition.pl] LOG:  statement: ALTER SUBSCRIPTION
> sub2 SET PUBLICATION pub_lower_level, pub_all
> 2022-03-18 01:33:15.354 CET [2551193][logical replication
> worker][4/57:0][] LOG:  logical replication apply worker for
> subscription "sub2" has started
> 2022-03-18 01:33:15.605 CET [2551176][client
> backend][:0][013_partition.pl] LOG:  disconnection: session time:
> 0:00:00.415 user=bf database=postgres host=[local]
> 2022-03-18 01:33:15.607 CET [2551209][logical replication
> worker][3/76:0][] LOG:  logical replication table synchronization
> worker for subscription "sub2", table "tab4_1" has started
> 2022-03-18 01:33:15.609 CET [2551211][logical replication
> worker][5/11:0][] LOG:  logical replication table synchronization
> worker for subscription "sub2", table "tab3" has started
> 2022-03-18 01:33:15.617 CET [2551193][logical replication
> worker][4/62:0][] LOG:  logical replication apply worker for
> subscription "sub2" will restart because of a parameter change
> 
> You will notice that the apply worker is never restarted after a
> parameter change. The reason was that the particular subscription
> reaches the limit of max_sync_workers_per_subscription after which we
> don't allow to restart the apply worker. I think you might want to
> increase the values of
> max_sync_workers_per_subscription/max_logical_replication_workers to
> make it work.
> 

Hmmm. So the theory is that in most runs we manage to sync the tables
faster than starting the workers, so we don't hit the limit. But on some
machines the sync worker takes a bit longer, we hit the limit. Seems
possible, yes. Unfortunately we don't seem to log anything when we hit
the limit, so hard to say for sure :-( I suggest we add a WARNING
message to logicalrep_worker_launch or something. Not just because of
this test, it seems useful in general.

However, how come we don't retry the sync? Surely we don't just give up
forever, that'd be a pretty annoying behavior. Presumably we just end up
sleeping for a long time before restarting the sync worker, somewhere.

>> I'd like to test & polish the main patch over the weekend, and get it
>> committed early next week. Unless someone thinks it's definitely not
>> ready for that ...
>>
> 
> I think it is in good shape but apart from cleanup, there are issues
> with dependency handling which I have analyzed and reported as one of
> the comments in the email [2]. I was getting some weird behavior
> during my testing due to that. Apart from that still the patch has DDL
> handling code in tablecmds.c which probably is not required.
> Similarly, Shi-San has reported an issue with replica identity full in her
> email [3]. It is up to you what to do here but it would be good if you
> can once share the patch after fixing these issues so that we can
> re-test/review it.

Ah, thanks for reminding me - it's hard to keep track of all the issues
in threads as long as this one.

BTW do you have any opinion on the SET COLUMNS syntax? Peter Smith
proposed to get rid of it in [1] but I'm not sure that's a good idea.
Because if we ditch it, then removing the column list would look like this:

ALTER PUBLICATION pub ALTER TABLE tab;

And if we happen to add other per-table options, this would become
pretty ambiguous.

Actually, do we even want to allow resetting column lists like this? We
don't allow this for row filters, so if you want to change a row filter
you have to re-add the table, right? So maybe we should just ditch ALTER
TABLE entirely.

regards

[1]
https://www.postgresql.org/message-id/CAHut%2BPtc7Rh187eQKrxdUmUNWyfxz7OkhYAX%3DAW411Qwxya0LQ%40mail.gmail.com

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-17 Thread Amit Kapila
On Fri, Mar 18, 2022 at 12:47 AM Tomas Vondra
 wrote:
>
> I pushed the second fix. Interestingly enough, wrasse failed in the
> 013_partition test. I don't see how that could be caused by this
> particular commit, though - see the pgsql-committers thread [1].
>

I have a theory about what's going on here. I think this is due to a
test added in your previous commit c91f71b9dc. The newly added test
hangs in tablesync because there was no apply worker to set the
state to SUBREL_STATE_CATCHUP, which blocked tablesync workers from
proceeding.

See below logs from pogona [1].
2022-03-18 01:33:15.190 CET [2551176][client
backend][3/74:0][013_partition.pl] LOG:  statement: ALTER SUBSCRIPTION
sub2 SET PUBLICATION pub_lower_level, pub_all
2022-03-18 01:33:15.354 CET [2551193][logical replication
worker][4/57:0][] LOG:  logical replication apply worker for
subscription "sub2" has started
2022-03-18 01:33:15.605 CET [2551176][client
backend][:0][013_partition.pl] LOG:  disconnection: session time:
0:00:00.415 user=bf database=postgres host=[local]
2022-03-18 01:33:15.607 CET [2551209][logical replication
worker][3/76:0][] LOG:  logical replication table synchronization
worker for subscription "sub2", table "tab4_1" has started
2022-03-18 01:33:15.609 CET [2551211][logical replication
worker][5/11:0][] LOG:  logical replication table synchronization
worker for subscription "sub2", table "tab3" has started
2022-03-18 01:33:15.617 CET [2551193][logical replication
worker][4/62:0][] LOG:  logical replication apply worker for
subscription "sub2" will restart because of a parameter change

You will notice that the apply worker is never restarted after a
parameter change. The reason was that the particular subscription
reaches the limit of max_sync_workers_per_subscription after which we
don't allow to restart the apply worker. I think you might want to
increase the values of
max_sync_workers_per_subscription/max_logical_replication_workers to
make it work.
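A sketch of the kind of TAP-test settings bump being suggested (values are illustrative, not tuned):

```
# postgresql.conf for the subscriber node in 013_partition.pl
# (illustrative values; the point is to leave headroom for the apply
# worker to restart while sync workers are still running)
max_logical_replication_workers = 10
max_sync_workers_per_subscription = 4
```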

> I'd like to test & polish the main patch over the weekend, and get it
> committed early next week. Unless someone thinks it's definitely not
> ready for that ...
>

I think it is in good shape but apart from cleanup, there are issues
with dependency handling which I have analyzed and reported as one of
the comments in the email [2]. I was getting some weird behavior
during my testing due to that. Apart from that still the patch has DDL
handling code in tablecmds.c which probably is not required.
Similarly, Shi-San has reported an issue with replica full in her
email [3]. It is up to you what to do here but it would be good if you
can once share the patch after fixing these issues so that we can
re-test/review it.


[1] - 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona=2022-03-17%2023%3A10%3A04
[2] - 
https://www.postgresql.org/message-id/CAA4eK1KR%2ByUQquK0Bx9uO3eb5xB1e0rAD9xKf-ddm5nSf4WfNg%40mail.gmail.com
[3] - 
https://www.postgresql.org/message-id/TYAPR01MB6315D664D926EF66DD6E91FCFD109%40TYAPR01MB6315.jpnprd01.prod.outlook.com

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-17 Thread Tomas Vondra
I pushed the second fix. Interestingly enough, wrasse failed in the
013_partition test. I don't see how that could be caused by this
particular commit, though - see the pgsql-committers thread [1].

I'd like to test & polish the main patch over the weekend, and get it
committed early next week. Unless someone thinks it's definitely not
ready for that ...


[1]
https://www.postgresql.org/message-id/E1nUsch-0008rQ-FW%40gemulon.postgresql.org

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-17 Thread Peter Eisentraut
I notice that the publication.sql regression tests contain a number of 
comments like


+-- error: replica identity "a" not included in the column list
+ALTER PUBLICATION testpub_fortable ADD TABLE testpub_tbl5 (b, c);

but the error doesn't actually happen, because of the way the replica 
identity checking was changed.  This needs to be checked again.
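To illustrate the behavior change being described: with the replica identity check moved from DDL time to DML time, a sketch of what now happens (based on the error texts quoted elsewhere in this thread; exact wording may differ, and the publication is assumed to already exist as in publication.sql):

```sql
CREATE TABLE testpub_tbl5 (a int PRIMARY KEY, b text, c text);
-- No longer errors at DDL time, even though the replica identity
-- column "a" is not in the column list:
ALTER PUBLICATION testpub_fortable ADD TABLE testpub_tbl5 (b, c);
-- The check now fires when an UPDATE or DELETE is attempted:
UPDATE testpub_tbl5 SET b = 'x';
-- ERROR:  cannot update table "testpub_tbl5"
-- DETAIL:  Column list used by the publication does not cover the replica identity.
```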





Re: Column Filtering in Logical Replication

2022-03-16 Thread Tomas Vondra



On 3/15/22 09:30, Tomas Vondra wrote:
> 
> 
> On 3/15/22 05:43, Amit Kapila wrote:
>> On Mon, Mar 14, 2022 at 4:42 PM houzj.f...@fujitsu.com
>>  wrote:
>>>
>>> On Monday, March 14, 2022 5:08 AM Tomas Vondra 
>>>  wrote:

 On 3/12/22 05:30, Amit Kapila wrote:
>> ...
>
> Okay, please find attached. I have done basic testing of this, if we
> agree with this approach then this will require some more testing.
>

 Thanks, the proposed changes seem like a clear improvement, so I've
 added them, with some minor tweaks (mostly to comments).
>>>
>>> Hi,
>>>
>>> Thanks for updating the patches !
>>> And sorry for the row filter bug caused by my mistake.
>>>
>>> I looked at the two fixup patches. I am thinking would it be better if we
>>> add one testcase for these two bugs? Maybe like the attachment.
>>>
>>
>> Your tests look good to me. We might want to add some comments for
>> each test but I guess that can be done before committing. Tomas, it
>> seems you are planning to push these bug fixes, do let me know if you
>> want me to take care of these while you focus on the main patch? I
>> think the first patch needs to be backpatched till 13 and the second
>> one is for just HEAD.
>>
> 
> Yeah, I plan to push the fixes later today. I'll polish them a bit
> first, and merge the tests (shared by Hou zj) into the patches etc.
> 

I've pushed (and backpatched to 13+) the fix for the publish_as_relid
issue, including the test. I tweaked the test a bit, to check both
orderings of the publication list.

While doing that, I discovered yet ANOTHER bug in the publish_as_relid
loop, affecting 12+13. There was a break once all actions were
replicated, but skipping additional publications ignores the fact that
the publications may replicate a different (higher-up) ancestor.

I removed the break, if anyone thinks this optimization is worth it we
could still do that once we replicate the top-most ancestor.
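A hypothetical scenario for the removed break: two publications that resolve the same leaf partition to different ancestors, so stopping at the first publication covering all actions would be wrong (names are illustrative only):

```sql
CREATE TABLE root (a int) PARTITION BY LIST (a);
CREATE TABLE mid  PARTITION OF root FOR VALUES IN (1) PARTITION BY LIST (a);
CREATE TABLE leaf PARTITION OF mid  FOR VALUES IN (1);
CREATE PUBLICATION pub_mid  FOR TABLE mid  WITH (publish_via_partition_root = true);
CREATE PUBLICATION pub_root FOR TABLE root WITH (publish_via_partition_root = true);
-- Breaking out of the loop once pub_mid covers all actions would
-- publish changes to "leaf" as "mid", never considering the
-- higher-up ancestor "root" from pub_root.
```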


I'll push the second fix soon.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-16 Thread Amit Kapila
On Tue, Mar 15, 2022 at 7:38 AM shiy.f...@fujitsu.com
 wrote:
>
> On Mon, Mar 14, 2022 5:08 AM Tomas Vondra  
> wrote:
>
> 3. src/backend/commands/publicationcmds.c
> +/*
> + * Check if all columns referenced in the column list are part of the
> + * REPLICA IDENTITY index or not.
> + *
> + * Returns true if any invalid column is found.
> + */
>
> The comment for pub_collist_contains_invalid_column() seems wrong.  Should it 
> be
> "Check if all REPLICA IDENTITY columns are covered by the column list or not"?
>

On similar lines, I think the errdetail for the messages below needs to be changed.
ereport(ERROR,
+ (errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
+ errmsg("cannot update table \"%s\"",
+ RelationGetRelationName(rel)),
+ errdetail("Column list used by the publication does not cover the
replica identity.")));
  else if (cmd == CMD_DELETE && !pubdesc.rf_valid_for_delete)
  ereport(ERROR,
  (errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
  errmsg("cannot delete from table \"%s\"",
  RelationGetRelationName(rel)),
  errdetail("Column used in the publication WHERE expression is not
part of the replica identity.")));
+ else if (cmd == CMD_DELETE && !pubdesc.cols_valid_for_delete)
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
+ errmsg("cannot delete from table \"%s\"",
+ RelationGetRelationName(rel)),
+ errdetail("Column list used by the publication does not cover the
replica identity.")));

Some assorted comments:

1. As mentioned previously as well[1], the change in ATExecDropColumn
is not required. Similarly, the change you seem to agree upon in
logicalrep_write_update[2] doesn't seem to be present.

2. I think the dependency handling in publication_set_table_columns()
has problems. While removing existing dependencies, it uses
PublicationRelationId as classId whereas while adding new dependencies
it uses PublicationRelRelationId as classId. This will create problems
while removing columns from table. For example,
postgres=# create table t1(c1 int, c2 int, c3 int);
CREATE TABLE
postgres=# create publication pub1 for table t1(c1, c2);
CREATE PUBLICATION
postgres=# select * from pg_depend where classid = 6106 or refclassid
= 6106 or classid = 6104;
 classid | objid | objsubid | refclassid | refobjid | refobjsubid | deptype
---------+-------+----------+------------+----------+-------------+---------
    6106 | 16409 |        0 |       1259 |    16405 |           1 | a
    6106 | 16409 |        0 |       1259 |    16405 |           2 | a
    6106 | 16409 |        0 |       6104 |    16408 |           0 | a
    6106 | 16409 |        0 |       1259 |    16405 |           0 | a
(4 rows)

Till here everything is fine.

postgres=# Alter publication pub1 alter table t1 set columns(c2);
ALTER PUBLICATION
postgres=# select * from pg_depend where classid = 6106 or refclassid
= 6106 or classid = 6104;
 classid | objid | objsubid | refclassid | refobjid | refobjsubid | deptype
---------+-------+----------+------------+----------+-------------+---------
    6106 | 16409 |        0 |       1259 |    16405 |           1 | a
    6106 | 16409 |        0 |       1259 |    16405 |           2 | a
    6106 | 16409 |        0 |       6104 |    16408 |           0 | a
    6106 | 16409 |        0 |       1259 |    16405 |           0 | a
    6106 | 16409 |        0 |       1259 |    16405 |           2 | a
(5 rows)

Now without removing dependencies for columns 1 and 2, it added a new
dependency for column 2.

3.
@@ -930,8 +1054,24 @@ copy_table(Relation rel)
...
+ for (int i = 0; i < lrel.natts; i++)
+ {
+ if (i > 0)
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_identifier(lrel.attnames[i]));
+ }
...
...
for (int i = 0; i < lrel.natts; i++)
{
appendStringInfoString(&cmd, quote_identifier(lrel.attnames[i]));
if (i < lrel.natts - 1)
appendStringInfoString(&cmd, ", ");
}

In the same function, we use two different styles to achieve the same
thing. I think it is better to use the same style (probably existing)
at both places for the sake of consistency.
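For reference, a unified version of the two loops might look like this (a sketch only; it assumes the StringInfo being built is named cmd, as in tablesync.c's copy_table):

```c
/* Sketch: build a comma-separated, quoted column list in one style. */
for (int i = 0; i < lrel.natts; i++)
{
	if (i > 0)
		appendStringInfoString(&cmd, ", ");
	appendStringInfoString(&cmd, quote_identifier(lrel.attnames[i]));
}
```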

4.
+  
+   The ALTER TABLE ... SET COLUMNS variant allows changing
+   the set of columns that are included in the publication.  If a column list
+   is specified, it must include the replica identity columns.
+  

I think the second part holds true only for update/delete publications.

5.
+ * XXX Should this detect duplicate columns?
+ */
+static void
+publication_translate_columns(Relation targetrel, List *columns,
+   int *natts, AttrNumber **attrs)
{
...
+ if (bms_is_member(attnum, set))
+ ereport(ERROR,
+ errcode(ERRCODE_DUPLICATE_OBJECT),
+ errmsg("duplicate column \"%s\" in publication column list",
+colname));
...
}

It seems we already detect duplicate columns in this function. So XXX
part of the comment doesn't seem to be required.

6.
+ * XXX The name is a bit misleading, because we don't really transform
+ * anything here - we merely check the column list is compatible with the
+ * definition of the publication (with 

Re: Column Filtering in Logical Replication

2022-03-15 Thread Tomas Vondra



On 3/15/22 05:43, Amit Kapila wrote:
> On Mon, Mar 14, 2022 at 4:42 PM houzj.f...@fujitsu.com
>  wrote:
>>
>> On Monday, March 14, 2022 5:08 AM Tomas Vondra 
>>  wrote:
>>>
>>> On 3/12/22 05:30, Amit Kapila wrote:
> ...

 Okay, please find attached. I have done basic testing of this, if we
 agree with this approach then this will require some more testing.

>>>
>>> Thanks, the proposed changes seem like a clear improvement, so I've
>>> added them, with some minor tweaks (mostly to comments).
>>
>> Hi,
>>
>> Thanks for updating the patches !
>> And sorry for the row filter bug caused by my mistake.
>>
>> I looked at the two fixup patches. I am thinking would it be better if we
>> add one testcase for these two bugs? Maybe like the attachment.
>>
> 
> Your tests look good to me. We might want to add some comments for
> each test but I guess that can be done before committing. Tomas, it
> seems you are planning to push these bug fixes, do let me know if you
> want me to take care of these while you focus on the main patch? I
> think the first patch needs to be backpatched till 13 and the second
> one is for just HEAD.
> 

Yeah, I plan to push the fixes later today. I'll polish them a bit
first, and merge the tests (shared by Hou zj) into the patches etc.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-14 Thread Amit Kapila
On Mon, Mar 14, 2022 at 7:02 PM Tomas Vondra
 wrote:
>
> On 3/14/22 13:47, Amit Kapila wrote:
> > On Mon, Mar 14, 2022 at 5:42 PM Tomas Vondra
> >  wrote:
> >>
> >> On 3/14/22 12:12, houzj.f...@fujitsu.com wrote:
> >>> On Monday, March 14, 2022 5:08 AM Tomas Vondra 
> >>>  wrote:
> >>
> >> Anyway, the fix does not address tablesync, as explained in [1]. I'm not
> >> sure what to do about it - in principle, we could calculate which
> >> relations to sync, and then eliminate "duplicates" (i.e. relations where
> >> we are going to sync an ancestor).
> >>
> >
> > As mentioned in my previous email [1], this appears to be a base code
> > issue (even without row filter or column filter work), so it seems
> > better to deal with it separately. It has been reported separately as
> > well [2] where we found some similar issues.
> >
>
> Right. I don't want to be waiting for that fix either, that'd block this
> patch unnecessarily. If there are no other comments, I'll go ahead,
> polish the existing patches a bit more and get them committed. We can
> worry about this pre-existing issue later.
>

I think the first two patches are ready to go. I haven't read the
latest version in detail but I have in mind that we want to get this
in for PG-15.

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-14 Thread Amit Kapila
On Mon, Mar 14, 2022 at 4:42 PM houzj.f...@fujitsu.com
 wrote:
>
> On Monday, March 14, 2022 5:08 AM Tomas Vondra 
>  wrote:
> >
> > On 3/12/22 05:30, Amit Kapila wrote:
> > >> ...
> > >
> > > Okay, please find attached. I have done basic testing of this, if we
> > > agree with this approach then this will require some more testing.
> > >
> >
> > Thanks, the proposed changes seem like a clear improvement, so I've
> > added them, with some minor tweaks (mostly to comments).
>
> Hi,
>
> Thanks for updating the patches !
> And sorry for the row filter bug caused by my mistake.
>
> I looked at the two fixup patches. I am thinking would it be better if we
> add one testcase for these two bugs? Maybe like the attachment.
>

Your tests look good to me. We might want to add some comments for
each test but I guess that can be done before committing. Tomas, it
seems you are planning to push these bug fixes, do let me know if you
want me to take care of these while you focus on the main patch? I
think the first patch needs to be backpatched till 13 and the second
one is for just HEAD.

-- 
With Regards,
Amit Kapila.




RE: Column Filtering in Logical Replication

2022-03-14 Thread shiy.f...@fujitsu.com
On Mon, Mar 14, 2022 5:08 AM Tomas Vondra  wrote:
> 
> On 3/12/22 05:30, Amit Kapila wrote:
> >> ...
> >
> > Okay, please find attached. I have done basic testing of this, if we
> > agree with this approach then this will require some more testing.
> >
> 
> Thanks, the proposed changes seem like a clear improvement, so I've
> added them, with some minor tweaks (mostly to comments).
> 
> I've also included the memory context rename (entry_changes), and the
> change proposed by Wang Wei, using a single SQL command in tablesync.
> 
> And I've renamed the per-entry memory context to entry_cxt, and used it
> for the column list.
> 

Thanks for your patch.
Here are some comments for column filter main patch (0003 patch).

1. doc/src/sgml/catalogs.sgml
@@ -6263,6 +6263,19 @@ SCRAM-SHA-256$iteration count:
Reference to schema
   
  
+
+ 
+  
+   prattrs int2vector
+   (references pg_attribute.attnum)
+  
+  
+   This is an array of values that indicates which table columns are
+   part of the publication.  For example, a value of 1 3
+   would mean that the first and the third table columns are published.
+   A null value indicates that all columns are published.
+  
+ 
 

   

This change was added to pg_publication_namespace view. I think it should be
added to pg_publication_rel view, right?

2. src/backend/replication/pgoutput/pgoutput.c
@@ -188,6 +202,7 @@ static EState *create_estate_for_relation(Relation rel);
 static void pgoutput_row_filter_init(PGOutputData *data,
                                      List *publications,
                                      RelationSyncEntry *entry);
+
 static bool pgoutput_row_filter_exec_expr(ExprState *state,
                                           ExprContext *econtext);
 static bool pgoutput_row_filter(Relation relation, TupleTableSlot *old_slot,

Should we remove this change?

3. src/backend/commands/publicationcmds.c
+/*
+ * Check if all columns referenced in the column list are part of the
+ * REPLICA IDENTITY index or not.
+ *
+ * Returns true if any invalid column is found.
+ */

The comment for pub_collist_contains_invalid_column() seems wrong.  Should it be
"Check if all REPLICA IDENTITY columns are covered by the column list or not"?

4.
The patch doesn't allow delete and update operations if the target table uses
replica identity full and it is published with a column list specified, even if
the column list includes all columns in the table.

For example:
create table tbl (a int, b int, c int);
create publication pub for table tbl (a, b, c);
alter table tbl replica identity full;

postgres=# delete from tbl;
ERROR:  cannot delete from table "tbl"
DETAIL:  Column list used by the publication does not cover the replica 
identity.

Should we allow this case? I think it doesn't cause any harm.

5. 
Maybe we need some changes for tab-complete.c.

Regards,
Shi yu


Re: Column Filtering in Logical Replication

2022-03-14 Thread Tomas Vondra



On 3/14/22 13:47, Amit Kapila wrote:
> On Mon, Mar 14, 2022 at 5:42 PM Tomas Vondra
>  wrote:
>>
>> On 3/14/22 12:12, houzj.f...@fujitsu.com wrote:
>>> On Monday, March 14, 2022 5:08 AM Tomas Vondra 
>>>  wrote:
>>
>> Anyway, the fix does not address tablesync, as explained in [1]. I'm not
>> sure what to do about it - in principle, we could calculate which
>> relations to sync, and then eliminate "duplicates" (i.e. relations where
>> we are going to sync an ancestor).
>>
> 
> As mentioned in my previous email [1], this appears to be a base code
> issue (even without row filter or column filter work), so it seems
> better to deal with it separately. It has been reported separately as
> well [2] where we found some similar issues.
> 

Right. I don't want to be waiting for that fix either, that'd block this
patch unnecessarily. If there are no other comments, I'll go ahead,
polish the existing patches a bit more and get them committed. We can
worry about this pre-existing issue later.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-14 Thread Amit Kapila
On Mon, Mar 14, 2022 at 5:42 PM Tomas Vondra
 wrote:
>
> On 3/14/22 12:12, houzj.f...@fujitsu.com wrote:
> > On Monday, March 14, 2022 5:08 AM Tomas Vondra 
> >  wrote:
>
> Anyway, the fix does not address tablesync, as explained in [1]. I'm not
> sure what to do about it - in principle, we could calculate which
> relations to sync, and then eliminate "duplicates" (i.e. relations where
> we are going to sync an ancestor).
>

As mentioned in my previous email [1], this appears to be a base code
issue (even without row filter or column filter work), so it seems
better to deal with it separately. It has been reported separately as
well [2] where we found some similar issues.


[1] - 
https://www.postgresql.org/message-id/CAA4eK1LSb-xrvGEm3ShaRA%3DMkdii2d%2B4vqh9DGPvVDA%2BD9ibYw%40mail.gmail.com
[2] - 
https://www.postgresql.org/message-id/os0pr01mb5716dc2982cc735fde38880494...@os0pr01mb5716.jpnprd01.prod.outlook.com

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-14 Thread Tomas Vondra



On 3/14/22 12:12, houzj.f...@fujitsu.com wrote:
> On Monday, March 14, 2022 5:08 AM Tomas Vondra 
>  wrote:
>>
>> On 3/12/22 05:30, Amit Kapila wrote:
 ...
>>>
>>> Okay, please find attached. I have done basic testing of this, if we
>>> agree with this approach then this will require some more testing.
>>>
>>
>> Thanks, the proposed changes seem like a clear improvement, so I've
>> added them, with some minor tweaks (mostly to comments).
> 
> Hi,
> 
> Thanks for updating the patches !
> And sorry for the row filter bug caused by my mistake.
> 
> I looked at the two fixup patches. I am thinking would it be better if we
> add one testcase for these two bugs? Maybe like the attachment.
> 

Yeah, a test would be nice - I'll take a look later.

Anyway, the fix does not address tablesync, as explained in [1]. I'm not
sure what to do about it - in principle, we could calculate which
relations to sync, and then eliminate "duplicates" (i.e. relations where
we are going to sync an ancestor).


regards

[1]
https://www.postgresql.org/message-id/822a8e40-287c-59ff-0ea9-35eb759f4fe6%40enterprisedb.com


-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-14 Thread Tomas Vondra



On 3/14/22 10:53, Amit Kapila wrote:
> On Mon, Mar 14, 2022 at 2:37 AM Tomas Vondra
>  wrote:
>>
>> On 3/12/22 05:30, Amit Kapila wrote:
 ...
>>>
>>> Okay, please find attached. I have done basic testing of this, if we
>>> agree with this approach then this will require some more testing.
>>>
>>
>> Thanks, the proposed changes seem like a clear improvement, so I've
>> added them, with some minor tweaks (mostly to comments).
>>
> 
> One minor point: Did you intentionally remove
> list_free(rel_publications) before resetting the list from the second
> patch? The memory for rel_publications is allocated in
> TopTransactionContext, so a large transaction touching many relations
> will only free this at end of the transaction which may not be a big
> deal as we don't do this every time. We free this list a few lines
> down in successful case so this appears slightly odd to me but I am
> fine if you think it doesn't matter.

The removal was not intentional, but I don't think it's an issue exactly
because it's a tiny amount of memory and we'll release it at the end of
the transaction. Which should not take long.

regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




RE: Column Filtering in Logical Replication

2022-03-14 Thread houzj.f...@fujitsu.com
On Monday, March 14, 2022 5:08 AM Tomas Vondra  
wrote:
> 
> On 3/12/22 05:30, Amit Kapila wrote:
> >> ...
> >
> > Okay, please find attached. I have done basic testing of this, if we
> > agree with this approach then this will require some more testing.
> >
> 
> Thanks, the proposed changes seem like a clear improvement, so I've
> added them, with some minor tweaks (mostly to comments).

Hi,

Thanks for updating the patches !
And sorry for the row filter bug caused by my mistake.

I looked at the two fixup patches. I am thinking would it be better if we
add one testcase for these two bugs? Maybe like the attachment.

(Attach the fixup patch to make the cfbot happy)

Best regards,
Hou zj




0003-fixup-row-filter-publications-20220313.patch
Description: 0003-fixup-row-filter-publications-20220313.patch


0001-fixup-publish_as_relid-20220313.patch
Description: 0001-fixup-publish_as_relid-20220313.patch


0002-testcase-for-publish-as-relid.patch
Description: 0002-testcase-for-publish-as-relid.patch


0004-testcase-for-row-filter-publication.patch
Description: 0004-testcase-for-row-filter-publication.patch


Re: Column Filtering in Logical Replication

2022-03-14 Thread Amit Kapila
On Mon, Mar 14, 2022 at 2:37 AM Tomas Vondra
 wrote:
>
> On 3/12/22 05:30, Amit Kapila wrote:
> >> ...
> >
> > Okay, please find attached. I have done basic testing of this, if we
> > agree with this approach then this will require some more testing.
> >
>
> Thanks, the proposed changes seem like a clear improvement, so I've
> added them, with some minor tweaks (mostly to comments).
>

One minor point: Did you intentionally remove
list_free(rel_publications) before resetting the list from the second
patch? The memory for rel_publications is allocated in
TopTransactionContext, so a large transaction touching many relations
will only free this at end of the transaction which may not be a big
deal as we don't do this every time. We free this list a few lines
down in successful case so this appears slightly odd to me but I am
fine if you think it doesn't matter.


-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-11 Thread Amit Kapila
On Fri, Mar 11, 2022 at 6:20 PM Tomas Vondra
 wrote:
>
> On 3/11/22 10:52, Amit Kapila wrote:
> > On Fri, Mar 11, 2022 at 7:26 AM Tomas Vondra
> >  wrote:
> >>
> >> On 3/10/22 20:10, Tomas Vondra wrote:
> >>>
> >>>
> >>> FWIW I think the reason is pretty simple - pgoutput_row_filter_init is
> >>> broken. It assumes you can just do this
> >>>
> >>> rftuple = SearchSysCache2(PUBLICATIONRELMAP,
> >>>   ObjectIdGetDatum(entry->publish_as_relid),
> >>>   ObjectIdGetDatum(pub->oid));
> >>>
> >>> if (HeapTupleIsValid(rftuple))
> >>> {
> >>> /* Null indicates no filter. */
> >>> rfdatum = SysCacheGetAttr(PUBLICATIONRELMAP, rftuple,
> >>>   Anum_pg_publication_rel_prqual,
> >>>   &pub_no_filter);
> >>> }
> >>> else
> >>> {
> >>> pub_no_filter = true;
> >>> }
> >>>
> >>>
> >>> and pub_no_filter=true means there's no filter at all. Which is
> >>> nonsense, because we're using publish_as_relid here - the publication
> >>> may not include this particular ancestor, in which case we need to just
> >>> ignore this publication.
> >>>
> >>> So yeah, this needs to be reworked.
> >>>
> >>
> >> I spent a bit of time looking at this, and I think a minor change in
> >> get_rel_sync_entry() fixes this - it's enough to ensure rel_publications
> >> only includes publications that actually include publish_as_relid.
> >>
> >
> > Thanks for looking into this. I think in the first patch before
> > calling get_partition_ancestors() we need to ensure it is a partition
> > (the call expects that) and pubviaroot is true.
>
> Does the call really require that?
>

There may not be any harm but I have mentioned it because (a) the
comments atop get_partition_ancestors(...it should only be called when
it is known that the relation is a partition.) indicates the same; (b)
all existing callers seems to use it only for partitions.

> Also, I'm not sure why we'd need to
> look at pubviaroot - that's already considered earlier when calculating
> publish_as_relid, here we just need to know the relationship of the two
> OIDs (if one is ancestor/child of the other).
>

I thought of avoiding calling get_partition_ancestors when pubviaroot
is not set. It will unnecessarily check the whole hierarchy for
partitions even when it is not required. I agree that this is not a
common code path but still felt why do it needlessly?

> > I think it would be
> > good if we can avoid an additional call to get_partition_ancestors()
> > as it could be costly.
>
> Maybe. OTOH we should do this only very rarely anyway.
>
> > I wonder why it is not sufficient to ensure
> > that publish_as_relid exists after ancestor in ancestors list before
> > assigning the ancestor to publish_as_relid? This only needs to be done
> > in case of (if (!publish)). I have not tried this so I could be wrong.
> >
>
> I'm not sure what exactly are you proposing. Maybe try coding it? That's
> probably faster than trying to describe what the code might do ...
>

Okay, please find attached. I have done basic testing of this, if we
agree with this approach then this will require some more testing.

-- 
With Regards,
Amit Kapila.


v1-0001-fixup-publish_as_relid.patch
Description: Binary data


v1-0002-fixup-row-filter-publications.patch
Description: Binary data


Re: Column Filtering in Logical Replication

2022-03-11 Thread Tomas Vondra



On 3/11/22 10:52, Amit Kapila wrote:
> On Fri, Mar 11, 2022 at 7:26 AM Tomas Vondra
>  wrote:
>>
>> On 3/10/22 20:10, Tomas Vondra wrote:
>>>
>>>
>>> FWIW I think the reason is pretty simple - pgoutput_row_filter_init is
>>> broken. It assumes you can just do this
>>>
>>> rftuple = SearchSysCache2(PUBLICATIONRELMAP,
>>>   ObjectIdGetDatum(entry->publish_as_relid),
>>>   ObjectIdGetDatum(pub->oid));
>>>
>>> if (HeapTupleIsValid(rftuple))
>>> {
>>> /* Null indicates no filter. */
>>> rfdatum = SysCacheGetAttr(PUBLICATIONRELMAP, rftuple,
>>>   Anum_pg_publication_rel_prqual,
>>>   &pub_no_filter);
>>> }
>>> else
>>> {
>>> pub_no_filter = true;
>>> }
>>>
>>>
>>> and pub_no_filter=true means there's no filter at all. Which is
>>> nonsense, because we're using publish_as_relid here - the publication
>>> may not include this particular ancestor, in which case we need to just
>>> ignore this publication.
>>>
>>> So yeah, this needs to be reworked.
>>>
>>
>> I spent a bit of time looking at this, and I think a minor change in
>> get_rel_sync_entry() fixes this - it's enough to ensure rel_publications
>> only includes publications that actually include publish_as_relid.
>>
> 
> Thanks for looking into this. I think in the first patch before
> calling get_partition_ancestors() we need to ensure it is a partition
> (the call expects that) and pubviaroot is true.

Does the call really require that? Also, I'm not sure why we'd need to
look at pubviaroot - that's already considered earlier when calculating
publish_as_relid, here we just need to know the relationship of the two
OIDs (if one is ancestor/child of the other).

> I think it would be
> good if we can avoid an additional call to get_partition_ancestors()
> as it could be costly.

Maybe. OTOH we should do this only very rarely anyway.

> I wonder why it is not sufficient to ensure
> that publish_as_relid exists after ancestor in ancestors list before
> assigning the ancestor to publish_as_relid? This only needs to be done
> in case of (if (!publish)). I have not tried this so I could be wrong.
> 

I'm not sure what exactly are you proposing. Maybe try coding it? That's
probably faster than trying to describe what the code might do ...


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-11 Thread Tomas Vondra



On 3/11/22 08:05, wangw.f...@fujitsu.com wrote:
> On Fri, Mar 11, 2022 at 9:57 AM Tomas Vondra  
> wrote:
>>
> Hi Tomas,
> Thanks for your patches.
> 
> On Mon, Mar 9, 2022 at 9:53 PM Tomas Vondra  
> wrote:
>> On Wed, Mar 9, 2022 at 6:04 PM Amit Kapila  wrote:
>>> On Mon, Mar 7, 2022 at 11:18 PM Tomas Vondra 
>>>  wrote:
 On Fri, Mar 4, 2022 at 6:43 PM Amit Kapila  wrote:
> Fetching column filter info in tablesync.c is quite expensive. It
> seems to be using four round-trips to get the complete info whereas
> for row-filter we use just one round trip. I think we should try to
> get both row filter and column filter info in just one round trip.
>

 Maybe, but I really don't think this is an issue.

>>>
>>> I am not sure but it might matter for small tables. Leaving aside the
>>> performance issue, I think the current way will get the wrong column
>>> list in many cases: (a) The ALL TABLES IN SCHEMA case handling won't
>>> work for partitioned tables when the partitioned table is part of one
>>> schema and partition table is part of another schema. (b) The handling
>>> of partition tables in other cases will fetch incorrect lists as it
>>> tries to fetch the column list of all the partitions in the hierarchy.
>>>
>>> One of my colleagues has even tested these cases both for column
>>> filters and row filters and we find the behavior of row filter is okay
>>> whereas for column filter it uses the wrong column list. We will share
>>> the tests and results with you in a later email. We are trying to
>>> unify the column filter queries with row filter to make their behavior
>>> the same and will share the findings once it is done. I hope if we are
>>> able to achieve this that we will reduce the chances of bugs in this
>>> area.
>>>
>>
>> OK, I'll take a look at that email.
> I tried to get both the column filters and the row filters with one SQL, but
> it failed because I think the result is not easy to parse.
> 
> I noted that we use two SQLs to get column filters in the latest
> patches(20220311). I think maybe we could use one SQL to get column filters to
> reduce network cost. Like the SQL in the attachment.
> 

I'll take a look. But as I said before - I very much prefer SQL that is
easy to understand, and I don't think the one extra round trip is an
issue during tablesync (which is a very rare action).


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-11 Thread Amit Kapila
On Fri, Mar 11, 2022 at 7:26 AM Tomas Vondra
 wrote:
>
> On 3/10/22 20:10, Tomas Vondra wrote:
> >
> >
> > FWIW I think the reason is pretty simple - pgoutput_row_filter_init is
> > broken. It assumes you can just do this
> >
> > rftuple = SearchSysCache2(PUBLICATIONRELMAP,
> >   ObjectIdGetDatum(entry->publish_as_relid),
> >   ObjectIdGetDatum(pub->oid));
> >
> > if (HeapTupleIsValid(rftuple))
> > {
> > /* Null indicates no filter. */
> > rfdatum = SysCacheGetAttr(PUBLICATIONRELMAP, rftuple,
> >   Anum_pg_publication_rel_prqual,
> >   &pub_no_filter);
> > }
> > else
> > {
> > pub_no_filter = true;
> > }
> >
> >
> > and pub_no_filter=true means there's no filter at all. Which is
> > nonsense, because we're using publish_as_relid here - the publication
> > may not include this particular ancestor, in which case we need to just
> > ignore this publication.
> >
> > So yeah, this needs to be reworked.
> >
>
> I spent a bit of time looking at this, and I think a minor change in
> get_rel_sync_entry() fixes this - it's enough to ensure rel_publications
> only includes publications that actually include publish_as_relid.
>

Thanks for looking into this. I think in the first patch before
calling get_partition_ancestors() we need to ensure it is a partition
(the call expects that) and pubviaroot is true. I think it would be
good if we can avoid an additional call to get_partition_ancestors()
as it could be costly. I wonder why it is not sufficient to ensure
that publish_as_relid exists after ancestor in ancestors list before
assigning the ancestor to publish_as_relid? This only needs to be done
in case of (if (!publish)). I have not tried this so I could be wrong.

-- 
With Regards,
Amit Kapila.




RE: Column Filtering in Logical Replication

2022-03-10 Thread wangw.f...@fujitsu.com
On Fri, Mar 11, 2022 at 9:57 AM Tomas Vondra  
wrote:
>
Hi Tomas,
Thanks for your patches.

On Mon, Mar 9, 2022 at 9:53 PM Tomas Vondra  
wrote:
>On Wed, Mar 9, 2022 at 6:04 PM Amit Kapila  wrote:
>>On Mon, Mar 7, 2022 at 11:18 PM Tomas Vondra  
>>wrote:
>>>On Fri, Mar 4, 2022 at 6:43 PM Amit Kapila  wrote:
> >>> Fetching column filter info in tablesync.c is quite expensive. It
> >>> seems to be using four round-trips to get the complete info whereas
> >>> for row-filter we use just one round trip. I think we should try to
> >>> get both row filter and column filter info in just one round trip.
> >>>
> >>
> >> Maybe, but I really don't think this is an issue.
> >>
> >
> > I am not sure but it might matter for small tables. Leaving aside the
> > performance issue, I think the current way will get the wrong column
> > list in many cases: (a) The ALL TABLES IN SCHEMA case handling won't
> > work for partitioned tables when the partitioned table is part of one
> > schema and partition table is part of another schema. (b) The handling
> > of partition tables in other cases will fetch incorrect lists as it
> > tries to fetch the column list of all the partitions in the hierarchy.
> >
> > One of my colleagues has even tested these cases both for column
> > filters and row filters and we find the behavior of row filter is okay
> > whereas for column filter it uses the wrong column list. We will share
> > the tests and results with you in a later email. We are trying to
> > unify the column filter queries with row filter to make their behavior
> > the same and will share the findings once it is done. I hope if we are
> > able to achieve this that we will reduce the chances of bugs in this
> > area.
> >
> 
> OK, I'll take a look at that email.
I tried to get both the column filters and the row filters with one SQL, but
it failed because I think the result is not easy to parse.

I noted that we use two SQLs to get column filters in the latest
patches(20220311). I think maybe we could use one SQL to get column filters to
reduce network cost. Like the SQL in the attachment.

Regards,
Wang wei


0001-Try-to-get-column-filters-with-one-SQL.patch
Description: 0001-Try-to-get-column-filters-with-one-SQL.patch


Re: Column Filtering in Logical Replication

2022-03-10 Thread Tomas Vondra
On 3/11/22 03:46, Amit Kapila wrote:
> On Fri, Mar 11, 2022 at 12:44 AM Tomas Vondra
>  wrote:
>>
>> On 3/10/22 04:09, Amit Kapila wrote:
>>> On Wed, Mar 9, 2022 at 3:33 PM Amit Kapila  wrote:

 On Mon, Mar 7, 2022 at 8:48 PM Tomas Vondra
  wrote:

> OK, I reworked this to do the same thing as the row filtering patch.
>

 Thanks, I'll check this.

>>>
>>> Some assorted comments:
>>> =
>>> 1. We don't need to send a column list for the old tuple in case of an
>>> update (similar to delete). It is not required to apply a column
>>> filter for those cases because we ensure that RI must be part of the
>>> column list for updates and deletes.
>>
>> I'm not sure which part of the code does this refer to?
>>
> 
> The below part:
> @@ -464,11 +473,11 @@ logicalrep_write_update(StringInfo out,
> TransactionId xid, Relation rel,
>   pq_sendbyte(out, 'O'); /* old tuple follows */
>   else
>   pq_sendbyte(out, 'K'); /* old key follows */
> - logicalrep_write_tuple(out, rel, oldslot, binary);
> + logicalrep_write_tuple(out, rel, oldslot, binary, columns);
>   }
> 
> I think here instead of columns, the patch needs to send NULL as it is
> already doing in logicalrep_write_delete.
> 

Hmmm, yeah. In practice it doesn't really matter, because NULL means
"send all columns" so it actually relaxes the check. But we only send
the RI keys, which is a subset of the column filter. But will fix.

>>> 2.
>>> + /*
>>> + * Check if all columns referenced in the column filter are part of
>>> + * the REPLICA IDENTITY index or not.
>>>
>>> I think this comment is reverse. The rule we follow here is that
>>> attributes that are part of RI must be there in a specified column
>>> list. This is used at two places in the patch.
>>
>> Yeah, you're right. Will fix.
>>
>>> 3. get_rel_sync_entry()
>>> {
>>> /* XXX is there a danger of memory leak here? beware */
>>> + oldctx = MemoryContextSwitchTo(CacheMemoryContext);
>>> + for (int i = 0; i < nelems; i++)
>>> ...
>>> }
>>>
>>> Similar to the row filter, I think we need to use
>>> entry->cache_expr_cxt to allocate this. There are other usages of
>>> CacheMemoryContext in this part of the code but I think those need to
>>> be also changed and we can do that as a separate patch. If we do the
>>> suggested change then we don't need to separately free columns.
>>
>> I agree a shorter-lived context would be better than CacheMemoryContext,
>> but "expr" seems to indicate it's for the expression only, so maybe we
>> should rename that.
>>
> 
> Yeah, we can do that. How about rel_entry_cxt or something like that?
> The idea is that eventually, we should move a few other things of
> RelSyncEntry like attrmap where we are using CacheMemoryContext under
> this context.
> 

Yeah, rel_entry_cxt sounds fine I guess ...

>> But do we really want a memory context for every
>> single entry?
>>
> 
> Any other better idea?
> 

No, I think you're right - it'd be hard/impossible to keep track of all
the memory allocated for expression/estate. It'd be fine for the
columns, because that's just a bitmap, but not for the expressions.
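[Editorial note: the per-entry context trade-off discussed above — one context per RelSyncEntry so that everything hanging off the entry (expression state, column bitmap, ...) disappears in a single call — can be illustrated with a tiny region allocator. This is only a standalone sketch of the idea behind a MemoryContext, not the actual palloc machinery; the names `Region`/`region_alloc` are invented for the example.]

```c
#include <assert.h>
#include <stdlib.h>

/* A region chains its allocations so they can all be freed at once. */
typedef struct Chunk
{
    struct Chunk *next;
} Chunk;

typedef struct Region
{
    Chunk *head;
} Region;

/* Allocate 'size' usable bytes owned by the region. */
static void *
region_alloc(Region *r, size_t size)
{
    Chunk *c = malloc(sizeof(Chunk) + size);

    if (c == NULL)
        abort();
    c->next = r->head;
    r->head = c;
    return c + 1;               /* caller's memory starts after the header */
}

/* Free every allocation made in the region, like MemoryContextDelete. */
static void
region_reset(Region *r)
{
    while (r->head)
    {
        Chunk *next = r->head->next;

        free(r->head);
        r->head = next;
    }
}
```

The point of keeping one region per entry is exactly what the thread describes: it is hard to track every allocation made while building expression state, but trivial to drop the whole region when the entry is invalidated.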


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-10 Thread Amit Kapila
On Fri, Mar 11, 2022 at 7:26 AM Tomas Vondra
 wrote:
>
> But this does not address tablesync.c :-( That still copies everything,
> because it decides to sync both rels (test_pub_part_1, test_pub_part_2),
> with it's row filter. On older releases this would fail, because we'd
> start two workers:
>

Yeah, this is because of the existing problem where we sync both rels
instead of one. We have fixed some similar existing problems earlier.
Hou-San has reported a similar case in another email [1].

>
> But I find this really weird - I think it's reasonable to expect the
> sync to produce the same result as if the data was inserted and
> replicated, and this just violates that.
>
> Shouldn't tablesync calculate a list of relations in a way that prevents
> such duplicate / overlapping syncs?
>

Yes, I think it is better to fix it separately than to fix it along
with row filter or column filter work.

>
In any case, this sync issue looks
> entirely unrelated to the column filtering patch.
>

Right.

[1] - 
https://www.postgresql.org/message-id/OS0PR01MB5716DC2982CC735FDE388804940B9%40OS0PR01MB5716.jpnprd01.prod.outlook.com

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-10 Thread Amit Kapila
On Fri, Mar 11, 2022 at 12:44 AM Tomas Vondra
 wrote:
>
> On 3/10/22 04:09, Amit Kapila wrote:
> > On Wed, Mar 9, 2022 at 3:33 PM Amit Kapila  wrote:
> >>
> >> On Mon, Mar 7, 2022 at 8:48 PM Tomas Vondra
> >>  wrote:
> >>
> >>> OK, I reworked this to do the same thing as the row filtering patch.
> >>>
> >>
> >> Thanks, I'll check this.
> >>
> >
> > Some assorted comments:
> > =
> > 1. We don't need to send a column list for the old tuple in case of an
> > update (similar to delete). It is not required to apply a column
> > filter for those cases because we ensure that RI must be part of the
> > column list for updates and deletes.
>
> I'm not sure which part of the code does this refer to?
>

The below part:
@@ -464,11 +473,11 @@ logicalrep_write_update(StringInfo out,
TransactionId xid, Relation rel,
  pq_sendbyte(out, 'O'); /* old tuple follows */
  else
  pq_sendbyte(out, 'K'); /* old key follows */
- logicalrep_write_tuple(out, rel, oldslot, binary);
+ logicalrep_write_tuple(out, rel, oldslot, binary, columns);
  }

I think here instead of columns, the patch needs to send NULL as it is
already doing in logicalrep_write_delete.

> > 2.
> > + /*
> > + * Check if all columns referenced in the column filter are part of
> > + * the REPLICA IDENTITY index or not.
> >
> > I think this comment is reverse. The rule we follow here is that
> > attributes that are part of RI must be there in a specified column
> > list. This is used at two places in the patch.
>
> Yeah, you're right. Will fix.
>
> > 3. get_rel_sync_entry()
> > {
> > /* XXX is there a danger of memory leak here? beware */
> > + oldctx = MemoryContextSwitchTo(CacheMemoryContext);
> > + for (int i = 0; i < nelems; i++)
> > ...
> > }
> >
> > Similar to the row filter, I think we need to use
> > entry->cache_expr_cxt to allocate this. There are other usages of
> > CacheMemoryContext in this part of the code but I think those need to
> > be also changed and we can do that as a separate patch. If we do the
> > suggested change then we don't need to separately free columns.
>
> I agree a shorter-lived context would be better than CacheMemoryContext,
> but "expr" seems to indicate it's for the expression only, so maybe we
> should rename that.
>

Yeah, we can do that. How about rel_entry_cxt or something like that?
The idea is that eventually, we should move a few other things of
RelSyncEntry like attrmap where we are using CacheMemoryContext under
this context.

> But do we really want a memory context for every
> single entry?
>

Any other better idea?

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-10 Thread Tomas Vondra



On 3/10/22 04:09, Amit Kapila wrote:
> On Wed, Mar 9, 2022 at 3:33 PM Amit Kapila  wrote:
>>
>> On Mon, Mar 7, 2022 at 8:48 PM Tomas Vondra
>>  wrote:
>>
>>> OK, I reworked this to do the same thing as the row filtering patch.
>>>
>>
>> Thanks, I'll check this.
>>
> 
> Some assorted comments:
> =
> 1. We don't need to send a column list for the old tuple in case of an
> update (similar to delete). It is not required to apply a column
> filter for those cases because we ensure that RI must be part of the
> column list for updates and deletes.

I'm not sure which part of the code does this refer to?

> 2.
> + /*
> + * Check if all columns referenced in the column filter are part of
> + * the REPLICA IDENTITY index or not.
> 
> I think this comment is reverse. The rule we follow here is that
> attributes that are part of RI must be there in a specified column
> list. This is used at two places in the patch.

Yeah, you're right. Will fix.

> 3. get_rel_sync_entry()
> {
> /* XXX is there a danger of memory leak here? beware */
> + oldctx = MemoryContextSwitchTo(CacheMemoryContext);
> + for (int i = 0; i < nelems; i++)
> ...
> }
> 
> Similar to the row filter, I think we need to use
> entry->cache_expr_cxt to allocate this. There are other usages of
> CacheMemoryContext in this part of the code but I think those need to
> be also changed and we can do that as a separate patch. If we do the
> suggested change then we don't need to separately free columns.

I agree a shorter-lived context would be better than CacheMemoryContext,
but "expr" seems to indicate it's for the expression only, so maybe we
should rename that. But do we really want a memory context for every
single entry?

> 4. I think we don't need the DDL changes in AtExecDropColumn. Instead,
> we can change the dependency of columns to NORMAL during publication
> commands.

I'll think about that.

> 5. There is a reference to check_publication_columns but that function
> is removed from the patch.

Right, will fix.

> 6.
> /*
> * If we know everything is replicated and the row filter is invalid
> * for update and delete, there is no point to check for other
> * publications.
> */
> if (pubdesc->pubactions.pubinsert && pubdesc->pubactions.pubupdate &&
> pubdesc->pubactions.pubdelete && pubdesc->pubactions.pubtruncate &&
> !pubdesc->rf_valid_for_update && !pubdesc->rf_valid_for_delete)
> break;
> 
> /*
> * If we know everything is replicated and the column filter is invalid
> * for update and delete, there is no point to check for other
> * publications.
> */
> if (pubdesc->pubactions.pubinsert && pubdesc->pubactions.pubupdate &&
> pubdesc->pubactions.pubdelete && pubdesc->pubactions.pubtruncate &&
> !pubdesc->cf_valid_for_update && !pubdesc->cf_valid_for_delete)
> break;
> 
> Can we combine these two checks?
> 

I was worried it'd get too complex / hard to understand, but I'll think
about maybe simplifying the conditions a bit.

> I feel this patch needs a more thorough review.
> 

I won't object to more review, of course.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-10 Thread Tomas Vondra



On 3/10/22 19:17, Tomas Vondra wrote:
> On 3/9/22 11:12, houzj.f...@fujitsu.com wrote:
>> Hi,
>>
>> Here are some tests and results about the table sync query of
>> column filter patch and row filter.
>>
>> 1) multiple publications which publish schema of parent table and partition.
>> pub
>> create schema s1;
>> create table s1.t (a int, b int, c int) partition by range (a);
>> create table t_1 partition of s1.t for values from (1) to (10);
>> create publication pub1 for all tables in schema s1;
>> create publication pub2 for table t_1(b);
>>
>> sub
>> - prepare tables
>> CREATE SUBSCRIPTION sub CONNECTION 'port=1 dbname=postgres' PUBLICATION 
>> pub1, pub2;
>>
>> When doing table sync for 't_1', the column list will be (b). I think it 
>> should
>> be no filter because table t_1 is also published via ALL TABLES IN SCHEMA
>> publication.
>>
>> For Row Filter, it will use no filter for this case.
>>
>>
>> 2) one publication publishes both parent and child
>> pub
>> create table t (a int, b int, c int) partition by range (a);
>> create table t_1 partition of t for values from (1) to (10)
>>partition by range (a);
>> create table t_2 partition of t_1 for values from (1) to (10);
>>
>> create publication pub2 for table t_1(a), t_2
>>   with (PUBLISH_VIA_PARTITION_ROOT);
>>
>> sub
>> - prepare tables
>> CREATE SUBSCRIPTION sub CONNECTION 'port=1 dbname=postgres' PUBLICATION 
>> pub2;
>>
>> When doing table sync for table 't_1', it has no column list. I think the
>> expected column list is (a).
>>
>> For Row Filter, it will use the row filter of the top most parent table(t_1) 
>> in
>> this case.
>>
>>
>> 3) one publication publishes both parent and child
>> pub
>> create table t (a int, b int, c int) partition by range (a);
>> create table t_1 partition of t for values from (1) to (10)
>>partition by range (a);
>> create table t_2 partition of t_1 for values from (1) to (10);
>>
>> create publication pub2 for table t_1(a), t_2(b)
>>   with (PUBLISH_VIA_PARTITION_ROOT);
>>
>> sub
>> - prepare tables
>> CREATE SUBSCRIPTION sub CONNECTION 'port=1 dbname=postgres' PUBLICATION 
>> pub2;
>>
>> When doing table sync for table 't_1', the column list would be (a, b). I 
>> think
>> the expected column list is (a).
>>
>> For Row Filter, it will use the row filter of the top most parent table(t_1) 
>> in
>> this case.
>>
> 
> Attached is an updated patch version, addressing all of those issues.
> 
> 0001 is a bugfix, reworking how we calculate publish_as_relid. The old
> approach was unstable with multiple publications, giving different
> results depending on order of the publications. This should be
> backpatched into PG13 where publish_via_partition_root was introduced, I
> think.
> 
> 0002 is the main patch, merging the changes proposed by Peter and fixing
> the issues reported here. In most cases this means adopting the code
> used for row filters, and perhaps simplifying it a bit.
> 
> 
> But I also tried to implement a row-filter test for 0001, and I'm not
> sure I understand the behavior I observe. Consider this:
> 
> -- a chain of 3 partitions (on both publisher and subscriber)
> CREATE TABLE test_part_rf (a int primary key, b int, c int)
>PARTITION BY LIST (a);
> 
> CREATE TABLE test_part_rf_1
>PARTITION OF test_part_rf FOR VALUES IN (1,2,3,4,5)
>PARTITION BY LIST (a);
> 
> CREATE TABLE test_part_rf_2
>PARTITION OF test_part_rf_1 FOR VALUES IN (1,2,3,4,5);
> 
> -- initial data
> INSERT INTO test_part_rf VALUES (1, 5, 100);
> INSERT INTO test_part_rf VALUES (2, 15, 200);
> 
> -- two publications, each adding a different partition
> CREATE PUBLICATION test_pub_part_1 FOR TABLE test_part_rf_1
>  WHERE (b < 10) WITH (publish_via_partition_root);
> 
> CREATE PUBLICATION test_pub_part_2 FOR TABLE test_part_rf_2
>  WHERE (b > 10) WITH (publish_via_partition_root);
> 
> -- now create the subscription (also try opposite ordering)
> CREATE SUBSCRIPTION test_part_sub CONNECTION '...'
>PUBLICATION test_pub_part_1, test_pub_part_2;
> 
> -- wait for sync
> 
> -- insert some more data
> INSERT INTO test_part_rf VALUES (3, 6, 300);
> INSERT INTO test_part_rf VALUES (4, 16, 400);
> 
> -- wait for catchup
> 
> Now, based on the discussion here, my expectation is that we'll use the
> row filter from the top-most ancestor in any publication, which in this
> case is test_part_rf_1. Hence the filter should be (b < 10).
> 
> So I'd expect these rows to be replicated:
> 
> 1,5,100
> 3,6,300
> 
> But that's not what I get, unfortunately. I get different results,
> depending on the order of publications:
> 
> 1) test_pub_part_1, test_pub_part_2
> 
> 1|5|100
> 2|15|200
> 3|6|300
> 4|16|400
> 
> 2) test_pub_part_2, test_pub_part_1
> 
> 3|6|300
> 4|16|400
> 
> That seems pretty bizarre, because it either means we're not enforcing
> any filter or some strange combination of filters (notice that for (2)
> we skip/replicate rows matching either 

Re: Column Filtering in Logical Replication

2022-03-10 Thread Tomas Vondra
On 3/9/22 10:20, Peter Eisentraut wrote:
> 
> On 07.03.22 16:18, Tomas Vondra wrote:
>> AFAICS these issues should be resolved by the adoption of the row-filter
>> approach (i.e. it should fail the same way as for row filter).
> 
> The first two patches (additional testing for row filtering feature)
> look okay to me.
> 
> Attached is a fixup patch for your main feature patch (the third one).
> 
> It's a bit of code and documentation cleanup, but mainly I removed the
> term "column filter" from the patch.  Half the code was using "column
> list" or similar and half the code "column filter", which was confusing.
>  Also, there seemed to be a bit of copy-and-pasting from row-filter code
> going on, with some code comments not quite sensible, so I rewrote some
> of them.  Also some code used "rf" and "cf" symbols which were a bit
> hard to tell apart.  A few more letters can increase readability.
> 
> Note in publicationcmds.c OpenTableList() the wrong if condition was used.
> 

Thanks, I've merged these changes into the patch.

> I'm still confused about the intended replica identity handling.  This
> patch still checks whether the column list contains the replica identity
> at DDL time.  And then it also checks at execution time.  I thought the
> latest understanding was that the DDL-time checking would be removed.  I
> think it's basically useless now, since as the test cases show, you can
> subvert those checks by altering the replica identity later.

Are you sure? Which part of the patch does that? AFAICS we only do those
checks in CheckCmdReplicaIdentity now, but maybe I'm missing something.
Are you sure you're not looking at some older patch version?


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Column Filtering in Logical Replication

2022-03-09 Thread Amit Kapila
On Wed, Mar 9, 2022 at 3:33 PM Amit Kapila  wrote:
>
> On Mon, Mar 7, 2022 at 8:48 PM Tomas Vondra
>  wrote:
>
> > OK, I reworked this to do the same thing as the row filtering patch.
> >
>
> Thanks, I'll check this.
>

Some assorted comments:
=
1. We don't need to send a column list for the old tuple in case of an
update (similar to delete). It is not required to apply a column
filter for those cases because we ensure that RI must be part of the
column list for updates and deletes.
2.
+ /*
+ * Check if all columns referenced in the column filter are part of
+ * the REPLICA IDENTITY index or not.

I think this comment is reverse. The rule we follow here is that
attributes that are part of RI must be there in a specified column
list. This is used at two places in the patch.
3. get_rel_sync_entry()
{
/* XXX is there a danger of memory leak here? beware */
+ oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+ for (int i = 0; i < nelems; i++)
...
}

Similar to the row filter, I think we need to use
entry->cache_expr_cxt to allocate this. There are other usages of
CacheMemoryContext in this part of the code but I think those need to
be also changed and we can do that as a separate patch. If we do the
suggested change then we don't need to separately free columns.
4. I think we don't need the DDL changes in AtExecDropColumn. Instead,
we can change the dependency of columns to NORMAL during publication
commands.
5. There is a reference to check_publication_columns but that function
is removed from the patch.
6.
/*
* If we know everything is replicated and the row filter is invalid
* for update and delete, there is no point to check for other
* publications.
*/
if (pubdesc->pubactions.pubinsert && pubdesc->pubactions.pubupdate &&
pubdesc->pubactions.pubdelete && pubdesc->pubactions.pubtruncate &&
!pubdesc->rf_valid_for_update && !pubdesc->rf_valid_for_delete)
break;

/*
* If we know everything is replicated and the column filter is invalid
* for update and delete, there is no point to check for other
* publications.
*/
if (pubdesc->pubactions.pubinsert && pubdesc->pubactions.pubupdate &&
pubdesc->pubactions.pubdelete && pubdesc->pubactions.pubtruncate &&
!pubdesc->cf_valid_for_update && !pubdesc->cf_valid_for_delete)
break;

Can we combine these two checks?

I feel this patch needs a more thorough review.
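[Editorial note: the replica-identity rule in comment 2 above — every attribute that is part of the replica identity must appear in the specified column list — is in essence a bitmap-subset test (the backend uses a Bitmapset for this). A minimal standalone illustration with plain word-sized bitmaps follows; the attribute numbers and helper names are invented for the example, not the real API.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* One bit per attribute number; enough for a toy example. */
typedef uint32_t AttrBitmap;

static AttrBitmap
bitmap_add(AttrBitmap set, int attnum)
{
    return set | ((AttrBitmap) 1 << attnum);
}

/* true iff every bit of 'required' is also set in 'provided' */
static bool
is_subset(AttrBitmap required, AttrBitmap provided)
{
    return (required & ~provided) == 0;
}

/*
 * The rule discussed above: UPDATE/DELETE can be published only if the
 * replica-identity columns are a subset of the column list.  An empty
 * column list means "all columns", i.e. no restriction.
 */
static bool
column_list_covers_ri(AttrBitmap ri_cols, AttrBitmap col_list)
{
    if (col_list == 0)          /* no column list: everything replicated */
        return true;
    return is_subset(ri_cols, col_list);
}
```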

-- 
With Regards,
Amit Kapila.




Re: Column Filtering in Logical Replication

2022-03-09 Thread Tomas Vondra
On 3/9/22 11:03, Amit Kapila wrote:
> ...
>> Hmm, yeah. That seems like a genuine problem - it should not depend on
>> the order of publications in the subscription, I guess.
>>
>> But is it an issue in the patch? Isn't that a pre-existing issue? AFAICS
>> the problem is that we initialize publish_as_relid=relid before the loop
>> over publications, and then just update it. So the first iteration
>> starts with relid, but the second iteration ends with whatever value is
>> set by the first iteration (e.g. the root).
>>
>> So with the example you posted, we start with
>>
>>   publish_as_relid = relid = test_part1
>>
>> but then if the first publication is pubviaroot=true, we update it to
>> parent. And in the second iteration, we fail to find the column filter,
>> because "parent" (publish_as_relid) is not part of the pub2.
>>
>> If we do it in the other order, we leave the publish_as_relid value as
>> is (and find the filter), and then update it in the second iteration
>> (and find the column filter too).
>>
>> Now, this can be resolved by re-calculating the publish_as_relid from
>> scratch in each iteration (start with relid, then maybe update it). But
>> that's just half the story - the issue is there even without column
>> filters. Consider this example:
>>
>> create table t (a int, b int, c int) partition by range (a);
>>
>> create table t_1 partition of t for values from (1) to (10)
>>partition by range (a);
>>
>> create table t_2 partition of t_1 for values from (1) to (10);
>>
>> create publication pub1 for table t(a)
>>   with (PUBLISH_VIA_PARTITION_ROOT);
>>
>> create publication pub2 for table t_1(a)
>>   with (PUBLISH_VIA_PARTITION_ROOT);
>>
>>
>> Now, is you change subscribe to "pub1, pub2" and "pub2, pub1", we'll end
>> up with different publish_as_relid values (t or t_1). Which seems like
>> the same ambiguity issue.
>>
> 
> I think we should fix this existing problem by always using the
> top-most table as publish_as_relid. Basically, we can check, if the
> existing publish_as_relid is an ancestor of a new rel that is going to
> replace it then we shouldn't replace it.

Right, using the topmost relid from all publications seems like the
correct solution.

> However, I think even if we
> fix the existing problem, we will still have the order problem for the
> column filter patch, and to avoid that instead of fetching column
> filters in the publication loop, we should use the final
> publish_as_relid. I think it will have another problem as well if we
> don't use final publish_as_relid which is that sometimes when we
> should not use any filter (say when pubviaroot is true and that
> publication has root partitioned table which has no filter) as per our
> rule of filters for a partitioned table, it can still use some filter
> from the non-root table.
> 

Yeah, the current behavior is just a consequence of how we determine
publish_as_relid now. If we rework that, we should first determine the
relid and then fetch the filter only for that single rel.
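[Editorial note: a small standalone C sketch of the idea being agreed on here — toy OIDs and a hypothetical parent-lookup table, not the actual pgoutput code. Starting from the leaf relid, a candidate from a publication replaces the current choice only if it is an ancestor of it, so the topmost ancestor wins regardless of publication order, and the (single) filter can then be fetched for that final relid.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef unsigned int Oid;
#define InvalidOid 0

/* Toy hierarchy: parent[rel] is the immediate parent, InvalidOid for roots.
 * Index 1 = t, 2 = t_1 (child of t), 3 = t_2 (child of t_1). */
static const Oid parent[] = {InvalidOid, InvalidOid, 1, 2};

/* Is 'anc' anywhere in the ancestor chain of 'rel'? */
static bool
is_ancestor(Oid anc, Oid rel)
{
    for (Oid p = parent[rel]; p != InvalidOid; p = parent[p])
        if (p == anc)
            return true;
    return false;
}

/*
 * Pick publish_as_relid: for each publication's candidate, move up only
 * if the candidate is an ancestor of the current choice.  The topmost
 * ancestor is selected no matter the iteration order.
 */
static Oid
resolve_publish_as_relid(Oid relid, const Oid *cands, size_t n)
{
    Oid result = relid;

    for (size_t i = 0; i < n; i++)
        if (is_ancestor(cands[i], result))
            result = cands[i];
    return result;
}
```

With publications publishing t (OID 1) and t_1 (OID 2) for leaf t_2 (OID 3), both iteration orders resolve to t, unlike the order-dependent behaviour reported earlier in the thread.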

>>
>>> *
>>> Fetching column filter info in tablesync.c is quite expensive. It
>>> seems to be using four round-trips to get the complete info whereas
>>> for row-filter we use just one round trip. I think we should try to
>>> get both row filter and column filter info in just one round trip.
>>>
>>
>> Maybe, but I really don't think this is an issue.
>>
> 
> I am not sure but it might matter for small tables. Leaving aside the
> performance issue, I think the current way will get the wrong column
> list in many cases: (a) The ALL TABLES IN SCHEMA case handling won't
> work for partitioned tables when the partitioned table is part of one
> schema and partition table is part of another schema. (b) The handling
> of partition tables in other cases will fetch incorrect lists as it
> tries to fetch the column list of all the partitions in the hierarchy.
> 
> One of my colleagues has even tested these cases both for column
> filters and row filters and we find the behavior of row filter is okay
> whereas for column filter it uses the wrong column list. We will share
> the tests and results with you in a later email. We are trying to
> unify the column filter queries with row filter to make their behavior
> the same and will share the findings once it is done. I hope if we are
> able to achieve this that we will reduce the chances of bugs in this
> area.
> 

OK, I'll take a look at that email.

> Note: I think the first two patches for tests are not required after
> commit ceb57afd3c.
> 

Right. Will remove.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




RE: Column Filtering in Logical Replication

2022-03-09 Thread houzj.f...@fujitsu.com
On Wednesday, March 9, 2022 6:04 PM Amit Kapila wrote:
> On Mon, Mar 7, 2022 at 8:48 PM Tomas Vondra
>  wrote:
> >
> > On 3/4/22 11:42, Amit Kapila wrote:
> >
> > > *
> > > Fetching column filter info in tablesync.c is quite expensive. It
> > > seems to be using four round-trips to get the complete info whereas
> > > for row-filter we use just one round trip. I think we should try to
> > > get both row filter and column filter info in just one round trip.
> > >
> >
> > Maybe, but I really don't think this is an issue.
> >
> 
> I am not sure but it might matter for small tables. Leaving aside the
> performance issue, I think the current way will get the wrong column list in
> many cases: (a) The ALL TABLES IN SCHEMA case handling won't work for
> partitioned tables when the partitioned table is part of one schema and
> partition table is part of another schema. (b) The handling of partition 
> tables in
> other cases will fetch incorrect lists as it tries to fetch the column list 
> of all the
> partitions in the hierarchy.
> 
> One of my colleagues has even tested these cases both for column filters and
> row filters and we find the behavior of row filter is okay whereas for column
> filter it uses the wrong column list. We will share the tests and results 
> with you
> in a later email. We are trying to unify the column filter queries with row 
> filter to
> make their behavior the same and will share the findings once it is done. I 
> hope
> if we are able to achieve this that we will reduce the chances of bugs in 
> this area.
> 
> Note: I think the first two patches for tests are not required after commit
> ceb57afd3c.

Hi,

Here are some tests and results about the table sync query of
column filter patch and row filter.

1) multiple publications which publish schema of parent table and partition.
pub
create schema s1;
create table s1.t (a int, b int, c int) partition by range (a);
create table t_1 partition of s1.t for values from (1) to (10);
create publication pub1 for all tables in schema s1;
create publication pub2 for table t_1(b);

sub
- prepare tables
CREATE SUBSCRIPTION sub CONNECTION 'port=1 dbname=postgres' PUBLICATION 
pub1, pub2;

When doing table sync for 't_1', the column list will be (b). I think it should
be no filter because table t_1 is also published via ALL TABLES IN SCHEMA
publication.

For Row Filter, it will use no filter for this case.


2) one publication publishes both parent and child
pub
create table t (a int, b int, c int) partition by range (a);
create table t_1 partition of t for values from (1) to (10)
   partition by range (a);
create table t_2 partition of t_1 for values from (1) to (10);

create publication pub2 for table t_1(a), t_2
  with (PUBLISH_VIA_PARTITION_ROOT);

sub
- prepare tables
CREATE SUBSCRIPTION sub CONNECTION 'port=1 dbname=postgres' PUBLICATION 
pub2;

When doing table sync for table 't_1', it has no column list. I think the
expected column list is (a).

For Row Filter, it will use the row filter of the top most parent table(t_1) in
this case.


3) one publication publishes both parent and child
pub
create table t (a int, b int, c int) partition by range (a);
create table t_1 partition of t for values from (1) to (10)
   partition by range (a);
create table t_2 partition of t_1 for values from (1) to (10);

create publication pub2 for table t_1(a), t_2(b)
  with (PUBLISH_VIA_PARTITION_ROOT);

sub
- prepare tables
CREATE SUBSCRIPTION sub CONNECTION 'port=1 dbname=postgres' PUBLICATION 
pub2;

When doing table sync for table 't_1', the column list would be (a, b). I think
the expected column list is (a).

For Row Filter, it will use the row filter of the top most parent table(t_1) in
this case.
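
The expected behavior in the three scenarios above can be sketched as a small
standalone model (illustrative only — the function names are invented, and this
is not the actual PostgreSQL table-sync code): the effective column list is the
union over all publications of the table, where a publication with no column
list (such as ALL TABLES IN SCHEMA) means "all columns" and absorbs everything,
and with publish_via_partition_root the list should come from the topmost
published ancestor.

```python
# Illustrative model of the expected column-list resolution; the
# function names are invented, not PostgreSQL source code.

def effective_column_list(per_publication_lists):
    """Union of the column lists from every publication that publishes
    the table; None means "no column list" (all columns, e.g. ALL
    TABLES IN SCHEMA) and makes the result unfiltered."""
    union = set()
    for cols in per_publication_lists:
        if cols is None:
            return None  # scenario 1: "no filter" absorbs any list
        union |= cols
    return union

def column_list_via_root(ancestor_lists):
    """With publish_via_partition_root, the column list of the topmost
    published ancestor (first element) is the one that should apply
    to its partitions."""
    return ancestor_lists[0]

# Scenario 1: t_1 published via pub1 (ALL TABLES IN SCHEMA, no list)
# and via pub2 with list (b) -> expected: no filter.
assert effective_column_list([None, {"b"}]) is None

# Scenarios 2 and 3: topmost published ancestor t_1 has list (a);
# whatever the leaf t_2 carries, the expected list is (a).
assert column_list_via_root([{"a"}, None]) == {"a"}
assert column_list_via_root([{"a"}, {"b"}]) == {"a"}
```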

Best regards,
Hou zj


Re: Column Filtering in Logical Replication

2022-03-09 Thread Amit Kapila
On Mon, Mar 7, 2022 at 8:48 PM Tomas Vondra
 wrote:
>
> On 3/4/22 11:42, Amit Kapila wrote:
> > On Wed, Mar 2, 2022 at 5:43 PM Tomas Vondra
> >  wrote:
> >>
> >> Attached is an updated patch, addressing most of the issues reported so
> >> far. There are various minor tweaks, but the main changes are:
> > ...
> >>
> >> 3) checks of column filter vs. publish_via_partition_root and replica
> >> identity, following the same logic as the row-filter patch (hopefully,
> >> it touches the same places, using the same logic, ...)
> >>
> >> That means - with "publish_via_partition_root=false" it's not allowed to
> >> specify column filters on partitioned tables, only for leaf partitions.
> >>
> >> And we check column filter vs. replica identity when adding tables to
> >> publications, or whenever we change the replica identity.
> >>
> >
> > This handling is different from row filter work and I see problems
> > with it.
>
> By different, I assume you mean I tried to enforce the rules in ALTER
> PUBLICATION and other ALTER commands, instead of when modifying the
> data?
>

Yes.

> OK, I reworked this to do the same thing as the row filtering patch.
>

Thanks, I'll check this.

> > The column list validation w.r.t primary key (default replica
> > identity) is missing. The handling of column list vs. partitions has
> > multiple problems: (a) In attach partition, the patch is just checking
> > ancestors for RI validation but what if the table being attached has
> > further subpartitions; (b) I think the current locking also seems to
> > have problems because it is quite possible that while it validates the
> > ancestors here, concurrently someone changes the column list. I think
> > it won't be enough to just change the locking mode because with the
> > current patch strategy during attach, we will be first taking locks
> > for child tables of current partition and then parent tables which can
> > pose deadlock hazards.
> > The column list validation also needs to be done when we change
> > publication action.
> >
> I believe those issues should be solved by adopting the same approach as
> the row-filtering patch, right?
>

Right.

>
> > Some other miscellaneous comments:
> > =
> > *
> > In get_rel_sync_entry(), the handling for partitioned tables doesn't
> > seem to be correct. It can publish a different set of columns based on
> > the order of publications specified in the subscription.
> >
> > For example:
> > 
> > create table parent (a int, b int, c int) partition by range (a);
> > create table test_part1 (like parent);
> > alter table parent attach partition test_part1 for values from (1) to (10);
> >
> > create publication pub for table parent(a) with 
> > (PUBLISH_VIA_PARTITION_ROOT);
> > create publication pub2 for table test_part1(b);
> > ---
> >
> > Now, depending on the order of publications in the list while defining
> > subscription, the column list will change
> > 
> > create subscription sub connection 'port=1 dbname=postgres'
> > publication pub, pub2;
> >
> > For the above, column list will be: (a)
> >
> > create subscription sub connection 'port=1 dbname=postgres'
> > publication pub2, pub;
> >
> > For this one, the column list will be: (a, b)
> > 
> >
> > To avoid this, the column list should be computed based on the final
> > publish_as_relid as we are doing for the row filter.
> >
>
> Hmm, yeah. That seems like a genuine problem - it should not depend on
> the order of publications in the subscription, I guess.
>
> But is it an issue in the patch? Isn't that a pre-existing issue? AFAICS
> the problem is that we initialize publish_as_relid=relid before the loop
> over publications, and then just update it. So the first iteration
> starts with relid, but the second iteration ends with whatever value is
> set by the first iteration (e.g. the root).
>
> So with the example you posted, we start with
>
>   publish_as_relid = relid = test_part1
>
> but then if the first publication is pubviaroot=true, we update it to
> parent. And in the second iteration, we fail to find the column filter,
> because "parent" (publish_as_relid) is not part of the pub2.
>
> If we do it in the other order, we leave the publish_as_relid value as
> is (and find the filter), and then update it in the second iteration
> (and find the column filter too).
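
The carry-over between iterations described above can be reproduced with a
small standalone simulation (illustrative only — the setup and names mirror
Amit's earlier parent/test_part1 example, but this is not the real
get_rel_sync_entry() code):

```python
# Simulation of the order dependence: publish_as_relid starts at the
# leaf and is carried over between publication iterations, so the
# accumulated column list depends on publication order.

PARENT, LEAF = "parent", "test_part1"

pubs = {
    "pub":  {"tables": {PARENT: {"a"}}, "via_root": True},
    "pub2": {"tables": {LEAF: {"b"}},  "via_root": False},
}

def resolve(pub_order):
    publish_as_relid = LEAF  # start at the relation itself
    columns = set()
    for name in pub_order:
        pub = pubs[name]
        # Buggy behavior: reuse publish_as_relid from the previous
        # iteration instead of restarting from the leaf each time.
        if pub["via_root"] and PARENT in pub["tables"]:
            publish_as_relid = PARENT
        if publish_as_relid in pub["tables"]:
            columns |= pub["tables"][publish_as_relid]
    return columns

# Matches the results quoted in the thread:
assert resolve(["pub", "pub2"]) == {"a"}       # pub first -> (a)
assert resolve(["pub2", "pub"]) == {"a", "b"}  # pub2 first -> (a, b)
```

Recomputing publish_as_relid from scratch in each iteration, as suggested,
removes the dependence on iteration order.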
>
> Now, this can be resolved by re-calculating the publish_as_relid from
> scratch in each iteration (start with relid, then maybe update it). But
> that's just half the story - the issue is there even without column
> filters. Consider this example:
>
> create table t (a int, b int, c int) partition by range (a);
>
> create table t_1 partition of t for values from (1) to (10)
>partition by range (a);
>
> create table t_2 partition of t_1 for values from (1) to (10);
>
> create publication pub1 for table t(a)
>   with (PUBLISH_VIA_PARTITION_ROOT);
>
> create publication pub2 for table t_1(a)
>   with 
