Re: Logical replication issue after Postgresql upgrade from 13 to 14

2021-12-01 Thread Amit Kapila
On Wed, Dec 1, 2021 at 5:56 PM Marcos Pegoraro  wrote:
>
>> I have an issue with logical replication after a PostgreSQL upgrade from
>> 13 to 14 (upgraded the subscriber only, using pg_upgrade_cluster -m link
>> 13 main). After the upgrade all subscriptions were disabled, so I
>> enabled them and the replication workers started successfully.
>> pg_stat_subscription contains a list of all subscriptions. All times in
>> this table are near the current time (the replication workers are
>> receiving data from the servers). But there have been no changes in the
>> destination tables since the cluster upgrade (the tables on the
>> publishers are changing). What am I doing wrong and how can I fix this?
>
>
> Amit, wouldn't it be better to document all the steps needed to use
> pg_upgrade with logical replication?
> Sergey is showing a different problem from mine.
>

I am fine with that. Feel free to propose a patch on -hackers. I can
help in reviewing and improving it.
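
For reference, the first checks on the subscriber usually look something
like this (a minimal sketch; the subscription name "mysub" is a
placeholder):

-- Re-enable a subscription that was disabled across the upgrade
ALTER SUBSCRIPTION mysub ENABLE;

-- Confirm the apply worker is running and receiving data
SELECT subname, received_lsn, latest_end_lsn, last_msg_receipt_time
  FROM pg_stat_subscription;

-- Check apply progress: if remote_lsn stays put while the publisher
-- advances, changes are being received but not applied
SELECT * FROM pg_replication_origin_status;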

-- 
With Regards,
Amit Kapila.




Re: Behaviour of failed Primary

2020-05-21 Thread Amit Kapila
On Thu, May 21, 2020 at 5:38 PM Santhosh Kumar  wrote:
>
> Hi Forum,
>  If I have a cluster with synchronous replication enabled and three nodes,
> e.g.:
>
> [primary] [hot standby 1] [hot standby 2]
>
> And if for some unforeseen reason the primary fails, failover will kick in,
> hot standby 1 will become the new primary, and the cluster setup will look
> like this:
>
> [new primary (hot standby 1)] [hot standby 2]
>
> My question here is: what will happen if the original primary, which
> failed, comes back? Will it become part of this highly available replica
> cluster automatically, or will it be stale and disconnected from the cluster?
>

It won't become a standby automatically, as it would have diverged from
the new master.

> How can we automatically make the failed primary part of the cluster in
> a hot standby role? It would be of great help if you could direct me to
> any reference material. Thank you in advance.
>

I think pg_rewind can help in such situations.  See the docs of pg_rewind [1].


[1] - https://www.postgresql.org/docs/devel/app-pgrewind.html
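
The invocation is roughly as follows (a sketch, not a recipe: the data
directory path and connection string are placeholders, the old primary
must be shut down cleanly first, and pg_rewind requires that the target
cluster was run with wal_log_hints=on or with data checksums enabled):

pg_rewind --target-pgdata=/var/lib/postgresql/data \
  --source-server='host=new-primary port=5432 user=postgres dbname=postgres'

After the rewind, point primary_conninfo at the new primary and create
standby.signal (recovery.conf on pre-v12 releases) so the old primary
starts up as a standby.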

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: Slow planning time for simple query

2018-06-16 Thread Amit Kapila
On Thu, Jun 14, 2018 at 4:34 AM, Maksim Milyutin  wrote:
> 13.06.2018 12:40, Maksim Milyutin wrote:
>
> On 09.06.2018 22:49, Tom Lane wrote:
>
> Maksim Milyutin  writes:
>
> On a hot standby I faced a similar problem.
> ...
> is planned in 4.940 ms on the master and *254.741* ms on the standby.
>
> (I wonder though why, if you executed the same query on the master,
> its setting of the index-entry-is-dead bits didn't propagate to the
> standby.)
>
>
> I have verified the number of dead item pointers (through the pageinspect
> extension) in the first leaf page of the index participating in the query
> ('main.message_instance_pkey') on the master and slave nodes and have
> noticed a big difference.
>
> SELECT * FROM monitoring.bt_page_stats('main.message_instance_pkey', 3705);
>
> On master:
>
>  blkno | type | live_items | dead_items | avg_item_size | page_size | free_size | btpo_prev | btpo_next | btpo | btpo_flags
> -------+------+------------+------------+---------------+-----------+-----------+-----------+-----------+------+------------
>   3705 | l    |          1 |         58 |            24 |      8192 |      6496 |         0 |      3719 |    0 |         65
>
> On standby:
>
>  blkno | type | live_items | dead_items | avg_item_size | page_size | free_size | btpo_prev | btpo_next | btpo | btpo_flags
> -------+------+------------+------------+---------------+-----------+-----------+-----------+-----------+------+------------
>   3705 | l    |         59 |          0 |            24 |      8192 |      6496 |         0 |      3719 |    0 |          1
>
>
>
> At this point I want to highlight that the changes to the lp_flags bits
> (namely, marking items as dead) for index item pointers don't propagate
> from the master to the replica in my case. As a consequence, on the
> standby I have live index items most of which are marked as dead on the
> master. And at the planning stage my queries are forced to descend to
> heap pages during get_actual_variable_range execution, which considerably
> slows down planning.
>
> Is this a bug, a limitation of the implementation, or a misconfiguration
> of WAL/replication?
>

It is not a misconfiguration issue.
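
That said, if the dead index entries themselves are what planning keeps
stumbling over, a VACUUM on the primary should remove them physically;
that removal is WAL-logged and replayed on the standby, unlike the
LP_DEAD hint bits discussed here. A minimal sketch (assuming the index
belongs to a table named main.message_instance, inferred from the index
name):

VACUUM VERBOSE main.message_instance;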

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: Slow planning time for simple query

2018-06-16 Thread Amit Kapila
On Sun, Jun 10, 2018 at 1:19 AM, Tom Lane  wrote:
> Maksim Milyutin  writes:
>> On a hot standby I faced a similar problem.
>> ...
>> is planned in 4.940 ms on the master and *254.741* ms on the standby.
>
> Presumably the problem is that the standby isn't authorized to change
> the btree index's "entry is dead" bits,
>

I don't see anything like that in the code.  We use _bt_killitems to
mark the items as dead, and neither that function nor any of its callers
has any such assumption.

> so it's unable to prune index
> entries previously detected as dead, and thus the logic that intends
> to improve this situation doesn't work on the standby.
>

If my above understanding is correct, then one thing that could lead
to such behavior is the computation of RecentGlobalXmin on the standby.
Basically, if RecentGlobalXmin has a different value on the standby,
then it is possible that the decision about whether a particular item
is dead differs between master and standby.

> (I wonder though why, if you executed the same query on the master,
> its setting of the index-entry-is-dead bits didn't propagate to the
> standby.)
>

Because we don't WAL log it.  See _bt_killitems.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: question on streaming replication

2018-06-15 Thread Amit Kapila
On Thu, Jun 14, 2018 at 10:58 AM, Atul Kumar  wrote:
> Hi,
>
> I have an EDB Postgres 9.6 instance, and I have the below issue to resolve.
>

This is not the right place to ask questions about EDB versions.  You
need to check with your vendor about the right place to ask them.
Here, you can ask questions about PostgreSQL.

> I have configured streaming replication with master and slave nodes on
> the same server, just to test it.
>
> All worked fine, but when I stopped the slave service, created some test
> databases on the master, and then started the slave service again, the
> slave didn't pick up the changes.
>

I think you need to ensure that the replica is connected to the master
server, and then check the logs to confirm what exactly happened.
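
A quick way to check (a sketch; run the first query on the master and
the second on the standby):

-- On the master: is the standby connected, and in what state?
SELECT application_name, client_addr, state, sync_state
  FROM pg_stat_replication;

-- On the standby: is the WAL receiver streaming?
SELECT status, conninfo FROM pg_stat_wal_receiver;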

> The replication was in the async state.
>
> Then, after doing some searching on Google, I tried to switch it to the
> sync state, but even after making changes in the postgresql.conf file I
> am neither getting the sync state nor getting any changes on the slave
> server.
>

After making changes in postgresql.conf, you might need to run
pg_reload_conf() or restart the server, depending on which setting you
changed, for it to take effect.
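
For the sync state specifically, the usual pattern looks something like
this (a sketch; 'standby1' is a placeholder and must match the
application_name in the standby's primary_conninfo):

ALTER SYSTEM SET synchronous_standby_names = 'standby1';
SELECT pg_reload_conf();

-- sync_state in pg_stat_replication should now show 'sync'
SELECT application_name, sync_state FROM pg_stat_replication;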

> Please advise.
>

As mentioned above, I suggest asking only PostgreSQL-related questions
on these mailing lists.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com