Hello All,
I've got a cluster that's having issues with pg_catalog.pg_largeobject getting
massively bloated. Vacuum is running OK and there's 700GB of free space in the
table and only 100GB of data, but subsequent inserts don't seem to be using
space from the FSM and instead always allocate new pages.
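For anyone seeing similar symptoms, the amount of reusable free space in the
table can be measured with the pgstattuple extension (a sketch, assuming the
extension is available and you have superuser or pg_stat_scan_tables rights):

    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- table_len = total on-disk size, free_space = bytes reusable via the FSM
    SELECT table_len, tuple_len, free_space, free_percent
    FROM pgstattuple('pg_catalog.pg_largeobject');

A high free_percent together with inserts still extending the relation would
support the FSM theory above.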
On Thu, Jul 18, 2024 at 4:38 AM sivapostg...@yahoo.com
wrote:
>
> Hello,
> PG V11
>
> Select count(*) from table1
> Returns 10456432
>
> Select field1, field2 from table1 where field3 > '2024-07-18 12:00:00'
> Times out
>
> The above query was working fine for the past 2 years.
>
> Backup was taken
Hi:
Please avoid top posting, especially when replying to a long mail with
various points; it makes it nearly impossible to track what you are
replying to.
On Sat, 20 Jul 2024 at 13:44, sivapostg...@yahoo.com
wrote:
> Executed
> VACUUM FULL VERBOSE
> followed by
> REINDEX DATABASE dbname;
As it happens, the REINDEX was redundant: because VACUUM FULL rewrites the
table, an implicit REINDEX occurs.
I don't see mention of analyzing the database.
Also, VACUUM FULL probably doesn't do what you think it does.
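A minimal sequence to refresh the planner statistics and then inspect the
actual plan (using the table and column names from the earlier message; the
literal timestamp is just the one quoted above):

    ANALYZE VERBOSE table1;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT field1, field2 FROM table1 WHERE field3 > '2024-07-18 12:00:00';

The EXPLAIN output would show whether the query is doing a sequential scan
over all 10 million rows, which would explain the timeout far better than
bloat would.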
On Sat, Jul 20, 2024 at 7:44 AM sivapostg...@yahoo.com <
sivapostg...@yahoo.com> wrote:
> Executed
> VACUUM FULL VER
Executed VACUUM FULL VERBOSE followed by REINDEX DATABASE dbname;
It didn't improve performance; the query still timed out. VACUUM didn't
find any dead rows in that particular table.
Yes, the actual query and conditions were not given in my first comment.
Actually, the WHERE condition is not on
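If the column actually used in the WHERE clause has no index, a plain btree
index is the usual first step. A sketch, assuming the filter really is a range
condition on field3 (the index name is made up for illustration):

    -- CONCURRENTLY avoids blocking writes, but cannot run inside a transaction
    CREATE INDEX CONCURRENTLY idx_table1_field3 ON table1 (field3);

If the real predicate is on a different column or wraps the column in an
expression, the index would need to match that instead.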
Hi
Respected Team
I know the use case of implementing partitions with publication and
subscription in built-in logical replication:

CREATE PUBLICATION dbz_publication FOR TABLE betplacement.bet WITH
(publish_via_partition_root = true);

This will use the parent table to replicate data changes to the target