On Thu, Jan 18, 2024 at 2:47 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
>
> On Thu, Jan 18, 2024 at 12:12 PM Bharath Rupireddy
> <bharath.rupireddyforpostg...@gmail.com> wrote:
> >
> > On Wed, Jan 17, 2024 at 11:45 AM li jie <ggys...@gmail.com> wrote:
> > >
> > > Hi hackers,
> > >
> > > During logical replication, if there is a large write transaction, some
> > > spill files will be written to disk, depending on the setting of
> > > logical_decoding_work_mem.
> > >
> > > This behavior can effectively avoid OOM, but if the transaction
> > > generates a lot of changes before commit, a large number of files may
> > > fill the disk. For example, you can update a TB-level table.
> > >
> > > However, I found an inelegant phenomenon. If the modified large table is
> > > not published, its changes will also be written to a large number of
> > > spill files. Look at an example below:
> >
> > Thanks. I agree that decoding and queuing the changes of unpublished
> > tables' data into the reorder buffer is an unnecessary task for the
> > walsender. It takes processing effort (CPU overhead), consumes disk
> > space, and uses the memory configured via logical_decoding_work_mem
> > for a replication connection inefficiently.
>
> This is all true, but note that in successful cases (where the table is
> published) all the work done by FilterByTable (accessing caches,
> transaction-related stuff) can add noticeable overhead, as anyway we do
> that later in pgoutput_change().
Right. The overhead for published tables needs to be studied. A possible
way is to mark the checks already performed in
FilterByTable/filter_by_table_cb and skip the same checks later in
pgoutput_change; a rough sketch of what I have in mind is below. I'm not
sure if this works without any issues though.
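Purely for illustration, something along these lines. Note that the
pgoutput_filter_change() name, its signature and return convention
(true = skip the change), and the filter_checked field are all made up
here; only RelationSyncEntry, get_rel_sync_entry(), PGOutputData and the
pubactions flags are existing pgoutput.c pieces:

/*
 * Hypothetical addition to pgoutput.c's per-relation cache entry:
 * remember that the publication checks were already done by the
 * filter callback for the change currently being processed.
 */
typedef struct RelationSyncEntry
{
    Oid                 relid;
    bool                replicate_valid;
    PublicationActions  pubactions;
    bool                filter_checked;     /* hypothetical new field */
    /* ... existing fields ... */
} RelationSyncEntry;

/*
 * Sketch of the proposed filter callback: do the publication lookup
 * once, cache the result, and tell the caller whether the change for
 * this relation can be skipped entirely.
 */
static bool
pgoutput_filter_change(LogicalDecodingContext *ctx, Relation relation)
{
    PGOutputData       *data = (PGOutputData *) ctx->output_plugin_private;
    RelationSyncEntry  *entry = get_rel_sync_entry(data, relation);

    entry->filter_checked = true;

    /* skip the change if no DML action on this relation is published */
    return !(entry->pubactions.pubinsert ||
             entry->pubactions.pubupdate ||
             entry->pubactions.pubdelete);
}

/* Then, in pgoutput_change(), roughly: */
    relentry = get_rel_sync_entry(data, relation);
    if (!relentry->filter_checked)
    {
        /* existing per-action pubinsert/pubupdate/pubdelete checks */
    }
    relentry->filter_checked = false;   /* reset for the next change */

The open questions are whether the filter callback can safely do the
syscache/relcache access (it may need a transaction and a proper
historic snapshot while decoding) and whether invalidations arriving
between the filter call and pgoutput_change() can make the cached
answer stale.

--
Bharath Rupireddy
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com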