On Wed, Jan 8, 2025 at 3:02 PM Masahiko Sawada <sawada.m...@gmail.com> wrote:
>
> On Thu, Dec 19, 2024 at 11:11 PM Nisha Moond <nisha.moond...@gmail.com> wrote:
> > [3] Test with pgbench run on both publisher and subscriber.
> >
> > Test setup:
> > - Tests performed on pgHead + v16 patches
> > - Created a pub-sub replication system.
> > - Parameters for both instances were:
> >
> >    shared_buffers = 30GB
> >    min_wal_size = 10GB
> >    max_wal_size = 20GB
> >    autovacuum = false
>
> Since you disabled autovacuum on the subscriber, dead tuples created
> by non-hot updates are accumulated anyway regardless of
> detect_update_deleted setting, is that right?
>
> > Test Run:
> > - Ran pgbench(read-write) on both the publisher and the subscriber with 30 
> > clients for a duration of 120 seconds, collecting data over 5 runs.
> > - Note that pgbench was running for different tables on pub and sub.
> > (The scripts used for test "case1-2_measure.sh" and case1-2_setup.sh" are 
> > attached).
> >
> > Results:
> > Run#          pub TPS    sub TPS
> > 1             32209      13704
> > 2             32378      13684
> > 3             32720      13680
> > 4             31483      13681
> > 5             31773      13813
> > median        32209      13684
> > regression    7%         -53%
>
> What was the TPS on the subscriber when detect_update_deleted = false?
> And how much were the tables bloated compared to when
> detect_update_deleted = false?
>

Here are the test results with 'retain_conflict_info = false', run on
the v20 patches (where the parameter has been renamed from
'detect_update_deleted' to 'retain_conflict_info').
With 'retain_conflict_info' disabled, both the publisher and the
subscriber sustain similar TPS, with no performance reduction observed
on either node.

Test Setup:
(same setup as in the above test)
 - Tests performed on pgHead + v20 patches
 - Created a pub-sub replication setup (a rough sketch is below).
 - Parameters for both instances were:
   autovacuum = false
   shared_buffers = 30GB
   max_wal_size = 20GB
   min_wal_size = 10GB
 Note: 'track_commit_timestamp' is disabled on the subscriber as it is
not required when retain_conflict_info = false.
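
For reference, a minimal sketch of how such a setup can be created.
The publication/subscription names, ports, and the assumption that
retain_conflict_info is specified as a subscription option are mine;
the actual steps are in the scripts shared earlier in this thread:

    # On the publisher (port 5432 assumed in this sketch)
    psql -p 5432 -d postgres -c "CREATE PUBLICATION pub_all FOR ALL TABLES;"

    # On the subscriber (port 5433 assumed)
    psql -p 5433 -d postgres -c "CREATE SUBSCRIPTION sub_all \
        CONNECTION 'port=5432 dbname=postgres' \
        PUBLICATION pub_all \
        WITH (retain_conflict_info = false);"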

Test Run:
- The publisher and the subscriber each had their own pgbench tables,
initialized with scale = 100.
- Ran pgbench (read-write) on both the publisher and the subscriber
with 30 clients for 15 minutes, collecting data over 3 runs (a sketch
of the commands is below).
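
A rough sketch of the pgbench invocations assumed here (ports, database
names, and the thread count are illustrative; how the two workloads are
kept on different tables is handled by the scripts shared earlier in
this thread):

    # Initialize pgbench data on each node with scale = 100
    pgbench -i -s 100 -p 5432 postgres    # publisher
    pgbench -i -s 100 -p 5433 postgres    # subscriber

    # Run the read-write workload concurrently on both nodes:
    # 30 clients for 15 minutes (900 seconds)
    pgbench -c 30 -j 30 -T 900 -p 5432 postgres &
    pgbench -c 30 -j 30 -T 900 -p 5433 postgres &
    wait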

Results:
Run#             pub TPS          sub TPS
1                30533.29878      29161.33335
2                29931.30723      29520.89321
3                30665.54192      29440.92953
Median           30533.29878      29440.92953
pgHead median    30112.31203      28933.75013
Improvement      1%               2%
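
(For reference, assuming the percentages are computed as
(patched median - pgHead median) / pgHead median:
    pub: (30533.29878 - 30112.31203) / 30112.31203 ~= +1.4%
    sub: (29440.92953 - 28933.75013) / 28933.75013 ~= +1.8%)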

- Both the publisher and the subscriber show similar TPS in all runs,
about 1-2% better than pgHead, i.e. no regression with
retain_conflict_info = false.

--
Thanks,
Nisha

