On Mon, Nov 4, 2019 at 2:43 PM Kuntal Ghosh <kuntalghosh.2...@gmail.com> wrote:
>
> Hello hackers,
>
> I've done some performance testing of this feature. Following is my
> test case (taken from an earlier thread):
>
> postgres=# CREATE TABLE large_test (num1 bigint, num2 double
> precision, num3 double precision);
> postgres=# \timing on
> postgres=# EXPLAIN (ANALYZE, BUFFERS) INSERT INTO large_test (num1,
> num2, num3) SELECT round(random()*10), random(), random()*142 FROM
> generate_series(1, 1000000) s(i);
>
> I've kept the publisher and subscriber on two different systems.
>
> HEAD:
> With 1000000 tuples,
> Execution Time: 2576.821 ms, Time: 9632.158 ms (00:09.632), Spill count: 245
> With 10000000 tuples (10 times more),
> Execution Time: 30359.509 ms, Time: 95261.024 ms (01:35.261), Spill count: 2442
>
> With the memory accounting patch, following are the performance results:
> With 100000 tuples,
> logical_decoding_work_mem=64kB, Execution Time: 2414.371 ms, Time:
> 9648.223 ms (00:09.648), Spill count: 2315
> logical_decoding_work_mem=64MB, Execution Time: 2477.830 ms, Time:
> 9895.161 ms (00:09.895), Spill count: 3
> With 1000000 tuples (10 times more),
> logical_decoding_work_mem=64kB, Execution Time: 38259.227 ms, Time:
> 105761.978 ms (01:45.762), Spill count: 23149
> logical_decoding_work_mem=64MB, Execution Time: 24624.639 ms, Time:
> 89985.342 ms (01:29.985), Spill count: 23
>
> With the logical decoding of in-progress transactions patch and with
> streaming on, following are the performance results:
> With 100000 tuples,
> logical_decoding_work_mem=64kB, Execution Time: 2674.034 ms, Time:
> 20779.601 ms (00:20.780)
> logical_decoding_work_mem=64MB, Execution Time: 2062.404 ms, Time:
> 9559.953 ms (00:09.560)
> With 1000000 tuples (10 times more),
> logical_decoding_work_mem=64kB, Execution Time: 26949.588 ms, Time:
> 196261.892 ms (03:16.262)
> logical_decoding_work_mem=64MB, Execution Time: 27084.403 ms, Time:
> 90079.286 ms (01:30.079)

So your
result shows that with "streaming on", performance is degrading? By any chance did you try to see where the bottleneck is?
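For what it's worth, a quick back-of-the-envelope comparison of the quoted totals (a throwaway sketch; every figure below is copied from the results above, nothing measured independently) suggests the regression shows up mainly at logical_decoding_work_mem=64kB:

```python
# Total wall-clock times (ms) quoted above.
# "baseline" = memory accounting patch, "streaming" = in-progress-decoding
# patch with streaming on, for the same tuple count and work_mem setting.
cases = {
    "100000 tuples, 64kB":  (9648.223, 20779.601),
    "100000 tuples, 64MB":  (9895.161, 9559.953),
    "1000000 tuples, 64kB": (105761.978, 196261.892),
    "1000000 tuples, 64MB": (89985.342, 90079.286),
}

for label, (baseline, streaming) in cases.items():
    # Ratio > 1 means streaming was slower than the baseline run.
    print(f"{label}: streaming took {streaming / baseline:.2f}x the baseline time")
```

At 64kB the streaming runs take roughly 1.9-2.2x the baseline time, while at 64MB the two are within about 1% of each other, which is why the 64kB case looks like the interesting one to profile.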
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com