On 21/07/17 15:58, Joshua D. Drake wrote:

On 07/19/2017 07:57 PM, Tom Lane wrote:
Peter Geoghegan <p...@bowt.ie> writes:
My argument for the importance of index bloat to the more general
bloat problem is simple: any bloat that accumulates, that cannot be
cleaned up, will probably accumulate until it impacts performance
quite noticeably.

But that just begs the question: *does* it accumulate indefinitely, or
does it eventually reach a more-or-less steady state?  The traditional
wisdom about btrees, for instance, is that no matter how full you pack
them to start with, the steady state is going to involve something like
1/3rd free space.  You can call that bloat if you want, but it's not
likely that you'll be able to reduce the number significantly without
paying exorbitant costs.

I'm not claiming that we don't have any problems, but I do think it's
important to draw a distinction between bloat and normal operating overhead.

Agreed, but I don't think we're talking about 30%. Here is where I am at; the tests only finished 30 minutes ago:

                name                 |  setting
-------------------------------------+-----------
 autovacuum                          | on
 autovacuum_analyze_scale_factor     | 0.1
 autovacuum_analyze_threshold        | 50
 autovacuum_freeze_max_age           | 200000000
 autovacuum_max_workers              | 3
 autovacuum_multixact_freeze_max_age | 400000000
 autovacuum_naptime                  | 60
 autovacuum_vacuum_cost_delay        | 20
 autovacuum_vacuum_cost_limit        | -1
 autovacuum_vacuum_scale_factor      | 0.2
 autovacuum_vacuum_threshold         | 50
 autovacuum_work_mem                 | -1
 log_autovacuum_min_duration         | -1
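(For anyone reproducing this: a listing like the one above can be pulled from the pg_settings view. This is a generic sketch, not necessarily the exact query I used:

```sql
-- Show all autovacuum-related settings, including log_autovacuum_min_duration
SELECT name, setting
FROM pg_settings
WHERE name LIKE '%autovacuum%'
ORDER BY name;
```
)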

Test 1: 55G    /srv/main
TPS:    955

Test 2: 112G    /srv/main
TPS:    531 (not sure what happened here; perhaps a long checkpoint?)

Test 3: 109G    /srv/main
TPS:    868

Test 4: 143G
TPS:    840

Test 5: 154G
TPS:     722

I am running the query here:


And I will post a follow-up. Once the query finishes, I am going to relaunch the tests with autovacuum_vacuum_cost_limit = 5000. Is there anything else you folks would like me to change?
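(For reference, bumping the cost limit for the next run could look like this, assuming superuser access and a PostgreSQL version with ALTER SYSTEM, i.e. 9.4 or later:

```sql
-- Raise autovacuum's I/O cost budget so workers do more work per cycle;
-- takes effect after a configuration reload, no restart needed
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 5000;
SELECT pg_reload_conf();
```
)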

I usually advise setting autovacuum_naptime = 10s (or even 5s) for workloads that do a lot of updates (or inserts plus deletes), as on modern hardware a lot of churn can happen in one minute, and that just makes vacuum's job harder.
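Concretely, that advice amounts to something like the following (again assuming ALTER SYSTEM is available; note that autovacuum_naptime is cluster-wide and cannot be set per table):

```sql
-- Wake the autovacuum launcher every 10 seconds instead of the default 60
ALTER SYSTEM SET autovacuum_naptime = '10s';
SELECT pg_reload_conf();
```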


Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)