On Wed, Jul 19, 2017 at 3:11 PM, Joshua D. Drake <j...@commandprompt.com> wrote:
> The good news is, PostgreSQL is not doing half bad against 128 connections
> with only 16vCPU. The bad news is we more than doubled our disk size without
> getting reuse or bloat under control. The concern here is that under heavy
> write loads that are persistent, we will eventually bloat out and have to
> vacuum full, no matter what. I know that Jan has done some testing and the
> best he could get is something like 8 days before PostgreSQL became unusable
> (but don't quote me on that).
> I am open to comments, suggestions, running multiple tests with different
> parameters or just leaving this in the archive for people to reference.

Did you see my blog post on Planet PostgreSQL from last night?


Perhaps you could use my query to instrument an interesting index, to
see what that turns up. I would really like to get a better sense of
how often and to what extent index bloat is a problem that VACUUM is
just inherently incapable of keeping under control over time. The
timeline for performance to degrade with real production workloads is
very interesting to me. It's really hard to simulate certain types of
problems that you will see frequently in production.
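(The query from the post isn't reproduced here, but as a rough sketch of the kind of instrumentation I mean, the pgstattuple extension's pgstatindex() gives a quick read on how dense an index's leaf pages are; the index name below is just a placeholder:)

```sql
-- Sketch only: pgstatindex() from the pgstattuple contrib extension
-- reports leaf page density and fragmentation for a B-Tree index.
CREATE EXTENSION IF NOT EXISTS pgstattuple;

SELECT avg_leaf_density,    -- % of leaf space holding live entries
       leaf_fragmentation   -- % of leaf pages out of physical order
FROM pgstatindex('my_interesting_index');  -- placeholder index name
```

A steadily falling avg_leaf_density across repeated samples under a write-heavy workload is the signature of the kind of progressive bloat I'm describing.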

Index bloat is a general problem that B-Trees have in all other major
systems, but I think that PostgreSQL has a tendency to allow indexes
to become progressively more bloated over time, in a way that it often
can never recover from [1]. This may be a particular problem with
unique indexes, where many physical duplicates accumulate in pages.
These duplicates are theoretically reclaimable, but because of how
the keyspace is split up, they will never actually be reclaimed [2].
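(A minimal way to see the effect, assuming a scratch database; the table name is made up for illustration:)

```sql
-- Sketch: updating a uniquely indexed column defeats HOT, so every
-- UPDATE adds a new physical entry for the same logical key.
CREATE TABLE bloat_demo (id int PRIMARY KEY, payload text);
INSERT INTO bloat_demo SELECT g, 'x' FROM generate_series(1, 100000) g;

SELECT pg_relation_size('bloat_demo_pkey');  -- baseline index size

UPDATE bloat_demo SET id = id;  -- same keys, new index entries

SELECT pg_relation_size('bloat_demo_pkey');  -- noticeably larger
```

Even after VACUUM marks the old entries dead, the pages they forced the index to split into are not merged back, so the index rarely returns to its original size without a REINDEX.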

Peter Geoghegan

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)