On Thu, Dec 10, 2015 at 6:46 AM, Michael Paquier
<michael.paqu...@gmail.com> wrote:
> On Thu, Dec 10, 2015 at 7:23 PM, Amit Langote
> <langote_amit...@lab.ntt.co.jp> wrote:
>> On 2015/12/10 15:28, Michael Paquier wrote:
>>> - The progress tracking facility adds a whole level of complexity for
>>> very little gain, and IMO this should *not* be part of PgBackendStatus,
>>> because in most cases its data ends up wasted. We don't expect
>>> backends to run such progress reports frequently, do we? My opinion on
>>> the matter is that we should define a separate collector data structure
>>> for vacuum, something like PgStat_StatVacuumEntry, then have on top
>>> of it a couple of routines dedicated to feeding data into it when
>>> some work is done on a vacuum job.
>> I assume your comment here means we should use the stats collector to
>> track/publish the progress info, is that right?
> Yep.

Oh, please, no.  Gosh, this is supposed to be a lightweight facility!
Just have a chunk of shared memory and write the data in there.  If
you try to feed this through the stats collector you're going to
increase the overhead by 100x or more, and there's no benefit.  We've
got to do relation stats that way because there's no a priori bound on
the number of relations, so we can't just preallocate enough shared
memory for all of them.  But there's no similar restriction here: the
number of backends IS fixed at startup time.  As long as we limit the
amount of progress information that a backend can supply to some fixed
length, which IMHO we definitely should, there's no need to add the
expense of funneling this through the stats collector.

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)