On Mon, Jan 9, 2017 at 6:01 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 9 January 2017 at 08:48, Masahiko Sawada <sawada.m...@gmail.com> wrote:
>> I had not considered necessity of dead lock detection support.
> It seems like a big potential win to scan multiple indexes in parallel.
> Does the design for collecting dead TIDs use a variable amount of
> memory?

No. Collecting dead TIDs and the calculation of max dead tuples are the
same as in the current lazy vacuum. That is, the memory space for dead
TIDs is allocated with a fixed size. In parallel lazy vacuum that memory
space is allocated in dynamic shared memory; otherwise it is allocated
in local memory.

> Does this work negate the other work to allow VACUUM to use >1GB
> of memory?

Partly, yes. Because the memory space for dead TIDs needs to be
allocated in DSM before the vacuum workers launch, parallel lazy vacuum
cannot use a variable amount of memory as that work does. But in
non-parallel lazy vacuum, that work would still be effective. We might
be able to do a similar thing using DSA, but I'm not sure that would be
better.

Attached are the results of a performance test with scale factor = 500
and the test script I used. I ran each test four times and plotted the
average of the last three execution times in the sf_500.png file. When
the table has indexes, vacuum execution time is smallest when the
number of indexes and the parallel degree are the same.


Masahiko Sawada
NTT Open Source Software Center

Attachment: parallel_vacuum.sh
Description: Bourne shell script

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)