On 16 December 2013 10:12, Heikki Linnakangas <hlinnakan...@vmware.com> wrote:
> On 12/13/2013 08:40 PM, Alvaro Herrera wrote:
>>
>> Heikki Linnakangas wrote:
>>
>>> I haven't been following this thread in detail, but would it help if
>>> we implemented a scheme to reduce (auto)vacuum's memory usage? Such
>>> schemes have been discussed in the past, e.g. packing the list of
>>> dead items more tightly.
>>
>>
>> Well, it would help some, but it wouldn't eliminate the problem
>> completely.  Autovacuum scales its memory usage based on the size of the
>> table.  There will always be a table so gigantic that hitting the
>> maximum allocated memory is to be expected; and DBAs will need a way to
>> limit the memory consumption even if we pack dead items more densely.

The problem is the allocation of memory, not how efficiently the memory
is used once it has been allocated.

> Another idea: Store only the least significant 20 bits of the block number
> of each item pointer, and use the remaining 12 bits for the offset number.
> So each item pointer is stored as a single 32-bit integer. For the top 12
> bits of the block number, build a separate lookup table of 4096 entries,
> indexed by the top bits. Each entry in the lookup table points to the
> beginning and end index in the main array where the entries for that page
> range are stored. That would reduce the memory usage by about 1/3, which
> isn't as good as the bitmap method when there are a lot of dead tuples on
> the same pages, but would probably be a smaller patch.
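
To make that layout concrete, here is a rough C sketch of the encoding as
I read it; the names are invented for illustration, this is not existing
code:

/*
 * Each dead item is squeezed into one uint32 as
 * (blkno & 0xFFFFF) << 12 | offnum, and a 4096-entry directory indexed
 * by blkno >> 20 gives the slice of the packed array holding that page
 * range.  The packed array is assumed to be filled in TID order.
 */
#include <stdint.h>

typedef struct PackedRange
{
    uint32_t    start;      /* first index in packed[] for this range */
    uint32_t    end;        /* one past the last index */
} PackedRange;

typedef struct PackedDeadItems
{
    PackedRange ranges[4096];   /* indexed by top 12 bits of block number */
    uint32_t   *packed;         /* packed item pointers, sorted */
    uint32_t    nitems;
} PackedDeadItems;

static inline uint32_t
pack_item(uint32_t blkno, uint16_t offnum)
{
    /* offset numbers fit comfortably in 12 bits for 8K pages */
    return ((blkno & 0xFFFFF) << 12) | (offnum & 0xFFF);
}

/*
 * The kind of lookup vacuum does for each index tuple: pick the page
 * range from the directory, then binary-search within it.  Returns 1 if
 * the TID is in the dead-item list.
 */
static int
dead_item_exists(const PackedDeadItems *items, uint32_t blkno, uint16_t offnum)
{
    const PackedRange *r = &items->ranges[blkno >> 20];
    uint32_t    key = pack_item(blkno, offnum);
    uint32_t    lo = r->start;
    uint32_t    hi = r->end;

    while (lo < hi)
    {
        uint32_t    mid = lo + (hi - lo) / 2;

        if (items->packed[mid] < key)
            lo = mid + 1;
        else if (items->packed[mid] > key)
            hi = mid;
        else
            return 1;
    }
    return 0;
}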

We would do better to just use memory from shared_buffers; then we
wouldn't need a separate memory allocation or limit at all.
If we split the allocation into a series of BLCKSZ-sized blocks of
memory, we can use your compression down to 4 bytes per row and then
index the blocks, along the lines of the sketch below.
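
Roughly like this, leaving aside how the chunks would actually be taken
from and pinned in shared_buffers (plain malloc stands in for that here);
again, all of the names are invented for illustration:

#include <stdint.h>
#include <stdlib.h>

#define SKETCH_BLCKSZ       8192
#define ENTRIES_PER_CHUNK   (SKETCH_BLCKSZ / sizeof(uint32_t) - 1)

/* One BLCKSZ-sized chunk: a small header plus packed 4-byte entries. */
typedef struct DeadItemChunk
{
    uint32_t    nused;
    uint32_t    entries[ENTRIES_PER_CHUNK];
} DeadItemChunk;

/* Must be zero-initialized before first use. */
typedef struct DeadItemStore
{
    DeadItemChunk **chunks;     /* the BLCKSZ-sized blocks themselves */
    uint32_t   *first_key;      /* first packed key per chunk: the "index" */
    uint32_t    nchunks;
    uint32_t    nallocated;
} DeadItemStore;

/*
 * Append one packed item in TID order; start a new chunk when the current
 * one fills up.  Error handling is omitted for brevity.
 */
static void
dead_item_append(DeadItemStore *store, uint32_t packed)
{
    DeadItemChunk *chunk;

    if (store->nchunks == 0 ||
        store->chunks[store->nchunks - 1]->nused == ENTRIES_PER_CHUNK)
    {
        if (store->nchunks == store->nallocated)
        {
            store->nallocated = store->nallocated ? store->nallocated * 2 : 8;
            store->chunks = realloc(store->chunks,
                                    store->nallocated * sizeof(DeadItemChunk *));
            store->first_key = realloc(store->first_key,
                                       store->nallocated * sizeof(uint32_t));
        }
        chunk = calloc(1, sizeof(DeadItemChunk));
        store->chunks[store->nchunks] = chunk;
        store->first_key[store->nchunks] = packed;
        store->nchunks++;
    }
    chunk = store->chunks[store->nchunks - 1];
    chunk->entries[chunk->nused++] = packed;
}

A lookup would then binary-search first_key to find the right chunk and
search within it, much like the 4096-entry directory in the sketch above,
so we keep the 4 bytes/row density without ever needing one huge
allocation up front.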

-- 
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

