I'm okay with any of the proposed designs or with dropping the idea.  Closing
the loop on a few facts:

On Sat, Mar 07, 2015 at 04:34:41PM -0600, Jim Nasby wrote:
> If we go that route, does it still make sense to explicitly use
> repalloc_huge? It will just cut over to that at some point (128M?) anyway,
> and if you're vacuuming a small relation presumably it's not worth messing
> with.

repalloc_huge() differs from repalloc() only in the size ceiling beyond which
it raises an error.  repalloc() rejects requests larger than ~1 GiB, while
repalloc_huge() is practically unconstrained on 64-bit platforms and permits
up to ~2 GiB on 32-bit.

On Mon, Mar 09, 2015 at 05:12:22PM -0500, Jim Nasby wrote:
> Speaking of which... people have referenced allowing > 1GB of dead tuples,
> which means allowing maintenance_work_mem > MAX_KILOBYTES. The comment for
> that says:
> 
> /* upper limit for GUC variables measured in kilobytes of memory */
> /* note that various places assume the byte size fits in a "long" variable
> */
> 
> So I'm not sure how well that will work. I think that needs to be a separate
> patch.

On LP64 platforms, MAX_KILOBYTES already covers maintenance_work_mem values up
to ~2 TiB.  Raising the limit on ILP32 platforms is not worth the trouble.
Raising the limit on LLP64 platforms is a valid but separate project.

nm


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)