On 2017-05-30 07:27:12 -0400, Robert Haas wrote:
> The other is that I figured 64k was small enough that nobody would
> care about the memory utilization.  I'm not sure we can assume the
> same thing if we make this bigger.  It's probably fine to use a 6.4M
> tuple queue for each worker if work_mem is set to something big, but
> maybe not if work_mem is set to the default of 4MB.

Probably not.  It might also end up being detrimental performance-wise,
because we start touching more memory.  I guess it'd make sense to set
it in the planner, based on a) the size of work_mem and b) the number
of expected tuples.
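Very roughly, and purely as an illustration (the function and its
heuristics below are made up; work_mem is the kB-valued GUC, and 64kB is
the current PARALLEL_TUPLE_QUEUE_SIZE in execParallel.c), that could
look something like:

    /*
     * Illustrative sketch only - choose_tuple_queue_size() and its
     * heuristics don't exist.  Idea: scale the per-worker queue by the
     * expected tuple volume, but cap it by a fraction of work_mem so a
     * small work_mem setting isn't swamped by queue memory.
     */
    #include "postgres.h"

    #include "miscadmin.h"          /* for work_mem (in kB) */

    #define PARALLEL_TUPLE_QUEUE_SIZE   65536   /* current 64kB default */

    static Size
    choose_tuple_queue_size(double expected_rows, int avg_tuple_width)
    {
        Size        wanted;
        Size        cap;

        /* room for up to ~1k average-width tuples (arbitrary heuristic) */
        wanted = (Size) (Min(expected_rows, 1024.0) * avg_tuple_width);

        /* don't spend more than, say, a quarter of work_mem per queue */
        cap = (Size) work_mem * 1024 / 4;

        /* never go below the current default */
        return Max(PARALLEL_TUPLE_QUEUE_SIZE, Min(wanted, cap));
    }

Whatever the exact heuristic, it'd presumably also have to take the
number of workers into account, since each worker gets its own queue.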

I do wonder whether the larger size fixes some scheduling issue
(i.e. while some backend is scheduled out, the other side of the queue
can continue), or whether the benefit largely comes from avoiding
fixable contention inside the queue.  I'd guess it's a bit of both.
It should be measurable in some cases, by comparing the amount of time
spent blocked reading from the queue (or continuing because the queue
is empty), the time spent writing to the queue (which, once the queue
is full, should always result in blocking), and the time spent waiting
for the spinlock.
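For the first two, something like the sketch below could be bolted on
for measurement.  The wrappers and counters are invented here; only
shm_mq_send()/shm_mq_receive() (signatures as of this era) and the
INSTR_TIME machinery are existing primitives.  The spinlock wait would
need similar counters inside shm_mq.c itself.

    /*
     * Illustrative instrumentation sketch - the wrappers and counters
     * are invented.  Accumulates how long each side spends in the
     * (potentially blocking) queue calls; comparing the totals should
     * show where the time goes.
     */
    #include "postgres.h"

    #include "portability/instr_time.h"
    #include "storage/shm_mq.h"

    static instr_time mq_send_time;   /* worker side: time in shm_mq_send() */
    static instr_time mq_recv_time;   /* leader side: time in shm_mq_receive() */

    static shm_mq_result
    timed_mq_send(shm_mq_handle *mqh, Size nbytes, const void *data)
    {
        instr_time  start, end;
        shm_mq_result res;

        INSTR_TIME_SET_CURRENT(start);
        /* nowait = false: blocks while the queue is full */
        res = shm_mq_send(mqh, nbytes, data, false);
        INSTR_TIME_SET_CURRENT(end);
        INSTR_TIME_ACCUM_DIFF(mq_send_time, end, start);
        return res;
    }

    static shm_mq_result
    timed_mq_receive(shm_mq_handle *mqh, Size *nbytesp, void **datap,
                     bool nowait)
    {
        instr_time  start, end;
        shm_mq_result res;

        INSTR_TIME_SET_CURRENT(start);
        /* with nowait = true, returns SHM_MQ_WOULD_BLOCK on an empty queue */
        res = shm_mq_receive(mqh, nbytesp, datap, nowait);
        INSTR_TIME_SET_CURRENT(end);
        INSTR_TIME_ACCUM_DIFF(mq_recv_time, end, start);
        return res;
    }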

- Andres

