On Fri, Jun 2, 2017 at 9:15 AM, Amit Kapila <[email protected]> wrote:
> On Fri, Jun 2, 2017 at 6:38 PM, Robert Haas <[email protected]> wrote:
>> On Fri, Jun 2, 2017 at 9:01 AM, Amit Kapila <[email protected]> wrote:
>>> Your reasoning sounds sensible to me.  I think the other way to attack
>>> this problem is that we can maintain some local queue in each of the
>>> workers when the shared memory queue becomes full.  Basically, we can
>>> extend your "Faster processing at Gather node" patch [1] such that
>>> instead of a fixed-size local queue, we can extend it when the shm
>>> queue becomes full.  I think that way we can handle both problems
>>> (the worker won't stall if the shm queues are full, and workers can do
>>> batched writes to the shm queue to avoid the shm queue communication
>>> overhead) in a similar way.
>>
>> We still have to bound the amount of memory that we use for queueing
>> data in some way.
>
> Yeah, probably up to work_mem (or some percentage of work_mem).  If we
> want an extendable solution then we might want to back it up with some
> file; however, we might not need to go that far.  I think we can do
> some experiments to see how much additional memory is sufficient to
> give us the maximum benefit.
Yes, I think that's important.  Also, I think we still need a better
understanding of the cases in which the benefit is there.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
