Simon, Mark,
Actually only 1 lock check per query, but certainly extra processing and
data structures to maintain the pool information... so, yes, certainly
much more suitable for DW (AFAIK we never attempted to measure the
additional overhead for a non-DW workload).
I recall testing it when the patch was submitted for 8.2, and the
overhead was substantial in the worst case ... something like 30% for an
in-memory, one-liner workload.
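
To make that concrete, here's a minimal sketch (plain C with a pthread
mutex, not actual PostgreSQL or Greenplum source; ResourcePool,
pool_admit and pool_release are names I made up) of the
one-lock-check-per-query idea: consult a shared pool once at query
start, bump a counter, and give the slot back at query end. Even that
single acquisition plus counter maintenance is pure overhead for a
trivial in-memory query, which is consistent with the worst-case numbers
above.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct ResourcePool
{
    pthread_mutex_t lock;       /* the single lock taken per query */
    int             active;     /* queries currently admitted */
    int             max_active; /* admission ceiling for this pool */
} ResourcePool;

/* Try to admit a query; returns false if the pool is full. */
static bool
pool_admit(ResourcePool *pool)
{
    bool admitted;

    pthread_mutex_lock(&pool->lock);
    admitted = (pool->active < pool->max_active);
    if (admitted)
        pool->active++;
    pthread_mutex_unlock(&pool->lock);
    return admitted;
}

/* Give the slot back when the query finishes. */
static void
pool_release(ResourcePool *pool)
{
    pthread_mutex_lock(&pool->lock);
    pool->active--;
    pthread_mutex_unlock(&pool->lock);
}

int
main(void)
{
    ResourcePool pool;

    pthread_mutex_init(&pool.lock, NULL);
    pool.active = 0;
    pool.max_active = 2;

    /* Three one-liner queries arriving at once: the third is turned away. */
    for (int i = 0; i < 3; i++)
        printf("query %d admitted: %s\n", i,
               pool_admit(&pool) ? "yes" : "no");

    pool_release(&pool);
    pool_release(&pool);
    return 0;
}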
I've been going over the Greenplum docs, and it looks like the attempt
to ration work_mem was dropped. At this point, Greenplum 3.3 only
rations by the number of concurrent queries and total cost. I know that
work_mem rationing was in the original plans; what made it unworkable?
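
For concreteness, here's a hedged sketch, again in plain C, of roughly
what such a policy could look like: admission gated on both a
concurrency cap and a total-cost cap (what the docs say the queues
ration by), plus the dropped idea of carving a per-query work_mem ration
out of a pool-wide memory budget. QueuePolicy and queue_admit are
invented names, not Greenplum code.

#include <stdbool.h>
#include <stdio.h>

typedef struct QueuePolicy
{
    int    active;         /* statements currently admitted */
    int    max_active;     /* cap on concurrent statements */
    double running_cost;   /* summed planner cost of admitted statements */
    double max_cost;       /* cap on total planner cost */
    double total_work_mem; /* pool-wide memory budget, in kB */
} QueuePolicy;

/*
 * Admit a statement only if both the concurrency and cost ceilings
 * allow it; on success, hand back a work_mem ration that is an equal
 * share of the pool's budget among the statements now running.
 */
static bool
queue_admit(QueuePolicy *q, double est_cost, double *work_mem_kb)
{
    if (q->active >= q->max_active ||
        q->running_cost + est_cost > q->max_cost)
        return false;

    q->active++;
    q->running_cost += est_cost;
    *work_mem_kb = q->total_work_mem / q->active;
    return true;
}

int
main(void)
{
    QueuePolicy q = {0, 4, 0.0, 1000000.0, 512 * 1024.0};
    double work_mem;

    if (queue_admit(&q, 250000.0, &work_mem))
        printf("admitted, work_mem ration = %.0f kB\n", work_mem);
    return 0;
}

Even in this toy form you can see one plausible reason the memory piece
is awkward: the "fair share" changes every time a statement enters or
leaves the queue, but sorts and hashes that are already running can't
easily hand memory back.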
My argument is that in the general case ... where you can't count on a
majority of long-running queries ... any kind of admission control or
resource management is a hard problem (if it weren't, Oracle would have
had it before 11). I think we'll need to tackle it, but I don't expect
the first patches we make to be even remotely usable. It's definitely
not an SoC project.
I should write more about this.
--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com