Jim C. Nasby wrote:
> Something that would be extremely useful to add to the first pass of
> this would be to have a work_mem limiter. This would allow users to set
> work_mem much more aggressively without worrying about pushing the
> machine to swapping. That capability alone would make this valuable to a
> very large number of our users.
Right - in principle it is not that difficult to add (once I have the
machinery for the cost limiter going properly, that is). I'm thinking we
either:

1. Add hooks to count work_mem allocations where they happen, or
2. Scan the plan tree and deduce how many work_mem allocations there
might be.

1. might be tricky, because I'm taking the resource lock before the
executor is actually run (at the beginning of PortalRun), so 2. might be
the most workable approach.
However, as I understand it, this sounds very much like Simon's shared
work_mem proposal, and the major issue there (as I understood it) was
that on many/most(?) OSes free(3) doesn't synchronously release memory
back to the OS's free list - it may only be immediately reusable by the
process that actually freed it (in some cases it may only *ever* be
reusable by that process, until the process terminates).
Now it may be that for DSS workloads the freed memory gets back to the
free list "quickly enough", or that this type of work_mem limiting -
even though not entirely accurate in its memory arithmetic - is "good
enough" to prevent OOM situations. Clearly some time will need to be
spent checking this on the various platforms.
These factors may make it better to aim for the simple count + cost
limiters first, and *then* look at the memory one.