On 06/05/2018 09:22 AM, David Rowley wrote:
On 5 June 2018 at 17:04, Tomas Vondra <tomas.von...@2ndquadrant.com> wrote:
On 06/05/2018 04:56 AM, David Rowley wrote:
Isn't there still a problem determining when the memory exhaustion
actually happens though? As far as I know, we still have little
knowledge of how much memory each aggregate state occupies.

Jeff tried to solve this in [1], but from what I remember, there was
too much concern about the overhead of the additional accounting code.

[1] 
https://www.postgresql.org/message-id/flat/CAKJS1f8yvvvj-sVDv_bcxkzcZKq0ZOTVhX0dHfnYDct2Mycq5Q%40mail.gmail.com#cakjs1f8yvvvj-svdv_bcxkzczkq0zotvhx0dhfnydct2myc...@mail.gmail.com


I had a chat with Jeff Davis at PGCon about this, and IIRC he suggested
that a couple of the people who were originally worried about the
overhead now seem to be accepting it.

Is there any great need to make everything pay the small price for
this? Couldn't we just create a new MemoryContextMethod that
duplicates aset.c, but has the required additional accounting built in
at the implementation level, then make execGrouping.c use that
allocator for its hash table? The code would not really need to be
duplicated, we could just do things the same way as like.c does with
like_match.c and include a .c file. We'd need another interface
function in MemoryContextMethods to support getting the total memory
allocated in the context, but that does not seem like a problem.
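Just to make sure I'm reading the proposal right, here is a rough standalone
sketch of the idea. The names (AccountedContext, accounted_alloc,
get_total_allocated, ...) are made up for illustration and are not existing
PostgreSQL APIs - in the real thing the counter would live in an aset.c-style
context and be exposed through the extra MemoryContextMethods entry you
mention:

    /*
     * Standalone illustration of per-allocator byte accounting.  Hypothetical
     * names; not actual PostgreSQL code.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    typedef struct AccountedChunk
    {
        size_t      size;           /* bytes requested for this chunk */
        /* chunk payload follows the header */
    } AccountedChunk;

    typedef struct AccountedContext
    {
        int64_t     total_allocated;    /* running total of live bytes */
    } AccountedContext;

    static void *
    accounted_alloc(AccountedContext *ctx, size_t size)
    {
        AccountedChunk *chunk = malloc(sizeof(AccountedChunk) + size);

        if (chunk == NULL)
            return NULL;
        chunk->size = size;
        ctx->total_allocated += (int64_t) size;   /* the extra int64 arithmetic */
        return (char *) chunk + sizeof(AccountedChunk);
    }

    static void
    accounted_free(AccountedContext *ctx, void *pointer)
    {
        AccountedChunk *chunk =
            (AccountedChunk *) ((char *) pointer - sizeof(AccountedChunk));

        ctx->total_allocated -= (int64_t) chunk->size;
        free(chunk);
    }

    static int64_t
    get_total_allocated(AccountedContext *ctx)
    {
        /* corresponds to the proposed "total memory in this context" method */
        return ctx->total_allocated;
    }

    int
    main(void)
    {
        AccountedContext ctx = {0};
        void       *a = accounted_alloc(&ctx, 128);
        void       *b = accounted_alloc(&ctx, 1024);

        printf("allocated: %lld bytes\n", (long long) get_total_allocated(&ctx));
        accounted_free(&ctx, a);
        printf("after free: %lld bytes\n", (long long) get_total_allocated(&ctx));
        accounted_free(&ctx, b);
        return 0;
    }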


There probably is no great need, but there was no good way to enable it explicitly for only some contexts. IIRC what was originally considered back in 2015 was some sort of flag in the memory context, but the overhead was about the same as with the int64 arithmetic alone.
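For reference, the flag-gated variant would look something like this (again a
made-up sketch, not the actual patch) - the test has to run on every
allocation anyway, which is why it didn't really save anything compared to
doing the int64 addition unconditionally:

    /* Hypothetical sketch of flag-gated accounting; not PostgreSQL code. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef struct FlaggedContext
    {
        bool        track_mem;          /* accounting requested for this context? */
        int64_t     total_allocated;    /* maintained only when track_mem is set */
    } FlaggedContext;

    static void
    account_alloc(FlaggedContext *ctx, size_t size)
    {
        if (ctx->track_mem)                          /* per-allocation branch ... */
            ctx->total_allocated += (int64_t) size;  /* ... plus the int64 add */
    }

    int
    main(void)
    {
        FlaggedContext ctx = {true, 0};

        account_alloc(&ctx, 256);
        account_alloc(&ctx, 4096);
        printf("tracked: %lld bytes\n", (long long) ctx.total_allocated);
        return 0;
    }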

But I don't think we've considered copying the whole AllocSet implementation; maybe that would work, I'm not sure. I also wonder whether an aggregate might use a custom memory context internally (I don't recall any that do). And the accounting capability seems potentially useful in other places, which might not use AllocSet (or at least not directly).

All of this of course assumes the overhead is still there; I will certainly run some new measurements.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
