Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On 22.2.2015 02:38, Jeff Davis wrote: On Wed, 2015-01-28 at 23:25 +0100, Tomas Vondra wrote: 3) moves the assert into the 'if (release)' branch You missed one, but I got it.

Oh, thanks!

4) includes the comments proposed by Ali Akbar in his reviews Warnings at makeArrayResult/makeMdArrayResult about freeing memory with private subcontexts. I also edited the comments substantially.

Thanks.

Regarding the performance impact of decreasing the size of the preallocated array from 64 to just 8 elements, I tried this. CREATE TABLE test AS SELECT mod(i,10) a, i FROM generate_series(1,64*10) s(i); SELECT a, array_agg(i) AS x FROM test GROUP BY 1; or actually (to minimize transfer overhead): SELECT COUNT(x) FROM ( SELECT a, array_agg(i) AS x FROM test GROUP BY 1 ) foo;

That's actually a bogus test -- array_agg is never executed.

Really? How could that happen when the result of array_agg() is passed to the COUNT()? Also, how could that allocate huge amounts of memory and get killed by OOM, which happens easily with this query?

See the test script I used (attached). Many small groups show a significant improvement with your patch (median 167ms unpatched, 125ms patched), and the one large group is not measurably different (median 58ms for both).

Yup, that's expected. With a lot of small groups the improvement may easily be much higher, but this is a conservative test.

The test script uses a small dataset of 100K tuples, because 1M tuples will run out of memory for small groups without the patch. Committed.

Cool. Thanks once more.

regards

--
Tomas Vondra  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On Sun, 2015-02-22 at 03:14 +0100, Tomas Vondra wrote: SELECT COUNT(x) FROM ( SELECT a, array_agg(i) AS x FROM test GROUP BY 1 ) foo;

That's actually a bogus test -- array_agg is never executed.

Really? How could that happen when the result of array_agg() is passed to the COUNT()? Also, how could that allocate huge amounts of memory and get killed by OOM, which happens easily with this query?

Oops, I misread that as COUNT(*). COUNT(x) will force array_agg() to be executed.

Regards, Jeff Davis
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On Wed, 2015-01-28 at 23:25 +0100, Tomas Vondra wrote: 3) moves the assert into the 'if (release)' branch

You missed one, but I got it.

4) includes the comments proposed by Ali Akbar in his reviews Warnings at makeArrayResult/makeMdArrayResult about freeing memory with private subcontexts.

I also edited the comments substantially.

Regarding the performance impact of decreasing the size of the preallocated array from 64 to just 8 elements, I tried this. CREATE TABLE test AS SELECT mod(i,10) a, i FROM generate_series(1,64*10) s(i); SELECT a, array_agg(i) AS x FROM test GROUP BY 1; or actually (to minimize transfer overhead): SELECT COUNT(x) FROM ( SELECT a, array_agg(i) AS x FROM test GROUP BY 1 ) foo;

That's actually a bogus test -- array_agg is never executed. See the test script I used (attached). Many small groups show a significant improvement with your patch (median 167ms unpatched, 125ms patched), and the one large group is not measurably different (median 58ms for both). The test script uses a small dataset of 100K tuples, because 1M tuples will run out of memory for small groups without the patch.

Committed.

Regards, Jeff Davis

Attachment: array_agg_test.sql
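The "1M tuples will run out of memory" point above can be sketched with a back-of-the-envelope model: before the patch each group paid for its own memory context with an 8kB initial block, while with the shared context each group only pays for its small preallocated arrays. The constants below are illustrative assumptions for this sketch, not measurements from the thread or the real PostgreSQL allocator.

```c
#include <assert.h>

/*
 * Rough model of memory claimed per array_agg group.
 * All constants are illustrative assumptions.
 */
#define SUBCONTEXT_INIT_BYTES 8192ULL	/* 8kB initial block per private subcontext */

/* Old behavior: one private subcontext per group. */
static unsigned long long
bytes_with_subcontexts(unsigned long long ngroups)
{
	return ngroups * SUBCONTEXT_INIT_BYTES;
}

/* Patched behavior: shared context; each group pays roughly for its
 * preallocated Datum array plus one-byte null flags. */
static unsigned long long
bytes_with_shared_context(unsigned long long ngroups,
						  unsigned long long alen,
						  unsigned long long datum_size)
{
	return ngroups * alen * (datum_size + 1);
}
```

Under these assumptions, 1M tiny groups cost about 8GB with per-group subcontexts but only about 72MB with a shared context and alen=8, which is consistent with the OOM behavior described above.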
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On Thu, Jan 29, 2015 at 7:25 AM, Tomas Vondra tomas.von...@2ndquadrant.com wrote: attached is v9 of the patch, modified along the lines of Tom's comments: Moved this patch to next CF, hopefully it will get more attention, and a reviewer. -- Michael
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On 1/28/15 4:25 PM, Tomas Vondra wrote: + * It's possible to choose whether to create a separate memory context for the + * array builde state, or whether to allocate it directly within rcontext Typo. -- Jim Nasby, Data Architect, Blue Treble Consulting Data in Trouble? Get it in Treble! http://BlueTreble.com
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Jeff Davis pg...@j-davis.com writes: On Tue, Jan 20, 2015 at 6:44 AM, Tom Lane t...@sss.pgh.pa.us wrote: Uh, sorry, I've not been paying any attention to this thread for awhile. What's the remaining questions at issue?

This patch is trying to improve the array_agg case where there are many arrays being constructed simultaneously, such as in HashAgg. You strongly suggested that a committable patch would differentiate between the grouped and ungrouped cases (or perhaps you meant differentiate between HashAgg and sorted aggregation?). Tomas's current patch does not do so; it does two main things: 1. Uses a common memory context for arrays being constructed by array_agg in any context (ungrouped, sorted agg, and HashAgg) 2. Reduces the initial array allocation to 8 elements from 64, but preserves doubling behavior

Sorry for slow response on this. I took a look at the v8 patch (that's still latest no?). I see that it only gets rid of the separate context for the two callers array_agg_transfn and array_agg_array_transfn, for which no memory release can happen anyway until the parent context is reset (cf comments in corresponding finalfns). So that's fine --- it is not making any bloat case any worse, at least.

I'm not sure about whether reducing the initial Datum array size across-the-board is a good thing or not. One obvious hack to avoid unexpected side effects on other callers would be

    astate->alen = (subcontext ? 64 : 8);

The larger initial size is basically free when subcontext is true, anyway, considering it will be coming out of an 8K initial subcontext allocation. This still leaves us wondering whether the smaller initial size will hurt array_agg itself, but that's at least capable of being tested with a reasonably small number of test cases.

I strongly object to removing initArrayResultArr's element_type argument. That's got nothing to do with the stated purpose of the patch, and it forces a non-too-cheap catalog lookup to be done inside initArrayResultArr.
It's true that some of the current call sites are just doing the same lookup themselves anyway, but we should not foreclose the possibility that the caller has the data already (as some others do) or is able to cache it across multiple uses. A possible compromise is to add the convention that callers can pass InvalidOid if they want initArrayResultArr to do the lookup. (In any case, the function's header comment needs adjustment if the API spec changes.)

Other than that, I think the API changes here are OK. Adding a new argument to existing functions is something we do all the time, and it's clear how to modify any callers to get the same behavior as before. We could perhaps clean things up with a more invasive redefinition, but I doubt it's worth inflicting more pain on third party callers.

Another thing I'd change is this:

+	/* we can only release the context if it's a private one. */
+	Assert(!(release && !astate->private_cxt));
+
	/* Clean up all the junk */
	if (release)
		MemoryContextDelete(astate->mcontext);

Why not this way:

	/* Clean up all the junk */
	if (release)
+	{
+		Assert(astate->private_cxt);
		MemoryContextDelete(astate->mcontext);
+	}

Seems a lot more understandable, and less code too.

I concur with the concerns that the comments could do with more work, but haven't attempted to improve them myself.

regards, tom lane
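Tom's `alen = (subcontext ? 64 : 8)` suggestion, together with the preserved doubling behavior, can be sketched as a stand-alone mock. The struct and function names below are illustrative, not the real PostgreSQL `ArrayBuildState`:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Minimal stand-alone model of the growth policy under discussion:
 * start at 64 slots when a private 8K subcontext pays for it anyway,
 * 8 slots otherwise, and double whenever the array fills up.
 */
typedef struct
{
	int		alen;		/* allocated slots */
	int		nelems;		/* elements stored so far */
} MockBuildState;

static void
mock_init(MockBuildState *st, bool subcontext)
{
	/* larger initial size only when it comes out of a fresh subcontext */
	st->alen = subcontext ? 64 : 8;
	st->nelems = 0;
}

static void
mock_accum(MockBuildState *st)
{
	if (st->nelems >= st->alen)
		st->alen *= 2;		/* preserve doubling behavior */
	st->nelems++;
}
```

With the shared-context start of 8, a 64-element group goes through three doublings (8, 16, 32, 64) but ends at the same capacity a subcontext group started with, which is why the worst case Jeff tests later is groups of exactly 64 elements.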
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On 28.1.2015 21:28, Tom Lane wrote: Jeff Davis pg...@j-davis.com writes: On Tue, Jan 20, 2015 at 6:44 AM, Tom Lane t...@sss.pgh.pa.us wrote: Uh, sorry, I've not been paying any attention to this thread for awhile. What's the remaining questions at issue?

This patch is trying to improve the array_agg case where there are many arrays being constructed simultaneously, such as in HashAgg. You strongly suggested that a committable patch would differentiate between the grouped and ungrouped cases (or perhaps you meant differentiate between HashAgg and sorted aggregation?). Tomas's current patch does not do so; it does two main things: 1. Uses a common memory context for arrays being constructed by array_agg in any context (ungrouped, sorted agg, and HashAgg) 2. Reduces the initial array allocation to 8 elements from 64, but preserves doubling behavior

Sorry for slow response on this. I took a look at the v8 patch (that's still latest no?). I see that it only gets rid of the

Yes, v8 is the latest version I submitted.

separate context for the two callers array_agg_transfn and array_agg_array_transfn, for which no memory release can happen anyway until the parent context is reset (cf comments in corresponding finalfns). So that's fine --- it is not making any bloat case any worse, at least.

I'm not sure about whether reducing the initial Datum array size across-the-board is a good thing or not. One obvious hack to avoid unexpected side effects on other callers would be

    astate->alen = (subcontext ? 64 : 8);

The larger initial size is basically free when subcontext is true, anyway, considering it will be coming out of an 8K initial subcontext allocation.

Seems like a good idea. If we're using 8kB contexts, we can preallocate 64 elements right away. But maybe we could decrease the 8kB context size to 1kB? For 64 elements it's just 512B + 64B for the arrays, and a bit of space for the ArrayBuildState. So maybe ~600B in total.
Of course, this does not include space for pass-by-ref values. Anyway, I have no problem with this - I mostly care just about the array_agg() case. All the other places may adopt the 'common context' approach to get the benefit.

This still leaves us wondering whether the smaller initial size will hurt array_agg itself, but that's at least capable of being tested with a reasonably small number of test cases.

I plan to do more testing to confirm this - my initial testing seemed to confirm this, but I'll repeat that with the current patch.

I strongly object to removing initArrayResultArr's element_type argument. That's got nothing to do with the stated purpose of the patch, and it forces a non-too-cheap catalog lookup to be done inside initArrayResultArr. It's true that some of the current call sites are just doing the same lookup themselves anyway, but we should not foreclose the possibility that the caller has the data already (as some others do) or is able to cache it across multiple uses. A possible compromise is to add the convention that callers can pass InvalidOid if they want initArrayResultArr to do the lookup. (In any case, the function's header comment needs adjustment if the API spec changes.)

Fair point, and the InvalidOid approach seems sane to me.

Other than that, I think the API changes here are OK. Adding a new argument to existing functions is something we do all the time, and it's clear how to modify any callers to get the same behavior as before. We could perhaps clean things up with a more invasive redefinition, but I doubt it's worth inflicting more pain on third party callers.

Another thing I'd change is this:

+	/* we can only release the context if it's a private one. */
+	Assert(!(release && !astate->private_cxt));
+
	/* Clean up all the junk */
	if (release)
		MemoryContextDelete(astate->mcontext);

Why not this way:

	/* Clean up all the junk */
	if (release)
+	{
+		Assert(astate->private_cxt);
		MemoryContextDelete(astate->mcontext);
+	}

Seems a lot more understandable, and less code too.

Yeah, I agree it's easier to understand.

I concur with the concerns that the comments could do with more work, but haven't attempted to improve them myself.

There were a few comments about this, after the v8 patch, with recommended comment changes.

regards

--
Tomas Vondra  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Hi, attached is v9 of the patch, modified along the lines of Tom's comments:

1) uses alen=64 for cases with private context, 8 otherwise

2) reverts removal of element_type from initArrayResultArr() When element_type=InvalidOid is passed to initArrayResultArr, it performs lookup using get_element_type(), otherwise reuses the value it receives from the caller.

3) moves the assert into the 'if (release)' branch

4) includes the comments proposed by Ali Akbar in his reviews Warnings at makeArrayResult/makeMdArrayResult about freeing memory with private subcontexts.

Regarding the performance impact of decreasing the size of the preallocated array from 64 to just 8 elements, I tried this. CREATE TABLE test AS SELECT mod(i,10) a, i FROM generate_series(1,64*10) s(i); SELECT a, array_agg(i) AS x FROM test GROUP BY 1; or actually (to minimize transfer overhead): SELECT COUNT(x) FROM ( SELECT a, array_agg(i) AS x FROM test GROUP BY 1 ) foo; with work_mem=2GB (so that it really uses HashAggregate). The dataset is constructed to have exactly 64 items per group, thus exploiting the difference between alen=8 and alen=64.

With alen=8 I get these timings:

Time: 1892,681 ms
Time: 1879,046 ms
Time: 1892,626 ms
Time: 1892,155 ms
Time: 1880,282 ms
Time: 1868,344 ms
Time: 1873,294 ms

and with alen=64:

Time: 1888,244 ms
Time: 1882,991 ms
Time: 1885,157 ms
Time: 1868,935 ms
Time: 1878,053 ms
Time: 1894,871 ms
Time: 1871,571 ms

That's 1880 vs 1882 on average, so pretty much no difference. Would be nice if someone else could try this on their machine(s).
regards

--
Tomas Vondra  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training Services

diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c
index f3ce1d7..9eb4d63 100644
--- a/src/backend/executor/nodeSubplan.c
+++ b/src/backend/executor/nodeSubplan.c
@@ -262,7 +262,7 @@ ExecScanSubPlan(SubPlanState *node,
 	/* Initialize ArrayBuildStateAny in caller's context, if needed */
 	if (subLinkType == ARRAY_SUBLINK)
 		astate = initArrayResultAny(subplan->firstColType,
-									CurrentMemoryContext);
+									CurrentMemoryContext, true);

 	/*
 	 * We are probably in a short-lived expression-evaluation context. Switch
@@ -964,7 +964,7 @@ ExecSetParamPlan(SubPlanState *node, ExprContext *econtext)
 	/* Initialize ArrayBuildStateAny in caller's context, if needed */
 	if (subLinkType == ARRAY_SUBLINK)
 		astate = initArrayResultAny(subplan->firstColType,
-									CurrentMemoryContext);
+									CurrentMemoryContext, true);

 	/*
 	 * Must switch to per-query memory context.
diff --git a/src/backend/utils/adt/array_userfuncs.c b/src/backend/utils/adt/array_userfuncs.c
index 600646e..49fc23a 100644
--- a/src/backend/utils/adt/array_userfuncs.c
+++ b/src/backend/utils/adt/array_userfuncs.c
@@ -498,8 +498,13 @@ array_agg_transfn(PG_FUNCTION_ARGS)
 		elog(ERROR, "array_agg_transfn called in non-aggregate context");
 	}

-	state = PG_ARGISNULL(0) ? NULL : (ArrayBuildState *) PG_GETARG_POINTER(0);
+	if (PG_ARGISNULL(0))
+		state = initArrayResult(arg1_typeid, aggcontext, false);
+	else
+		state = (ArrayBuildState *) PG_GETARG_POINTER(0);
+
 	elem = PG_ARGISNULL(1) ? (Datum) 0 : PG_GETARG_DATUM(1);
+
 	state = accumArrayResult(state,
 							 elem,
 							 PG_ARGISNULL(1),
@@ -573,7 +578,12 @@ array_agg_array_transfn(PG_FUNCTION_ARGS)
 		elog(ERROR, "array_agg_array_transfn called in non-aggregate context");
 	}

-	state = PG_ARGISNULL(0) ? NULL : (ArrayBuildStateArr *) PG_GETARG_POINTER(0);
+	if (PG_ARGISNULL(0))
+		state = initArrayResultArr(arg1_typeid, InvalidOid, aggcontext, false);
+	else
+		state = (ArrayBuildStateArr *) PG_GETARG_POINTER(0);
+
 	state = accumArrayResultArr(state,
 								PG_GETARG_DATUM(1),
 								PG_ARGISNULL(1),
diff --git a/src/backend/utils/adt/arrayfuncs.c b/src/backend/utils/adt/arrayfuncs.c
index 5591b46..b8c4fba 100644
--- a/src/backend/utils/adt/arrayfuncs.c
+++ b/src/backend/utils/adt/arrayfuncs.c
@@ -4606,6 +4606,7 @@ array_insert_slice(ArrayType *destArray,
  *
  *	element_type is the array element type (must be a valid array element type)
  *	rcontext is where to keep working state
+ *	subcontext is a flag determining whether to use a separate memory context
  *
  * Note: there are two common schemes for using accumArrayResult().
  * In the older scheme, you start with a NULL ArrayBuildState pointer, and
@@ -4615,24 +4616,43 @@ array_insert_slice(ArrayType *destArray,
  * once per element. In this scheme you always end with a non-NULL pointer
  * that you can pass to makeArrayResult; you get an empty array if there
  * were no elements. This is preferred if an empty array is what you want.
+ *
+ * It's possible to choose whether to create a separate memory context for the
+ * array builde state, or whether to allocate it directly within rcontext
+ * (along with various other pieces). This influences memory
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Hi, On 21.1.2015 09:01, Jeff Davis wrote: On Tue, 2015-01-20 at 23:37 +0100, Tomas Vondra wrote: Tom's message where he points that out is here: http://www.postgresql.org/message-id/20707.1396372...@sss.pgh.pa.us

That message also says: I think a patch that stood a chance of getting committed would need to detect whether the aggregate was being called in simple or grouped contexts, and apply different behaviors in the two cases. I take that as an objection to any patch which does not distinguish between the grouped and ungrouped aggregate cases, which includes your patch.

I don't think 'simple context' in this case means 'aggregate without a group by clause'. The way I understood it back in April 2014 was that while the patch worked fine with grouped cases (i.e. in nodeAgg.c or such), the underlying functions are used elsewhere in a simple context (e.g. in an xpath() or so) - and in this case it was broken. And that was a correct point, and was fixed by the recent patches. But maybe I'm missing something?

I don't agree with that objection (or perhaps I don't understand it); but given the strong words above, I need to get some kind of response before I can consider committing your patch.

I generally agree that having two API 'facets' with different behavior is slightly awkward and asymmetric, but I wouldn't call that ugly.

Right, your words are more precise (and polite). My apologies.

Uh ... I wasn't suggesting calling the resulting API 'ugly' is impolite or so. It was meant just as a comment that the aesthetics of the API is quite a subjective matter. No apology needed.

I actually modified both APIs initially, but I think Ali is right that not breaking the existing API (and keeping the original behavior in that case) is better. We can break it any time we want in the future, but it's impossible to unbreak it ;-)

We can't break the old API, and I'm not suggesting that we do. I was hoping to find some alternative.

Why can't we?
I'm not saying we should in these cases, but there are cases when breaking the API is the right thing to do (e.g. when the behavior changes radically, and needs to be noticed by the users).

--
Tomas Vondra  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On Tue, 2015-01-20 at 23:37 +0100, Tomas Vondra wrote: Tom's message where he points that out is here: http://www.postgresql.org/message-id/20707.1396372...@sss.pgh.pa.us

That message also says: I think a patch that stood a chance of getting committed would need to detect whether the aggregate was being called in simple or grouped contexts, and apply different behaviors in the two cases. I take that as an objection to any patch which does not distinguish between the grouped and ungrouped aggregate cases, which includes your patch. I don't agree with that objection (or perhaps I don't understand it); but given the strong words above, I need to get some kind of response before I can consider committing your patch.

I generally agree that having two API 'facets' with different behavior is slightly awkward and asymmetric, but I wouldn't call that ugly.

Right, your words are more precise (and polite). My apologies.

I actually modified both APIs initially, but I think Ali is right that not breaking the existing API (and keeping the original behavior in that case) is better. We can break it any time we want in the future, but it's impossible to unbreak it ;-)

We can't break the old API, and I'm not suggesting that we do. I was hoping to find some alternative.

Regards, Jeff Davis
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
2015-01-20 18:17 GMT+07:00 Ali Akbar the.ap...@gmail.com: 2015-01-20 14:22 GMT+07:00 Jeff Davis pg...@j-davis.com: The current patch, which I am evaluating for commit, does away with per-group memory contexts (it uses a common context for all groups), and reduces the initial array allocation from 64 to 8 (but preserves doubling behavior). Jeff

Tomas, spotted this comment in v8 patch:

@@ -4713,6 +4733,11 @@ accumArrayResult(ArrayBuildState *astate,
 /*
  * makeArrayResult - produce 1-D final result of accumArrayResult
  *
+ * If the array build state was initialized with a separate memory context,
+ * this also frees all the memory (by deleting the subcontext). If a parent
+ * context was used directly, the memory may be freed by an explicit pfree()
+ * call (unless it's meant to be freed by destroying the parent context).
+ *
  *	astate is working state (must not be NULL)
  *	rcontext is where to construct result
  */

A simple pfree(astate) call is not enough to free the memory. If it's scalar accumulation (initArrayResult), the user must pfree(astate->dvalues) and pfree(astate->dnulls) before astate. If it's array accumulation, pfree(astate->data) and pfree(astate->nullbitmap), both of which can be NULL if no array was accumulated, and some other cases. If it's any (scalar or array) accumulation, it's more complicated. I suggest it's simpler to just force the API user to destroy the parent context instead. So the comment becomes like this:

@@ -4713,6 +4733,11 @@ accumArrayResult(ArrayBuildState *astate,
 /*
  * makeArrayResult - produce 1-D final result of accumArrayResult
  *
+ * If the array build state was initialized with a separate memory context,
+ * this also frees all the memory (by deleting the subcontext). If a parent
+ * context was used directly, the memory is meant to be freed by destroying
+ * the parent context.
+ *
  *	astate is working state (must not be NULL)
  *	rcontext is where to construct result
  */

Sorry, there is another comment for makeMdArrayResult, I suggest also changing it like this:

@@ -4738,6 +4764,12 @@ makeArrayResult(ArrayBuildState *astate,
  * beware: no check that specified dimensions match the number of values
  * accumulated.
  *
+ * beware: if the astate was not initialized within a separate memory
+ * context (i.e. using subcontext=false when calling initArrayResult),
+ * using release=true is illegal as it releases the whole context,
+ * and that may include other memory still used elsewhere (instead use
+ * release=false and release the memory with the parent context later)
+ *
  *	astate is working state (must not be NULL)
  *	rcontext is where to construct result
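The release=true restriction described in that comment can be mocked in a stand-alone way. The types below are illustrative stand-ins, not PostgreSQL's real ArrayBuildState or MemoryContext; the mock returns false instead of asserting so the rule is easy to exercise:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Mock of the makeMdArrayResult "release" rule: deleting the memory
 * context is only legal when the build state owns a private one.
 */
typedef struct
{
	bool	private_cxt;	/* did we create our own context? */
	void   *mcontext;		/* stand-in for the MemoryContext */
} MockReleaseState;

static bool
mock_release(MockReleaseState *astate, bool release)
{
	if (release)
	{
		if (!astate->private_cxt)
			return false;	/* illegal: would free a shared context */
		free(astate->mcontext);
		astate->mcontext = NULL;
	}
	return true;
}
```

A caller that used a shared context must pass release=false and let the parent context's reset reclaim the memory, which is exactly the convention the quoted comment asks for.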
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
2015-01-20 14:22 GMT+07:00 Jeff Davis pg...@j-davis.com: The current patch, which I am evaluating for commit, does away with per-group memory contexts (it uses a common context for all groups), and reduces the initial array allocation from 64 to 8 (but preserves doubling behavior). Jeff

Tomas, spotted this comment in v8 patch:

@@ -4713,6 +4733,11 @@ accumArrayResult(ArrayBuildState *astate,
 /*
  * makeArrayResult - produce 1-D final result of accumArrayResult
  *
+ * If the array build state was initialized with a separate memory context,
+ * this also frees all the memory (by deleting the subcontext). If a parent
+ * context was used directly, the memory may be freed by an explicit pfree()
+ * call (unless it's meant to be freed by destroying the parent context).
+ *
  *	astate is working state (must not be NULL)
  *	rcontext is where to construct result
  */

A simple pfree(astate) call is not enough to free the memory. If it's scalar accumulation (initArrayResult), the user must pfree(astate->dvalues) and pfree(astate->dnulls) before astate. If it's array accumulation, pfree(astate->data) and pfree(astate->nullbitmap), both of which can be NULL if no array was accumulated, and some other cases. If it's any (scalar or array) accumulation, it's more complicated. I suggest it's simpler to just force the API user to destroy the parent context instead. So the comment becomes like this:

@@ -4713,6 +4733,11 @@ accumArrayResult(ArrayBuildState *astate,
 /*
  * makeArrayResult - produce 1-D final result of accumArrayResult
  *
+ * If the array build state was initialized with a separate memory context,
+ * this also frees all the memory (by deleting the subcontext). If a parent
+ * context was used directly, the memory is meant to be freed by destroying
+ * the parent context.
+ *
  *	astate is working state (must not be NULL)
  *	rcontext is where to construct result
  */

Regards, -- Ali Akbar
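Ali's point that a bare pfree(astate) leaks the member arrays can be sketched with a mock of the scalar-accumulation case. The struct and helper names are illustrative; the real fields live in PostgreSQL's ArrayBuildState, and a counter stands in for observing that every allocation is released:

```c
#include <assert.h>
#include <stdlib.h>

/* Counts releases so the free order is observable in a test. */
static int	nfreed = 0;

static void
counted_free(void *p)
{
	free(p);
	nfreed++;
}

/* Mock of the scalar build state's separately allocated members. */
typedef struct
{
	void   *dvalues;	/* element values */
	void   *dnulls;		/* null flags */
} MockScalarState;

static void
release_scalar_state(MockScalarState *st)
{
	counted_free(st->dvalues);	/* members first... */
	counted_free(st->dnulls);
	counted_free(st);			/* ...then the struct itself */
}
```

Freeing only the struct would release one of the three allocations and leak the other two, which is why the thread settles on letting the parent context's destruction reclaim everything instead.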
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Jeff Davis pg...@j-davis.com writes: Tom (tgl), Is my reasoning above acceptable? Uh, sorry, I've not been paying any attention to this thread for awhile. What's the remaining questions at issue? regards, tom lane
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Hi, On 20.1.2015 12:23, Ali Akbar wrote: 2015-01-20 18:17 GMT+07:00 Ali Akbar the.ap...@gmail.com

Sorry, there is another comment for makeMdArrayResult, I suggest also changing it like this:

@@ -4738,6 +4764,12 @@ makeArrayResult(ArrayBuildState *astate,
  * beware: no check that specified dimensions match the number of values
  * accumulated.
  *
+ * beware: if the astate was not initialized within a separate memory
+ * context (i.e. using subcontext=false when calling initArrayResult),
+ * using release=true is illegal as it releases the whole context,
+ * and that may include other memory still used elsewhere (instead use
+ * release=false and release the memory with the parent context later)
+ *
  *	astate is working state (must not be NULL)
  *	rcontext is where to construct result

I think both comment fixes are appropriate. I'll wait a bit and then post an updated version of the patch (unless it gets committed with the comment fixes before that).

--
Tomas Vondra  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On Tue, Jan 20, 2015 at 6:44 AM, Tom Lane t...@sss.pgh.pa.us wrote: Jeff Davis pg...@j-davis.com writes: Tom (tgl), Is my reasoning above acceptable? Uh, sorry, I've not been paying any attention to this thread for awhile. What's the remaining questions at issue?

This patch is trying to improve the array_agg case where there are many arrays being constructed simultaneously, such as in HashAgg. You strongly suggested that a committable patch would differentiate between the grouped and ungrouped cases (or perhaps you meant differentiate between HashAgg and sorted aggregation?). Tomas's current patch does not do so; it does two main things:

1. Uses a common memory context for arrays being constructed by array_agg in any context (ungrouped, sorted agg, and HashAgg)
2. Reduces the initial array allocation to 8 elements from 64, but preserves doubling behavior

I don't see either as a big problem, but perhaps there are some downsides in some cases. I think a worst case would be a slowdown in the sorted agg case where every group has 64 elements, so I'll try to see if that's a real problem or not. If you saw a bigger problem, please let me know; and if not, I'll proceed with the review.

There are also some other concerns I have about the API ugliness, which I believe is the reason there's so much discussion about making the comments better. The reason the API is ugly is for backwards compatibility, so there's no perfect solution. Your opinion is welcome here too, but I mainly need to see if your objection above has been addressed.

Regards, Jeff Davis
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Hi, On 20.1.2015 21:13, Jeff Davis wrote: On Tue, Jan 20, 2015 at 6:44 AM, Tom Lane t...@sss.pgh.pa.us wrote: Jeff Davis pg...@j-davis.com writes: Tom (tgl), Is my reasoning above acceptable? Uh, sorry, I've not been paying any attention to this thread for awhile. What's the remaining questions at issue?

This patch is trying to improve the array_agg case where there are many arrays being constructed simultaneously, such as in HashAgg. You strongly suggested that a committable patch would differentiate between the grouped and ungrouped cases (or perhaps you meant differentiate between HashAgg and sorted aggregation?). Tomas's current patch does not do so; it does two main things:

I don't think that's entirely true. The problem with the initial (experimental) patch was that while it fixed aggregate queries, it mostly ignored all the other callers, and either resulted in memory corruption (unexpected pfree) or bloat (when not doing the pfree). Tom's message where he points that out is here: http://www.postgresql.org/message-id/20707.1396372...@sss.pgh.pa.us

The current patch handles that distinction properly (IMHO): all the callers using the underlying array functions use the original approach (with subcontexts), and only array_agg uses the new API and forces subcontext=false.

1. Uses a common memory context for arrays being constructed by array_agg in any context (ungrouped, sorted agg, and HashAgg)
2. Reduces the initial array allocation to 8 elements from 64, but preserves doubling behavior

Yes, that's true. I'd like to point out that while the current code uses 64 items, it also uses 8kB per-group contexts. That's slightly overkill I guess ...

I don't see either as a big problem, but perhaps there are some downsides in some cases. I think a worst case would be a slowdown in the sorted agg case where every group has 64 elements, so I'll try to see if that's a real problem or not.
If you saw a bigger problem, please let me know; and if not, I'll proceed with the review.

FWIW I've done a fair number of measurements and not noticed any measurable difference (unless using a rather crazy testcase, IIRC). Certainly the issues with excessive memory consumption (and swapping) were much worse.

There are also some other concerns I have about the API ugliness, which I believe is the reason there's so much discussion about making the comments better. The reason the API is ugly is for backwards compatibility, so there's no perfect solution. Your opinion is welcome here too, but I mainly need to see if your objection above has been addressed.

I generally agree that having two API 'facets' with different behavior is slightly awkward and asymmetric, but I wouldn't call that ugly. I actually modified both APIs initially, but I think Ali is right that not breaking the existing API (and keeping the original behavior in that case) is better. We can break it any time we want in the future, but it's impossible to unbreak it ;-)

regards

-- Tomas Vondra   http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
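To put rough numbers on the 8kB-per-group observation above, here is a back-of-the-envelope model. This is a sketch only: the 8kB context size and the 64-slot preallocation come from the discussion above, Datum width is assumed to be sizeof(long), and per-chunk allocator overhead is ignored; none of this is PostgreSQL code.

```c
#include <stddef.h>

/* Toy model, not PostgreSQL code: fixed per-group startup cost of the
 * old scheme (a private 8kB memory context block plus 64 preallocated
 * Datum slots per group) versus the patched scheme (a shared context
 * and only 8 preallocated slots per group). */
static size_t old_scheme_bytes(size_t ngroups)
{
    return ngroups * (8192 + 64 * sizeof(long));    /* 8kB ctx + 64 Datums */
}

static size_t new_scheme_bytes(size_t ngroups)
{
    return ngroups * (8 * sizeof(long));            /* 8 Datums, no private ctx */
}
```

With many small groups the fixed overhead dominates: on a 64-bit machine the old scheme pays 8704 bytes per group before storing a single element, versus 64 bytes for the patched one, which is why the HashAgg-with-many-small-groups case blew up so easily.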
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On Wed, Jan 14, 2015 at 1:52 PM, Tomas Vondra tomas.von...@2ndquadrant.com wrote: Attached is v8 patch, with a few comments added: 1) before initArrayResult() - explaining when it's better to use a single memory context, and when it's more efficient to use a separate memory context for each array build state 2) before makeArrayResult() - explaining that it won't free memory when allocated in a single memory context (and that a pfree() has to be used if necessary) 3) before makeMdArrayResult() - explaining that it's illegal to use release=true unless using a subcontext

I understand there is history to this API, and we need to be compatible, but the end result is awkward. I'm wondering whether it would be better to just have a new set of functions like accumArrayResultElem, etc., and only allow astate=NULL in the original accumArrayResult(). That cure might be worse than the disease though.

Regards, Jeff Davis
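A toy sketch of what that split might look like (entirely hypothetical: accumArrayResultElem does not exist, and ToyState stands in for ArrayBuildState; the point is only that the NULL-start convention would be confined to the legacy entry point):

```c
#include <stdlib.h>

typedef struct ToyState
{
    int n;              /* number of accumulated elements */
    int subcontext;     /* does this state own a private context? */
} ToyState;

static ToyState *toy_init(int subcontext)
{
    ToyState *s = (ToyState *) malloc(sizeof(ToyState));
    s->n = 0;
    s->subcontext = subcontext;
    return s;
}

/* Legacy-style entry point: astate may be NULL on the first call,
 * in which case it initializes itself with the old behavior. */
static ToyState *toy_accum(ToyState *s, int elem)
{
    (void) elem;
    if (s == NULL)
        s = toy_init(1);    /* legacy callers always get a subcontext */
    s->n++;
    return s;
}

/* New-style entry point: an explicit toy_init() is mandatory, so the
 * subcontext decision is made in exactly one place. */
static ToyState *toy_accum_elem(ToyState *s, int elem)
{
    (void) elem;
    s->n++;
    return s;
}
```

The appeal is that old callers keep working unchanged while new callers can never forget the init step; the cost is two parallel accumulate paths to maintain, which is presumably the "cure worse than the disease" worry.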
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On Sun, Dec 28, 2014 at 11:53 PM, Jeff Davis pg...@j-davis.com wrote: On Tue, 2014-04-01 at 13:08 -0400, Tom Lane wrote: I think a patch that stood a chance of getting committed would need to detect whether the aggregate was being called in simple or grouped contexts, and apply different behaviors in the two cases.

The simple context doesn't seem like a big problem even if we change things as Tomas suggests: IMNSHO these are the issues we really should fix - by lowering the initial element count (64 -> 4) and using a single memory context.

In the simple context, there's only one context regardless, so the only cost I see is from reducing the initial allocation from 64 to some lower number. But if we're doubling each time, it won't take long to get there; and because it's the simple context, we only need to do it once.

Tom (tgl), Is my reasoning above acceptable?

The current patch, which I am evaluating for commit, does away with per-group memory contexts (it uses a common context for all groups), and reduces the initial array allocation from 64 to 8 (but preserves doubling behavior).

Regards, Jeff Davis
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On 12.1.2015 01:28, Ali Akbar wrote: Or else we implement what you suggest below (more comments below): Thinking about the 'release' flag a bit more - maybe we could do this instead:

if (release && astate->private_cxt)
    MemoryContextDelete(astate->mcontext);
else if (release)
{
    pfree(astate->dvalues);
    pfree(astate->dnulls);
    pfree(astate);
}

i.e. either destroy the whole context if possible, and just free the memory when using a shared memory context. But I'm afraid this would penalize the shared memory context, because that's intended for cases where all the build states coexist in parallel and then at some point are all converted into a result and thrown away. Adding pfree() calls is no improvement here, and just wastes cycles.

As per Tom's comment, I'm using "parent memory context" instead of "shared memory context" below. In the future, if some code writer decides to use subcontext=false, to save memory in cases where there are many array accumulations, and the parent memory context is long-lived, the current code can cause a memory leak. So I think we should implement your suggestion (pfreeing astate), and warn about the implication in the API comment. The API user must choose between release=true, wasting cycles but preventing a memory leak, or managing memory from the parent memory context.

I'm wondering whether this is necessary after fixing makeArrayResult to use the private_cxt flag. It's still possible to call makeMdArrayResult directly (with the wrong 'release' value). Another option might be to get rid of the 'release' flag altogether, and just use the 'private_cxt' - I'm not aware of any code using release=false with private_cxt=true (e.g. to build the same array twice from the same astate). But maybe there's such code, and another downside is that it'd break the existing API.

In one possible use case, for efficiency maybe the caller will create a special parent memory context for all array accumulation.
Then it uses makeArrayResult* with release=false, and in the end releases the parent memory context once and for all.

Yeah, although I'd much rather not break the existing code at all. That is - my goal is not to make it slower unless absolutely necessary (and in that case the code may be fixed per your suggestion). But I'm not convinced it's worth it.

OK. Do you think we need to note this in the comments? Something like this: If using subcontext=false, the caller must be careful about memory usage, because makeArrayResult* will not free the memory used.

Yes, I think it's worth mentioning.

But I think it makes sense to move the error handling into initArrayResultArr(), including the get_element_type() call, and remove the element_type from the signature. This means initArrayResultAny() will call the get_element_type() twice, but I guess that's negligible. And everyone who calls initArrayResultArr() will get the error handling for free.

Patch v7 attached, implementing those changes, i.e.
* makeMdArrayResult(..., astate->private_cxt)
* move error handling into initArrayResultArr()
* remove element_type from initArrayResultArr() signature

Reviewing the v7 patch:
- applies cleanly to current master; patch format, whitespace, etc. is good
- make check runs without error
- performance & memory usage still consistent

If you think we don't have to add the comment (see above), I'll mark this as ready for committer.

Attached is v8 patch, with a few comments added: 1) before initArrayResult() - explaining when it's better to use a single memory context, and when it's more efficient to use a separate memory context for each array build state 2) before makeArrayResult() - explaining that it won't free memory when allocated in a single memory context (and that a pfree() has to be used if necessary) 3) before makeMdArrayResult() - explaining that it's illegal to use release=true unless using a subcontext

Otherwise the v8 patch is exactly the same as v7.
Assuming the comments make it sufficiently clear, I agree with marking this as 'ready for committer'.

regards

-- Tomas Vondra   http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services

diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c
index f3ce1d7..9eb4d63 100644
--- a/src/backend/executor/nodeSubplan.c
+++
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Or else we implement what you suggest below (more comments below): Thinking about the 'release' flag a bit more - maybe we could do this instead:

if (release && astate->private_cxt)
    MemoryContextDelete(astate->mcontext);
else if (release)
{
    pfree(astate->dvalues);
    pfree(astate->dnulls);
    pfree(astate);
}

i.e. either destroy the whole context if possible, and just free the memory when using a shared memory context. But I'm afraid this would penalize the shared memory context, because that's intended for cases where all the build states coexist in parallel and then at some point are all converted into a result and thrown away. Adding pfree() calls is no improvement here, and just wastes cycles.

As per Tom's comment, I'm using "parent memory context" instead of "shared memory context" below. In the future, if some code writer decides to use subcontext=false, to save memory in cases where there are many array accumulations, and the parent memory context is long-lived, the current code can cause a memory leak. So I think we should implement your suggestion (pfreeing astate), and warn about the implication in the API comment. The API user must choose between release=true, wasting cycles but preventing a memory leak, or managing memory from the parent memory context.

I'm wondering whether this is necessary after fixing makeArrayResult to use the private_cxt flag. It's still possible to call makeMdArrayResult directly (with the wrong 'release' value). Another option might be to get rid of the 'release' flag altogether, and just use the 'private_cxt' - I'm not aware of any code using release=false with private_cxt=true (e.g. to build the same array twice from the same astate). But maybe there's such code, and another downside is that it'd break the existing API.

In one possible use case, for efficiency maybe the caller will create a special parent memory context for all array accumulation. Then it uses makeArrayResult* with release=false, and in the end releases the parent memory context once and for all.
Yeah, although I'd much rather not break the existing code at all. That is - my goal is not to make it slower unless absolutely necessary (and in that case the code may be fixed per your suggestion). But I'm not convinced it's worth it.

OK. Do you think we need to note this in the comments? Something like this: If using subcontext=false, the caller must be careful about memory usage, because makeArrayResult* will not free the memory used.

But I think it makes sense to move the error handling into initArrayResultArr(), including the get_element_type() call, and remove the element_type from the signature. This means initArrayResultAny() will call the get_element_type() twice, but I guess that's negligible. And everyone who calls initArrayResultArr() will get the error handling for free.

Patch v7 attached, implementing those changes, i.e.
* makeMdArrayResult(..., astate->private_cxt)
* move error handling into initArrayResultArr()
* remove element_type from initArrayResultArr() signature

Reviewing the v7 patch:
- applies cleanly to current master; patch format, whitespace, etc. is good
- make check runs without error
- performance & memory usage still consistent

If you think we don't have to add the comment (see above), I'll mark this as ready for committer.

Regards,
-- Ali Akbar
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Hi,

On 8.1.2015 08:53, Ali Akbar wrote: In the CF, the status becomes "Needs Review". Let's continue our discussion of makeArrayResult* behavior if subcontext=false and release=true (more below):

2014-12-22 8:08 GMT+07:00 Ali Akbar the.ap...@gmail.com: With this API, I think we should make it clear that if we call initArrayResult with subcontext=false, we can't call makeArrayResult, but must use makeMdArrayResult directly. Or better, we can modify makeArrayResult to release according to astate->private_cxt:

@@ -4742,7 +4742,7 @@ makeArrayResult(ArrayBuildState *astate,
 	dims[0] = astate->nelems;
 	lbs[0] = 1;
 
-	return makeMdArrayResult(astate, ndims, dims, lbs, rcontext, true);
+	return makeMdArrayResult(astate, ndims, dims, lbs, rcontext, astate->private_cxt);

I've done this, so makeArrayResult() uses the private_cxt flag.

Or else we implement what you suggest below (more comments below): Thinking about the 'release' flag a bit more - maybe we could do this instead:

if (release && astate->private_cxt)
    MemoryContextDelete(astate->mcontext);
else if (release)
{
    pfree(astate->dvalues);
    pfree(astate->dnulls);
    pfree(astate);
}

i.e. either destroy the whole context if possible, and just free the memory when using a shared memory context. But I'm afraid this would penalize the shared memory context, because that's intended for cases where all the build states coexist in parallel and then at some point are all converted into a result and thrown away. Adding pfree() calls is no improvement here, and just wastes cycles.

As per Tom's comment, I'm using "parent memory context" instead of "shared memory context" below. In the future, if some code writer decides to use subcontext=false, to save memory in cases where there are many array accumulations, and the parent memory context is long-lived, the current code can cause a memory leak. So I think we should implement your suggestion (pfreeing astate), and warn about the implication in the API comment.
The API user must choose between release=true, wasting cycles but preventing a memory leak, or managing memory from the parent memory context.

I'm wondering whether this is necessary after fixing makeArrayResult to use the private_cxt flag. It's still possible to call makeMdArrayResult directly (with the wrong 'release' value). Another option might be to get rid of the 'release' flag altogether, and just use the 'private_cxt' - I'm not aware of any code using release=false with private_cxt=true (e.g. to build the same array twice from the same astate). But maybe there's such code, and another downside is that it'd break the existing API.

In one possible use case, for efficiency maybe the caller will create a special parent memory context for all array accumulation. Then it uses makeArrayResult* with release=false, and in the end releases the parent memory context once and for all.

Yeah, although I'd much rather not break the existing code at all. That is - my goal is not to make it slower unless absolutely necessary (and in that case the code may be fixed per your suggestion). But I'm not convinced it's worth it.

As for the v6 patch:
- the patch applies cleanly to master
- make check is successful
- memory benefit is still there
- performance benefit I think is negligible

Reviewing the code, found this:

@@ -573,7 +578,22 @@ array_agg_array_transfn(PG_FUNCTION_ARGS)
 		elog(ERROR, "array_agg_array_transfn called in non-aggregate context");
 	}
 
-	state = PG_ARGISNULL(0) ? NULL : (ArrayBuildStateArr *) PG_GETARG_POINTER(0);
+
+	if (PG_ARGISNULL(0))
+	{
+		Oid			element_type = get_element_type(arg1_typeid);
+
+		if (!OidIsValid(element_type))
+			ereport(ERROR,
+					(errcode(ERRCODE_DATATYPE_MISMATCH),
+					 errmsg("data type %s is not an array type",
+							format_type_be(arg1_typeid))));

digging more, it looks like that code is required because accumArrayResultArr checks the element type:

	/* First time through --- initialize */
	Oid			element_type = get_element_type(array_type);

	if (!OidIsValid(element_type))
		ereport(ERROR,
				(errcode(ERRCODE_DATATYPE_MISMATCH),
				 errmsg("data type %s is not an array type",
						format_type_be(array_type))));
	astate = initArrayResultArr(array_type, element_type, rcontext, true);

I think
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
2014-12-29 14:38 GMT+07:00 Jeff Davis pg...@j-davis.com: Just jumping into this patch now. Do we think this is worth changing the signature of functions in array.h, which might be used from a lot of third-party code? We might want to provide new functions to avoid a breaking change.

The v6 patch from Tomas only changes the initArrayResult* functions. initArrayResult is a new API in 9.5 (commit bac2739); the old API still works as-is.

-- Ali Akbar
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On Sun, 2014-12-21 at 13:00 -0500, Tom Lane wrote: Tomas Vondra t...@fuzzy.cz writes: i.e. either destroy the whole context if possible, and just free the memory when using a shared memory context. But I'm afraid this would penalize the shared memory context, because that's intended for cases where all the build states coexist in parallel and then at some point are all converted into a result and thrown away. Adding pfree() calls is no improvement here, and just wastes cycles.

FWIW, I quite dislike the terminology "shared memory context", because it sounds too much like it means a context in shared memory. I see that the patch itself doesn't use that phrase, which is good, but can we come up with some other phrase for talking about it?

"Common memory context"?

Regards, Jeff Davis
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On Tue, 2014-12-16 at 00:27 +0100, Tomas Vondra wrote:

plperl.c: In function 'array_to_datum_internal':
plperl.c:1196: error: too few arguments to function 'accumArrayResult'
plperl.c: In function 'plperl_array_to_datum':
plperl.c:1223: error: too few arguments to function 'initArrayResult'

Cheers, Thanks, attached is a version that fixes this.

Just jumping into this patch now. Do we think this is worth changing the signature of functions in array.h, which might be used from a lot of third-party code? We might want to provide new functions to avoid a breaking change.

Regards, Jeff Davis
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On Tue, 2014-04-01 at 13:08 -0400, Tom Lane wrote: I think a patch that stood a chance of getting committed would need to detect whether the aggregate was being called in simple or grouped contexts, and apply different behaviors in the two cases.

The simple context doesn't seem like a big problem even if we change things as Tomas suggests: IMNSHO these are the issues we really should fix - by lowering the initial element count (64 -> 4) and using a single memory context.

In the simple context, there's only one context regardless, so the only cost I see is from reducing the initial allocation from 64 to some lower number. But if we're doubling each time, it won't take long to get there; and because it's the simple context, we only need to do it once.

Regards, Jeff Davis
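The "won't take long to get there" argument is easy to sanity-check with a toy model of the doubling policy (an illustration only, not the actual accumArrayResult code):

```c
#include <stddef.h>

/* Count how many doublings an array build state needs to grow its
 * element buffer from 'init' slots to at least 'target' slots. */
static size_t doubling_steps(size_t init, size_t target)
{
    size_t steps = 0;
    size_t cap = init;

    while (cap < target)
    {
        cap *= 2;       /* same growth policy the patch preserves */
        steps++;
    }
    return steps;
}
```

Starting from 8 instead of 64 costs exactly log2(64/8) = 3 extra repallocs no matter how large the array ends up, and in the simple (ungrouped) context that price is paid only once.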
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On 21.12.2014 02:54, Alvaro Herrera wrote: Tomas Vondra wrote: Attached is v5 of the patch, fixing an error with releasing a shared memory context (invalid flag values in a few calls). The functions that gain a new argument should get their comment updated, to explain what the new argument is for.

Right. I've added a short description of the 'subcontext' parameter to all three variations of the initArray* function, and a more thorough explanation to initArrayResult().

Also, what is it with this hunk?

@@ -4768,6 +4770,9 @@ makeMdArrayResult(ArrayBuildState *astate,
 
 	MemoryContextSwitchTo(oldcontext);
 
+	/* we can only release the context if it's a private one. */
+	// Assert(!(release && !astate->private_cxt));
+
 	/* Clean up all the junk */
 	if (release)
 		MemoryContextDelete(astate->mcontext);

That's a mistake, of course - the assert should not be commented out. Attached is v6 of the patch, with the comments and assert fixed.

Thinking about the 'release' flag a bit more - maybe we could do this instead:

if (release && astate->private_cxt)
    MemoryContextDelete(astate->mcontext);
else if (release)
{
    pfree(astate->dvalues);
    pfree(astate->dnulls);
    pfree(astate);
}

i.e. either destroy the whole context if possible, and just free the memory when using a shared memory context. But I'm afraid this would penalize the shared memory context, because that's intended for cases where all the build states coexist in parallel and then at some point are all converted into a result and thrown away. Adding pfree() calls is no improvement here, and just wastes cycles.
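The two teardown paths being weighed here can be sketched with a toy build state (ToyBuildState, toy_release, and the counters are invented names for illustration; the real code operates on an ArrayBuildState and a MemoryContext):

```c
#include <stdlib.h>

typedef struct ToyBuildState
{
    int  *dvalues;
    char *dnulls;
    int   private_cxt;      /* did this state create its own context? */
} ToyBuildState;

static int contexts_deleted = 0;    /* stand-in for MemoryContextDelete */
static int chunks_freed = 0;        /* stand-in for individual pfree()s */

static void toy_release(ToyBuildState *astate, int release)
{
    if (release && astate->private_cxt)
        contexts_deleted++;         /* one cheap bulk teardown */
    else if (release)
    {
        /* shared context: can only pfree the individual chunks --
         * wasted cycles when many states are thrown away together */
        free(astate->dvalues);
        free(astate->dnulls);
        chunks_freed += 2;
    }
}
```

The objection above is the else branch: when thousands of build states share one context that gets destroyed wholesale anyway, every per-state pfree() is pure overhead.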
regards
Tomas

diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c
index d9faf20..9c97755 100644
--- a/src/backend/executor/nodeSubplan.c
+++ b/src/backend/executor/nodeSubplan.c
@@ -262,7 +262,7 @@ ExecScanSubPlan(SubPlanState *node,
 	/* Initialize ArrayBuildStateAny in caller's context, if needed */
 	if (subLinkType == ARRAY_SUBLINK)
 		astate = initArrayResultAny(subplan->firstColType,
-									CurrentMemoryContext);
+									CurrentMemoryContext, true);
 
 	/*
 	 * We are probably in a short-lived expression-evaluation context. Switch
@@ -964,7 +964,7 @@ ExecSetParamPlan(SubPlanState *node, ExprContext *econtext)
 	/* Initialize ArrayBuildStateAny in caller's context, if needed */
 	if (subLinkType == ARRAY_SUBLINK)
 		astate = initArrayResultAny(subplan->firstColType,
-									CurrentMemoryContext);
+									CurrentMemoryContext, true);
 
 	/*
 	 * Must switch to per-query memory context.
diff --git a/src/backend/utils/adt/array_userfuncs.c b/src/backend/utils/adt/array_userfuncs.c
index 50ea4d2..f434fdd 100644
--- a/src/backend/utils/adt/array_userfuncs.c
+++ b/src/backend/utils/adt/array_userfuncs.c
@@ -498,8 +498,13 @@ array_agg_transfn(PG_FUNCTION_ARGS)
 		elog(ERROR, "array_agg_transfn called in non-aggregate context");
 	}
 
-	state = PG_ARGISNULL(0) ? NULL : (ArrayBuildState *) PG_GETARG_POINTER(0);
+	if (PG_ARGISNULL(0))
+		state = initArrayResult(arg1_typeid, aggcontext, false);
+	else
+		state = (ArrayBuildState *) PG_GETARG_POINTER(0);
+
 	elem = PG_ARGISNULL(1) ? (Datum) 0 : PG_GETARG_DATUM(1);
+
 	state = accumArrayResult(state,
 							 elem,
 							 PG_ARGISNULL(1),
@@ -573,7 +578,22 @@ array_agg_array_transfn(PG_FUNCTION_ARGS)
 		elog(ERROR, "array_agg_array_transfn called in non-aggregate context");
 	}
 
-	state = PG_ARGISNULL(0) ? NULL : (ArrayBuildStateArr *) PG_GETARG_POINTER(0);
+
+	if (PG_ARGISNULL(0))
+	{
+		Oid			element_type = get_element_type(arg1_typeid);
+
+		if (!OidIsValid(element_type))
+			ereport(ERROR,
+					(errcode(ERRCODE_DATATYPE_MISMATCH),
+					 errmsg("data type %s is not an array type",
+							format_type_be(arg1_typeid))));
+
+		state = initArrayResultArr(arg1_typeid, element_type, aggcontext, false);
+	}
+	else
+		state = (ArrayBuildStateArr *) PG_GETARG_POINTER(0);
+
 	state = accumArrayResultArr(state,
 								PG_GETARG_DATUM(1),
 								PG_ARGISNULL(1),
diff --git a/src/backend/utils/adt/arrayfuncs.c b/src/backend/utils/adt/arrayfuncs.c
index 933c6b0..0302caf 100644
--- a/src/backend/utils/adt/arrayfuncs.c
+++ b/src/backend/utils/adt/arrayfuncs.c
@@ -4606,6 +4606,7 @@ array_insert_slice(ArrayType *destArray,
 *
 *	element_type is the array element type (must be a valid array element type)
 *	rcontext is where to keep working state
+*	subcontext is a flag determining whether to use a separate memory context
 *
 * Note: there are two common schemes for using accumArrayResult().
 * In the older scheme, you start with a NULL ArrayBuildState pointer, and
@@ -4615,24 +4616,36 @@ array_insert_slice(ArrayType *destArray,
 * once per element.  In this scheme you always end with a non-NULL pointer
 * that you can pass to makeArrayResult; you get an empty array if there
 * were no elements.  This is preferred if an empty array is what you want.
+*
+* You may choose whether to
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Tomas Vondra t...@fuzzy.cz writes: i.e. either destroy the whole context if possible, and just free the memory when using a shared memory context. But I'm afraid this would penalize the shared memory context, because that's intended for cases where all the build states coexist in parallel and then at some point are all converted into a result and thrown away. Adding pfree() calls is no improvement here, and just wastes cycles.

FWIW, I quite dislike the terminology "shared memory context", because it sounds too much like it means a context in shared memory. I see that the patch itself doesn't use that phrase, which is good, but can we come up with some other phrase for talking about it?

regards, tom lane
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Another positive benefit is that this won't break the code unless it uses the new API. This is a problem especially with external code (e.g. extensions), but the new API (initArray*) is not part of 9.4 so there's no such code. So that's nice. The one annoying thing is that this makes the API slightly unbalanced. With the new API you can use a shared memory context, which with the old one (not using the initArray* methods) you can't. But I'm OK with that, and it makes the patch smaller (15kB -> 11kB).

Yes, with this API, we can backpatch this patch to 9.4 (or below) if we need it there. I think this API is a good compromise between the old API and the new one. Ideally, if we can migrate all code to the new API (all code must call initArrayResult* before accumArrayResult*), we can remove the MemoryContext rcontext parameter from accumArrayResult. Currently, the code isn't using rcontext for anything except old-API calls (in the first call to accumArrayResult).

2014-12-21 20:38 GMT+07:00 Tomas Vondra t...@fuzzy.cz: On 21.12.2014 02:54, Alvaro Herrera wrote: Tomas Vondra wrote: Attached is v5 of the patch, fixing an error with releasing a shared memory context (invalid flag values in a few calls). The functions that gain a new argument should get their comment updated, to explain what the new argument is for. Right. I've added a short description of the 'subcontext' parameter to all three variations of the initArray* function, and a more thorough explanation to initArrayResult().

With this API, I think we should make it clear that if we call initArrayResult with subcontext=false, we can't call makeArrayResult, but must use makeMdArrayResult directly.
Or better, we can modify makeArrayResult to release according to astate->private_cxt:

@@ -4742,7 +4742,7 @@ makeArrayResult(ArrayBuildState *astate,
 	dims[0] = astate->nelems;
 	lbs[0] = 1;
 
-	return makeMdArrayResult(astate, ndims, dims, lbs, rcontext, true);
+	return makeMdArrayResult(astate, ndims, dims, lbs, rcontext, astate->private_cxt);

Or else we implement what you suggest below (more comments below): Thinking about the 'release' flag a bit more - maybe we could do this instead:

if (release && astate->private_cxt)
    MemoryContextDelete(astate->mcontext);
else if (release)
{
    pfree(astate->dvalues);
    pfree(astate->dnulls);
    pfree(astate);
}

i.e. either destroy the whole context if possible, and just free the memory when using a shared memory context. But I'm afraid this would penalize the shared memory context, because that's intended for cases where all the build states coexist in parallel and then at some point are all converted into a result and thrown away. Adding pfree() calls is no improvement here, and just wastes cycles.

As per Tom's comment, I'm using "parent memory context" instead of "shared memory context" below. In the future, if some code writer decides to use subcontext=false, to save memory in cases where there are many array accumulations, and the parent memory context is long-lived, the current code can cause a memory leak. So I think we should implement your suggestion (pfreeing astate), and warn about the implication in the API comment. The API user must choose between release=true, wasting cycles but preventing a memory leak, or managing memory from the parent memory context.

In one possible use case, for efficiency maybe the caller will create a special parent memory context for all array accumulation. Then it uses makeArrayResult* with release=false, and in the end releases the parent memory context once and for all.

Regards,
-- Ali Akbar
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On 12/21/14, 7:08 PM, Ali Akbar wrote: Another positive benefit is that this won't break the code unless it uses the new API. This is a problem especially with external code (e.g. extensions), but the new API (initArray*) is not part of 9.4 so there's no such code. So that's nice. The one annoying thing is that this makes the API slightly unbalanced. With the new API you can use a shared memory context, which with the old one (not using the initArray* methods) you can't. But I'm OK with that, and it makes the patch smaller (15kB -> 11kB). Yes, with this API, we can backpatch this patch to 9.4 (or below) if we need it there. I think this API is a good compromise between the old API and the new one. Ideally, if we can migrate all code to the new API (all code must call initArrayResult* before accumArrayResult*), we can remove the MemoryContext rcontext parameter from accumArrayResult. Currently, the code isn't using rcontext for anything except old-API calls (in the first call to accumArrayResult).

Until we eliminate the old API, though, we should leave something in place that still uses it, to make certain we don't accidentally break it.

-- Jim Nasby, Data Architect, Blue Treble Consulting Data in Trouble? Get it in Treble! http://BlueTreble.com
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
2014-12-16 11:01 GMT+07:00 Ali Akbar the.ap...@gmail.com: 2014-12-16 10:47 GMT+07:00 Ali Akbar the.ap...@gmail.com: 2014-12-16 6:27 GMT+07:00 Tomas Vondra t...@fuzzy.cz:

Just fast-viewing the patch. The patch does not implement the check for not creating a new context in initArrayResultArr. I think we should implement it there too, for consistency (and to prevent future problems).

Testing the performance with your query, it looks promising: the speedup is between 12% ~ 15%.

Because I'm using 32-bit systems, setting work_mem to 1024GB failed:

ERROR: 1073741824 is outside the valid range for parameter work_mem (64 .. 2097151)
STATEMENT: SET work_mem = '1024GB';
psql:/media/truecrypt1/oss/postgresql/postgresql/../patches/array-agg.sql:20: ERROR: 1073741824 is outside the valid range for parameter work_mem (64 .. 2097151)

Maybe because of that, in the "large groups" test, the speedup is awesome: master: 16,819 ms; with patch: 1,720 ms. Looks like with master, postgres resorts to disk, but with the patch it fits in memory.

Note: I haven't tested the large dataset.

As expected, testing array_agg(anyarray), the performance is still the same, because the subcontext handling isn't implemented there (test script modified from Tomas', attached).

I implemented the subcontext checking in initArrayResultArr by changing the v3 patch like this:

+++ b/src/backend/utils/adt/arrayfuncs.c
@@ -4797,10 +4797,11 @@
 initArrayResultArr(Oid array_type, Oid element_type, MemoryContext rcontext,
 				   bool subcontext)
 {
 	ArrayBuildStateArr *astate;
-	MemoryContext arr_context;
+	MemoryContext arr_context = rcontext;	/* by default use the parent ctx */
 
 	/* Make a temporary context to hold all the junk */
-	arr_context = AllocSetContextCreate(rcontext,
+	if (subcontext)
+		arr_context = AllocSetContextCreate(rcontext,
 										"accumArrayResultArr",
 										ALLOCSET_DEFAULT_MINSIZE,
 										ALLOCSET_DEFAULT_INITSIZE,

Testing the performance, it gets the 12%~15% speedup. Good.
(patch attached)

Looking at the modification in the accumArrayResult* functions, I'm not really comfortable with:
1. Code that calls accumArrayResult* after explicitly calling initArrayResult* must always pass subcontext, but it has no effect.
2. All existing code that calls accumArrayResult must be changed.

Just an idea: why don't we minimize the change in the API like this:
1. Add the bool subcontext parameter only in the initArrayResult* functions, but not in accumArrayResult*.
2. Code that wants to avoid creating a subcontext must call initArrayResult* explicitly. Other code that calls accumArrayResult directly only needs changing in the call to makeArrayResult* (with the release=true parameter).

In places where we don't want to create a subcontext (as in array_agg_transfn), modify the code to use initArrayResult* before calling accumArrayResult*. What do you think?

As per your concern about calling initArrayResult* with subcontext=false, while makeArrayResult* with release=true: Also, it seems that using 'subcontext=false' and then 'release=true' would be a bug. Maybe it would be appropriate to store the 'subcontext' value into the ArrayBuildState and then throw an error if makeArrayResult* is called with (release=true && subcontext=false). Yes, I think we should do that to minimize unexpected coding errors.
In makeArrayResult*, I think it's better not to throw an error but to use an assertion:

Assert(release == false || astate->subcontext == true);

Regards,
--
Ali Akbar

array-agg-anyarray.sql Description: application/sql

diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 18ae318..48e66bf 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1271,13 +1271,13 @@ dblink_get_connections(PG_FUNCTION_ARGS)
 /* stash away current value */
 astate = accumArrayResult(astate,
                           CStringGetTextDatum(hentry->name),
-                          false, TEXTOID, CurrentMemoryContext);
+                          false, TEXTOID, CurrentMemoryContext, true);
 }
 }
 if (astate)
     PG_RETURN_ARRAYTYPE_P(makeArrayResult(astate,
-                                          CurrentMemoryContext));
+                                          CurrentMemoryContext, true));
 else
     PG_RETURN_NULL();
 }
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index c16b38e..bd5eb32 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -671,7 +671,7 @@ transformRelOptions(Datum oldOptions, List *defList, char *namspace,
 /* No match, so keep old option */
 astate = accumArrayResult(astate, oldoptions[i],
                           false, TEXTOID,
-                          CurrentMemoryContext);
+                          CurrentMemoryContext, true);
 }
 }
 }
@@ -758,12 +758,12 @@ transformRelOptions(Datum oldOptions, List *defList, char *namspace,
 astate =
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Hi!

First of all, thanks for the review - the insights and comments are spot-on. More comments below.

On 20.12.2014 09:26, Ali Akbar wrote:

2014-12-16 11:01 GMT+07:00 Ali Akbar the.ap...@gmail.com:
2014-12-16 10:47 GMT+07:00 Ali Akbar the.ap...@gmail.com:
2014-12-16 6:27 GMT+07:00 Tomas Vondra t...@fuzzy.cz:

Just fast-viewing the patch. The patch does not implement the check for not creating a new context in initArrayResultArr. I think we should implement it there too, for consistency (and to prevent future problems).

You're right that initArrayResultArr was missing the code deciding whether to create a subcontext or reuse the parent one, and the fix you proposed (i.e. reusing code from initArrayResult) is IMHO the right one.

Testing the performance with your query looks promising: the speedup is between 12% and 15%. Because I'm using a 32-bit system, setting work_mem to 1024GB failed: ERROR: 1073741824 is outside the valid range for parameter work_mem (64 .. 2097151)

Yes, that's pretty clearly because of the 2GB limit on 32-bit systems.

Maybe because of that, in the large-groups test the speedup is awesome: master: 16,819 ms, with patch: 1,720 ms

Probably. It's difficult to say without explain plans or something, but it's probably using a different plan (e.g. group aggregate).

It looks like master resorts to disk, but with the patch it fits in memory.

I'd bet that's not postgres, but the system using swap (because postgres allocates a lot of memory).

Note: I haven't tested the large dataset. As expected, when testing array_agg(anyarray) the performance is still the same, because the subcontext handling isn't implemented there (test script modified from Tomas', attached).
I implemented the subcontext check in initArrayResultArr by changing the v3 patch like this:

+++ b/src/backend/utils/adt/arrayfuncs.c
@@ -4797,10 +4797,11 @@
 initArrayResultArr(Oid array_type, Oid element_type, MemoryContext rcontext,
                    bool subcontext)
 {
     ArrayBuildStateArr *astate;
-    MemoryContext arr_context;
+    MemoryContext arr_context = rcontext;   /* by default use the parent ctx */

     /* Make a temporary context to hold all the junk */
-    arr_context = AllocSetContextCreate(rcontext,
+    if (subcontext)
+        arr_context = AllocSetContextCreate(rcontext,
                                         "accumArrayResultArr",
                                         ALLOCSET_DEFAULT_MINSIZE,
                                         ALLOCSET_DEFAULT_INITSIZE,

Testing the performance, it got the 12%~15% speedup. Good. (patch attached)

Nice, and it's consistent with my measurements on scalar values.

Looking at the modification in the accumArrayResult* functions, I'm not really comfortable with: 1. Code that calls accumArrayResult* after explicitly calling initArrayResult* must always pass subcontext, even though it has no effect. 2. All existing code that calls accumArrayResult must be changed.

Just an idea: why don't we minimize the API change like this: 1. Add the bool subcontext parameter only to the initArrayResult* functions, not to accumArrayResult*. 2. Code that wants to avoid creating a subcontext must call initArrayResult* explicitly. Other code that calls accumArrayResult directly only needs changing in the call to makeArrayResult* (with the release=true parameter). In places where we don't want a subcontext (as in array_agg_transfn), switch to calling initArrayResult* before accumArrayResult*. What do you think?

I think it's an interesting idea. I've been considering this before, when thinking about the best way to keep the calls to the various methods consistent (e.g. enforcing the use of release=true only with subcontexts).
What I ended up doing (see the v4 patch attached) is that I

(1) added a 'private_cxt' flag to the ArrayBuildState[Arr] struct, tracking whether there's a private memory context

(2) rolled back all the API changes, except for the initArray* methods (as you proposed)

This has the positive benefit that it allows checking the consistency of the calls - you can still do

initArrayResult(..., subcontext=false)
...
makeArrayResult(..., release=true)

but it won't reset the memory context, and with an assert-enabled build it will actually fail. Another positive benefit is that this won't break code unless it uses the new API. This is a
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Attached is v5 of the patch, fixing an error with releasing a shared memory context (invalid flag values in a few calls).

kind regards
Tomas Vondra

diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c
index d9faf20..9c97755 100644
--- a/src/backend/executor/nodeSubplan.c
+++ b/src/backend/executor/nodeSubplan.c
@@ -262,7 +262,7 @@ ExecScanSubPlan(SubPlanState *node,
 /* Initialize ArrayBuildStateAny in caller's context, if needed */
 if (subLinkType == ARRAY_SUBLINK)
     astate = initArrayResultAny(subplan->firstColType,
-                                CurrentMemoryContext);
+                                CurrentMemoryContext, true);
 /*
  * We are probably in a short-lived expression-evaluation context. Switch
@@ -964,7 +964,7 @@ ExecSetParamPlan(SubPlanState *node, ExprContext *econtext)
 /* Initialize ArrayBuildStateAny in caller's context, if needed */
 if (subLinkType == ARRAY_SUBLINK)
     astate = initArrayResultAny(subplan->firstColType,
-                                CurrentMemoryContext);
+                                CurrentMemoryContext, true);
 /*
  * Must switch to per-query memory context.
diff --git a/src/backend/utils/adt/array_userfuncs.c b/src/backend/utils/adt/array_userfuncs.c
index 50ea4d2..f434fdd 100644
--- a/src/backend/utils/adt/array_userfuncs.c
+++ b/src/backend/utils/adt/array_userfuncs.c
@@ -498,8 +498,13 @@ array_agg_transfn(PG_FUNCTION_ARGS)
     elog(ERROR, "array_agg_transfn called in non-aggregate context");
 }
-    state = PG_ARGISNULL(0) ? NULL : (ArrayBuildState *) PG_GETARG_POINTER(0);
+    if (PG_ARGISNULL(0))
+        state = initArrayResult(arg1_typeid, aggcontext, false);
+    else
+        state = (ArrayBuildState *) PG_GETARG_POINTER(0);
+
     elem = PG_ARGISNULL(1) ? (Datum) 0 : PG_GETARG_DATUM(1);
+
     state = accumArrayResult(state,
                              elem,
                              PG_ARGISNULL(1),
@@ -573,7 +578,22 @@ array_agg_array_transfn(PG_FUNCTION_ARGS)
     elog(ERROR, "array_agg_array_transfn called in non-aggregate context");
 }
-    state = PG_ARGISNULL(0) ? NULL : (ArrayBuildStateArr *) PG_GETARG_POINTER(0);
+
+    if (PG_ARGISNULL(0))
+    {
+        Oid element_type = get_element_type(arg1_typeid);
+
+        if (!OidIsValid(element_type))
+            ereport(ERROR,
+                    (errcode(ERRCODE_DATATYPE_MISMATCH),
+                     errmsg("data type %s is not an array type",
+                            format_type_be(arg1_typeid))));
+
+        state = initArrayResultArr(arg1_typeid, element_type, aggcontext, false);
+    }
+    else
+        state = (ArrayBuildStateArr *) PG_GETARG_POINTER(0);
+
     state = accumArrayResultArr(state,
                                 PG_GETARG_DATUM(1),
                                 PG_ARGISNULL(1),
diff --git a/src/backend/utils/adt/arrayfuncs.c b/src/backend/utils/adt/arrayfuncs.c
index 933c6b0..7a14a71 100644
--- a/src/backend/utils/adt/arrayfuncs.c
+++ b/src/backend/utils/adt/arrayfuncs.c
@@ -4617,22 +4617,24 @@ array_insert_slice(ArrayType *destArray,
  * were no elements. This is preferred if an empty array is what you want.
  */
 ArrayBuildState *
-initArrayResult(Oid element_type, MemoryContext rcontext)
+initArrayResult(Oid element_type, MemoryContext rcontext, bool subcontext)
 {
     ArrayBuildState *astate;
-    MemoryContext arr_context;
+    MemoryContext arr_context = rcontext;   /* by default use the parent ctx */

     /* Make a temporary context to hold all the junk */
-    arr_context = AllocSetContextCreate(rcontext,
-                                        "accumArrayResult",
-                                        ALLOCSET_DEFAULT_MINSIZE,
-                                        ALLOCSET_DEFAULT_INITSIZE,
-                                        ALLOCSET_DEFAULT_MAXSIZE);
+    if (subcontext)
+        arr_context = AllocSetContextCreate(rcontext,
+                                            "accumArrayResult",
+                                            ALLOCSET_DEFAULT_MINSIZE,
+                                            ALLOCSET_DEFAULT_INITSIZE,
+                                            ALLOCSET_DEFAULT_MAXSIZE);

     astate = (ArrayBuildState *)
         MemoryContextAlloc(arr_context, sizeof(ArrayBuildState));
     astate->mcontext = arr_context;
-    astate->alen = 64;          /* arbitrary starting array size */
+    astate->private_cxt = subcontext;
+    astate->alen = 8;           /* arbitrary starting array size */
     astate->dvalues = (Datum *)
         MemoryContextAlloc(arr_context, astate->alen * sizeof(Datum));
     astate->dnulls = (bool *)
@@ -4666,7 +4668,7 @@ accumArrayResult(ArrayBuildState *astate,
     if (astate == NULL)
     {
         /* First time through --- initialize */
-        astate = initArrayResult(element_type, rcontext);
+        astate = initArrayResult(element_type, rcontext, true);
     }
     else
     {
@@ -4768,6 +4770,9 @@ makeMdArrayResult(ArrayBuildState *astate,
     MemoryContextSwitchTo(oldcontext);

+    /* we can only release the context if it's a private one. */
+    // Assert(!(release && !astate->private_cxt));
+
     /* Clean up all the junk */
     if (release)
         MemoryContextDelete(astate->mcontext);
@@ -4791,22 +4796,25 @@ makeMdArrayResult(ArrayBuildState *astate,
  * rcontext is where to keep working state
  */
 ArrayBuildStateArr *
-initArrayResultArr(Oid array_type, Oid element_type, MemoryContext rcontext)
+initArrayResultArr(Oid array_type, Oid element_type, MemoryContext rcontext,
+                   bool subcontext)
 {
     ArrayBuildStateArr *astate;
-    MemoryContext arr_context;
+    MemoryContext arr_context = rcontext;   /* by
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Tomas Vondra wrote:

Attached is v5 of the patch, fixing an error with releasing a shared memory context (invalid flag values in a few calls).

The functions that gain a new argument should get their comment updated, to explain what the new argument is for.

Also, what is it with this hunk?

@@ -4768,6 +4770,9 @@ makeMdArrayResult(ArrayBuildState *astate,
     MemoryContextSwitchTo(oldcontext);

+    /* we can only release the context if it's a private one. */
+    // Assert(!(release && !astate->private_cxt));
+
     /* Clean up all the junk */
     if (release)
         MemoryContextDelete(astate->mcontext);

--
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
2014-12-16 6:27 GMT+07:00 Tomas Vondra t...@fuzzy.cz:

On 15.12.2014 22:35, Jeff Janes wrote:
On Sat, Nov 29, 2014 at 8:57 AM, Tomas Vondra t...@fuzzy.cz wrote:

Hi, Attached is v2 of the patch lowering array_agg memory requirements. Hopefully it addresses the issues mentioned by TL in this thread (not handling some of the callers appropriately etc.).

Hi Tomas, When configured --with-libxml I get compilation errors:

xml.c: In function 'xml_xpathobjtoxmlarray':
xml.c:3684: error: too few arguments to function 'accumArrayResult'
xml.c:3721: error: too few arguments to function 'accumArrayResult'
xml.c: In function 'xpath':
xml.c:3933: error: too few arguments to function 'initArrayResult'
xml.c:3936: error: too few arguments to function 'makeArrayResult'

And when configured --with-perl, I get:

plperl.c: In function 'array_to_datum_internal':
plperl.c:1196: error: too few arguments to function 'accumArrayResult'
plperl.c: In function 'plperl_array_to_datum':
plperl.c:1223: error: too few arguments to function 'initArrayResult'

Cheers,

Thanks, attached is a version that fixes this.

Just fast-viewing the patch. The patch does not implement the check for not creating a new context in initArrayResultArr. I think we should implement it there too, for consistency (and preventing future problems).

Regards,
--
Ali Akbar
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
2014-12-16 10:47 GMT+07:00 Ali Akbar the.ap...@gmail.com:

2014-12-16 6:27 GMT+07:00 Tomas Vondra t...@fuzzy.cz:

On 15.12.2014 22:35, Jeff Janes wrote:
On Sat, Nov 29, 2014 at 8:57 AM, Tomas Vondra t...@fuzzy.cz wrote:

Hi, Attached is v2 of the patch lowering array_agg memory requirements. Hopefully it addresses the issues mentioned by TL in this thread (not handling some of the callers appropriately etc.).

Hi Tomas, When configured --with-libxml I get compilation errors:

xml.c: In function 'xml_xpathobjtoxmlarray':
xml.c:3684: error: too few arguments to function 'accumArrayResult'
xml.c:3721: error: too few arguments to function 'accumArrayResult'
xml.c: In function 'xpath':
xml.c:3933: error: too few arguments to function 'initArrayResult'
xml.c:3936: error: too few arguments to function 'makeArrayResult'

And when configured --with-perl, I get:

plperl.c: In function 'array_to_datum_internal':
plperl.c:1196: error: too few arguments to function 'accumArrayResult'
plperl.c: In function 'plperl_array_to_datum':
plperl.c:1223: error: too few arguments to function 'initArrayResult'

Cheers,

Thanks, attached is a version that fixes this.

Just fast-viewing the patch. The patch does not implement the check for not creating a new context in initArrayResultArr. I think we should implement it there too, for consistency (and preventing future problems).

Looking at the modification in the accumArrayResult* functions, I'm not really comfortable with: 1. Code that calls accumArrayResult* after explicitly calling initArrayResult* must always pass subcontext, even though it has no effect. 2. All existing code that calls accumArrayResult must be changed.

Just an idea: why don't we minimize the API change like this: 1. Add the bool subcontext parameter only to the initArrayResult* functions, not to accumArrayResult*. 2. Code that wants to avoid creating a subcontext must call initArrayResult* explicitly. Other code that calls accumArrayResult directly only needs changing in the call to makeArrayResult* (with the release=true parameter). In places where we don't want a subcontext (as in array_agg_transfn), modify it to use initArrayResult* before calling accumArrayResult*. What do you think?

Regards,
--
Ali Akbar
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Hi, Attached is v2 of the patch lowering array_agg memory requirements. Hopefully it addresses the issues mentioned by TL in this thread (not handling some of the callers appropriately etc.).

The v2 of the patch does this:

* adds a 'subcontext' flag to the initArrayResult* methods

If it's 'true', a separate context is created for the ArrayBuildState instance; otherwise it's built within the parent context. Currently, only the array_agg_* functions pass 'subcontext=false', so that the array_agg() aggregate does not create a separate context for each group. All other callers use 'true' and thus keep the original behavior (this includes ARRAY_SUBLINK subplans, and various other places building arrays incrementally).

* adds a 'release' flag to makeArrayResult

This is mostly to make it consistent with makeArrayResultArr and makeArrayResultAny. All current callers use 'release=true'. Also, it seems that using 'subcontext=false' and then 'release=true' would be a bug. Maybe it would be appropriate to store the 'subcontext' value into the ArrayBuildState and then throw an error if makeArrayResult* is called with (release=true && subcontext=false).

* modifies all the places calling those functions

* decreases the number of preallocated elements to 8

The original value was 64 elements (512B); the new value is 8 elements (64B). (Not counting the 'nulls' array.) More about this later ...

Now, some performance measurements - attached is a simple SQL script that executes a few GROUP BY queries with various numbers of groups and group elements. I ran the tests with two dataset sizes:

small:
a) 1M groups, 1 item per group
b) 100k groups, 16 items per group
c) 100k groups, 64 items per group
d) 10k groups, 1024 items per group

large:
a) 10M groups, 1 item per group
b) 1M groups, 16 items per group
c) 1M groups, 64 items per group
d) 100k groups, 1024 items per group

So essentially the 'large' dataset uses 10x the number of groups.
The results (average of 5 runs, in ms) look like this:

small:

 test | master | patched | diff
------|--------|---------|------
  a   |   1419 |     834 | -41%
  b   |    595 |     498 | -16%
  c   |   2061 |    1832 | -11%
  d   |   2197 |    1957 | -11%

large:

 test | master | patched | diff
------|--------|---------|------
  a   |    OOM |    9144 |  n/a
  b   |   7366 |    6257 | -15%
  c   |  29899 |   22940 | -23%
  d   |  35456 |   31347 | -12%

So it seems to give a solid speedup across the whole test suite - I'm yet to find a case where it's actually slower than what we have now. The test cases (b) and (c) were actually created with this goal, because both should be OK with the original array size (64 elements), but with the new size they require a few repalloc() calls. But even those are much faster. This is most likely thanks to removing the AllocSetContextCreate call and sharing freelists across groups (although the test cases don't seem extremely suitable for that, as all the groups grow in parallel).

I even tried to bump the initial array size back to 64 elements, but the performance actually decreased a bit for some reason. I have no idea why this happens ...

The test script is attached - tweak the 'size' variable for different dataset sizes. The (insane) work_mem sizes are used to force a hash aggregate - clearly I don't have 1TB of RAM.
regards Tomas

diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 18ae318..48e66bf 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1271,13 +1271,13 @@ dblink_get_connections(PG_FUNCTION_ARGS)
 /* stash away current value */
 astate = accumArrayResult(astate,
                           CStringGetTextDatum(hentry->name),
-                          false, TEXTOID, CurrentMemoryContext);
+                          false, TEXTOID, CurrentMemoryContext, true);
 }
 }
 if (astate)
     PG_RETURN_ARRAYTYPE_P(makeArrayResult(astate,
-                                          CurrentMemoryContext));
+                                          CurrentMemoryContext, true));
 else
     PG_RETURN_NULL();
 }
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index c16b38e..bd5eb32 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -671,7 +671,7 @@ transformRelOptions(Datum oldOptions, List *defList, char *namspace,
 /* No match, so keep old option */
 astate = accumArrayResult(astate, oldoptions[i],
                           false, TEXTOID,
-                          CurrentMemoryContext);
+                          CurrentMemoryContext, true);
 }
 }
 }
@@ -758,12 +758,12 @@ transformRelOptions(Datum oldOptions, List *defList, char *namspace,
 astate = accumArrayResult(astate,
                           PointerGetDatum(t),
                           false, TEXTOID,
-                          CurrentMemoryContext);
+                          CurrentMemoryContext, true);
 }
 }
 if (astate)
-    result = makeArrayResult(astate, CurrentMemoryContext);
+    result = makeArrayResult(astate, CurrentMemoryContext, true);
 else
     result = (Datum) 0;
diff --git
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On 31.3.2014 21:04, Robert Haas wrote:
On Thu, Mar 27, 2014 at 10:00 PM, Tomas Vondra t...@fuzzy.cz wrote:

The patch also does one more thing - it changes how the arrays (in the aggregate state) grow. Originally it worked like this:

/* initial size */
astate->alen = 64;

/* when full, grow exponentially */
if (astate->nelems >= astate->alen)
    astate->alen *= 2;

so the array length would grow like this: 64 -> 128 -> 256 -> 512 ... (note we're talking about elements, not bytes, so with 32-bit integers it's actually 256B -> 512B -> 1024B -> ...).

While I do understand the point of this (minimizing palloc overhead), I find this pretty dangerous, especially in the case of array_agg(). I've modified the growth like this:

/* initial size */
astate->alen = 4;

/* when full, grow linearly */
if (astate->nelems >= astate->alen)
    astate->alen += 4;

I admit that might be a bit too aggressive, and maybe there's a better way to do this - with a better balance between safety and speed. I was thinking about something like this:

/* initial size */
astate->alen = 4;

/* when full, grow exponentially up to a cap */
if (astate->nelems >= astate->alen)
{
    if (astate->alen < 128)
        astate->alen *= 2;
    else
        astate->alen += 128;
}

i.e. initial size with exponential growth, but capped at 128 elements.

So I think this kind of thing is very sensible, but the last time I suggested something similar, I got told no:

http://www.postgresql.org/message-id/caeylb_wlght7yjlare9ppert5rkd5zjbb15te+kpgejgqko...@mail.gmail.com

But I think you're right and the objections previously raised are wrong. I suspect that the point at which we should stop doubling is higher than 128 elements, because that's only 8kB, which really isn't that big - and the idea that the resizing overhead takes only amortized constant time is surely appealing. But I still think that doubling *forever* is a bad idea, here and there. The fact that we've written the code that way in lots of places doesn't make it the right algorithm.
I've been thinking about it a bit more and maybe the doubling is not that bad an idea, after all. What I'd like to see is a solution that wastes less than some known fraction of the allocated memory, and apparently that's what doubling does ...

Let's assume we have many buffers (arrays in array_agg), allocated in this manner. Let's assume the buffers are independent, i.e. the doubling is not somehow synchronized across the buffers. Then at an arbitrary time the buffers should be ~75% full on average. There will be buffers that were just doubled (50% full), buffers that will be doubled soon (100% full), and buffers somewhere in between, but on average the buffers should be 75% full.

That means we're wasting 25% of the memory on average, which seems quite acceptable to me. We could probably use a different growth rate (say 1.5x, resulting in ~17% of the memory being wasted), but I don't see this as the main problem (and I won't fight for this part of the array_agg patch).

The current array_agg however violates some of the assumptions mentioned above, because it

(1) pre-allocates a quite large number of items (64) at the beginning, resulting in ~98% of the memory being wasted initially

(2) allocates one memory context per group, with 8kB initial size, so you're actually wasting ~99.999% of the memory

(3) thanks to the dedicated memory contexts, the doubling is pretty much pointless up until you cross the 8kB boundary

IMNSHO these are the issues we really should fix - by lowering the initial element count (64 -> 4) and using a single memory context.

regards Tomas
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
How much of this problem can be attributed to the fact that repalloc has to copy the data from the old array into the new one? If it's large, perhaps we could solve it by replicating the trick we use for InvalidationChunk. It'd be a bit messy, but the mess would be pretty well contained, I think.

--
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training Services
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Tomas Vondra t...@fuzzy.cz writes:

I've been thinking about it a bit more and maybe the doubling is not that bad an idea, after all.

It is not. There's a reason why that's our standard behavior.

The current array_agg however violates some of the assumptions mentioned above, because it (1) pre-allocates a quite large number of items (64) at the beginning, resulting in ~98% of the memory being wasted initially (2) allocates one memory context per group, with 8kB initial size, so you're actually wasting ~99.999% of the memory (3) thanks to the dedicated memory contexts, the doubling is pretty much pointless up until you cross the 8kB boundary. IMNSHO these are the issues we really should fix - by lowering the initial element count (64 -> 4) and using a single memory context.

The real issue here is that all those decisions are perfectly reasonable if you expect that a large number of values will get aggregated --- and even if you don't expect that, they're cheap insurance in simple cases. It only gets to be a problem if you have a lot of concurrent executions of array_agg, such as in a grouped-aggregate query.

You're essentially arguing that in the grouped-aggregate case, it's better to optimize on the assumption that only a very small number of values will get aggregated (per hash table entry) --- which is possibly reasonable, but the argument that it's okay to pessimize the behavior for other cases seems pretty flimsy from here.

Actually, though, the patch as given outright breaks things for both the grouped and ungrouped cases, because the aggregate no longer releases memory when it's done. That's going to result in memory bloat not savings, in any situation where the aggregate is executed repeatedly. I think a patch that stood a chance of getting committed would need to detect whether the aggregate was being called in simple or grouped contexts, and apply different behaviors in the two cases.
And you can't just remove the sub-context without providing some substitute cleanup mechanism. Possibly you could keep the context but give it some much-more-miserly allocation parameters in the grouped case.

regards, tom lane
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On 1.4.2014 19:08, Tom Lane wrote:

Tomas Vondra t...@fuzzy.cz writes: I've been thinking about it a bit more and maybe the doubling is not that bad an idea, after all.

It is not. There's a reason why that's our standard behavior.

The current array_agg however violates some of the assumptions mentioned above, because it (1) pre-allocates a quite large number of items (64) at the beginning, resulting in ~98% of the memory being wasted initially (2) allocates one memory context per group, with 8kB initial size, so you're actually wasting ~99.999% of the memory (3) thanks to the dedicated memory contexts, the doubling is pretty much pointless up until you cross the 8kB boundary. IMNSHO these are the issues we really should fix - by lowering the initial element count (64 -> 4) and using a single memory context.

The real issue here is that all those decisions are perfectly reasonable if you expect that a large number of values will get aggregated --- and even if you don't expect that, they're cheap insurance in simple cases.

Yes, if you expect a large number of values it's perfectly valid. But what if those assumptions are faulty? Is it OK to fail because of OOM even for trivial queries breaking those assumptions? I'd like to improve that and make this work without impacting the queries that match the assumptions.

It only gets to be a problem if you have a lot of concurrent executions of array_agg, such as in a grouped-aggregate query. You're essentially arguing that in the grouped-aggregate case, it's better to optimize on the assumption that only a very small number of values will get aggregated (per hash table entry) --- which is possibly reasonable, but the argument that it's okay to pessimize the behavior for other cases seems pretty flimsy from here.

I'm not saying it's okay to pessimize the behavior of other cases. I admit decreasing the initial size from 64 to only 4 items may be too aggressive - let's measure the difference and tweak the number accordingly.
Heck, even 64 items is way lower than the 8kB utilized by each per-group memory context right now.

Actually, though, the patch as given outright breaks things for both the grouped and ungrouped cases, because the aggregate no longer releases memory when it's done. That's going to result in memory bloat not savings, in any situation where the aggregate is executed repeatedly.

Really? Can you provide a query for which the current and patched code behave differently? Looking at array_agg_finalfn (which is the final function for array_agg), I see it does this:

/*
 * Make the result. We cannot release the ArrayBuildState because
 * sometimes aggregate final functions are re-executed. Rather, it
 * is nodeAgg.c's responsibility to reset the aggcontext when it's
 * safe to do so.
 */
result = makeMdArrayResult(state, 1, dims, lbs, CurrentMemoryContext, false);

i.e. it sets release=false. So I fail to see how the current code behaves differently from the patch. If it wasn't releasing the memory before, it's not releasing it now either. In both cases the memory gets released when the aggcontext gets released in nodeAgg.c (as explained by the comment in the code).

However, after looking at the code now, I think it's actually wrong to remove the MemoryContextDelete from makeMdArrayResult(). It makes no difference to array_agg (which sets release=false anyway), but it makes a difference to functions calling makeArrayResult(), as that uses release=true. That however is not called by aggregate functions, but from regexp_split_to_array, xpath and subplans.

I think a patch that stood a chance of getting committed would need to detect whether the aggregate was being called in simple or grouped contexts, and apply different behaviors in the two cases. And you can't just remove the sub-context without providing some substitute cleanup mechanism. Possibly you could keep the context but give it some much-more-miserly allocation parameters in the grouped case.
I don't think the patch removes any cleanup mechanism (see above), but maybe I'm wrong. Yes, tweaking the parameters depending on the aggregate - whether it's simple or grouped, or maybe an estimated number of elements in a group - seems like a good idea.

regards Tomas
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
Tomas Vondra t...@fuzzy.cz writes:

On 1.4.2014 19:08, Tom Lane wrote: Actually, though, the patch as given outright breaks things for both the grouped and ungrouped cases, because the aggregate no longer releases memory when it's done. That's going to result in memory bloat not savings, in any situation where the aggregate is executed repeatedly.

Looking at array_agg_finalfn (which is the final function for array_agg), I see it does this: /* * Make the result. We cannot release the ArrayBuildState because * sometimes aggregate final functions are re-executed. Rather, it * is nodeAgg.c's responsibility to reset the aggcontext when it's * safe to do so. */ result = makeMdArrayResult(state, 1, dims, lbs, CurrentMemoryContext, false); i.e. it sets release=false. So I fail to see how the current code behaves differently from the patch?

You're conveniently ignoring the callers that set release=true. Reverse engineering a query that exhibits memory bloat is left as an exercise for the reader (but in a quick look, I'll bet ARRAY_SUBLINK subplans are one locus for problems).

It's possible that it'd work to use a subcontext only if release=true; I've not dug through the code enough to convince myself of that.

regards, tom lane
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On 1.4.2014 20:56, Tom Lane wrote:
> You're conveniently ignoring the callers that set release=true.
> Reverse engineering a query that exhibits memory bloat is left as an
> exercise for the reader (but in a quick look, I'll bet ARRAY_SUBLINK
> subplans are one locus for problems).

No, I'm not. I explicitly mentioned those cases (although you're right
that I concentrated mostly on the cases with release=false, because of
array_agg).

> It's possible that it'd work to use a subcontext only if
> release=true; I've not dug through the code enough to convince myself
> of that.

Maybe, though 'release' is not available in makeArrayResult(), which
is where the memory context needs to be decided. So all the callers
would need to be modified to supply this parameter. But there are only
~15 places where makeArrayResult is called.

regards
Tomas
Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg
On Thu, Mar 27, 2014 at 10:00 PM, Tomas Vondra t...@fuzzy.cz wrote:
> The patch also does one more thing - it changes how the arrays (in
> the aggregate state) grow. Originally it worked like this
>
>     /* initial size */
>     astate->alen = 64;
>
>     /* when full, grow exponentially */
>     if (astate->nelems >= astate->alen)
>         astate->alen *= 2;
>
> so the array length would grow like this 64 -> 128 -> 256 -> 512 ...
> (note we're talking about elements, not bytes, so with 32-bit
> integers it's actually 256B -> 512B -> 1024B -> ...).
>
> While I do understand the point of this (minimizing palloc overhead),
> I find this pretty dangerous, especially in case of array_agg(). I've
> modified the growth like this:
>
>     /* initial size */
>     astate->alen = 4;
>
>     /* when full, grow linearly */
>     if (astate->nelems >= astate->alen)
>         astate->alen += 4;
>
> I admit that might be a bit too aggressive, and maybe there's a
> better way to do this - with a better balance between safety and
> speed. I was thinking about something like this:
>
>     /* initial size */
>     astate->alen = 4;
>
>     /* when full, double up to a threshold, then grow linearly */
>     if (astate->nelems >= astate->alen)
>     {
>         if (astate->alen < 128)
>             astate->alen *= 2;
>         else
>             astate->alen += 128;
>     }
>
> i.e. initial size with exponential growth, but capped at 128
> elements.

So I think this kind of thing is very sensible, but the last time I
suggested something similar, I got told no:

http://www.postgresql.org/message-id/caeylb_wlght7yjlare9ppert5rkd5zjbb15te+kpgejgqko...@mail.gmail.com

But I think you're right and the objections previously raised are
wrong. I suspect that the point at which we should stop doubling is
higher than 128 elements, because that's only 8kB, which really isn't
that big - and the idea that the resizing overhead takes only
amortized constant time is surely appealing. But I still think that
doubling *forever* is a bad idea, here and elsewhere. The fact that
we've written the code that way in lots of places doesn't make it the
right algorithm.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company