Re: Optimize planner memory consumption for huge arrays

2024-02-26 Thread Tom Lane
Tomas Vondra  writes:
> On 2/25/24 17:29, Tom Lane wrote:
>> Yeah.  Also: once we had such an idea, it'd be very tempting to apply
>> it to other frequently-reset contexts like the executor's per-tuple
>> evaluation contexts.  I'm not quite prepared to argue that
>> MemoryContextReset should just act that way all the time ... but
>> it's sure interesting to think about.

> Do the context resets consume enough time to make this measurable?

I think they do.  We previously invented the "isReset" mechanism to
eliminate work in the case of exactly zero allocations since the
last reset, and that made a very measurable difference at the time,
even though you'd think the amount of work saved would be negligible.
This idea seems like it might be able to supersede that one and win
in a larger fraction of cases.
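
To make that concrete, here is a toy model of the combined rule (invented
names, not mcxt.c code): a region keeps one "keeper" block forever, chains
extra blocks onto it, and skips reset work unless it can actually return
memory.

#include <stdlib.h>

typedef struct Block
{
    struct Block *next;         /* newer-to-older chain */
    size_t      used;
    char        data[8192];
} Block;

typedef struct Region
{
    Block      *keeper;         /* permanent block, reused across resets */
    Block      *head;           /* newest block; == keeper while "small" */
    int         is_reset;       /* nothing allocated since last reset */
} Region;

static void
region_reset_if_big(Region *r)
{
    /*
     * is_reset is the existing zero-allocations fast path; head == keeper
     * is the proposed "if big" rule: with only the keeper block there is
     * no memory to give back, so skip the work entirely.
     */
    if (r->is_reset || r->head == r->keeper)
        return;

    while (r->head != r->keeper)
    {
        Block      *older = r->head->next;

        free(r->head);
        r->head = older;
    }
    r->keeper->used = 0;
    r->is_reset = 1;
}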

> +1 to disable this optimization in assert-enabled builds. I guess we'd
> invent a new constant to disable it, and tie it to USE_ASSERT_CHECKING
> (similar to CLOBBER_FREED_MEMORY, for example).

> Thinking about CLOBBER_FREED_MEMORY, could it be useful to still clobber
> the memory, even if we don't actually reset the context?

I think in any case where we were trying to support debugging, we'd
just disable the optimization, so that reset always resets.

regards, tom lane




Re: Optimize planner memory consumption for huge arrays

2024-02-26 Thread Tomas Vondra



On 2/25/24 17:29, Tom Lane wrote:
> Tomas Vondra  writes:
>> On 2/25/24 00:07, Tom Lane wrote:
>>> ...  I'm not sure if it'd be worth extending
>>> the mcxt.c API to provide something like "MemoryContextResetIfBig",
>>> with some internal rule that would be cheap to apply like "reset
>>> if we have any non-keeper blocks".
> 
>> I think MemoryContextResetIfBig is an interesting idea - I think a good
>> way to define "big" would be "has multiple blocks", because that's the
>> only case where we can actually reclaim some memory.
> 
> Yeah.  Also: once we had such an idea, it'd be very tempting to apply
> it to other frequently-reset contexts like the executor's per-tuple
> evaluation contexts.  I'm not quite prepared to argue that
> MemoryContextReset should just act that way all the time ... but
> it's sure interesting to think about.
> 

Do the context resets consume enough time to make this measurable? I may
be wrong, but I'd guess it's not measurable. In which case, what would
be the benefit?

> Another question is whether this wouldn't hurt debugging, in that
> dangling-pointer bugs would become harder to catch.  We'd certainly
> want to turn off the optimization in USE_VALGRIND builds, and maybe
> we just shouldn't do it at all if USE_ASSERT_CHECKING.
> 
>   regards, tom lane

+1 to disable this optimization in assert-enabled builds. I guess we'd
invent a new constant to disable it, and tie it to USE_ASSERT_CHECKING
(similar to CLOBBER_FREED_MEMORY, for example).

Thinking about CLOBBER_FREED_MEMORY, could it be useful to still clobber
the memory, even if we don't actually reset the context?


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Optimize planner memory consumption for huge arrays

2024-02-25 Thread Tom Lane
Tomas Vondra  writes:
> On 2/25/24 00:07, Tom Lane wrote:
>> ...  I'm not sure if it'd be worth extending
>> the mcxt.c API to provide something like "MemoryContextResetIfBig",
>> with some internal rule that would be cheap to apply like "reset
>> if we have any non-keeper blocks".

> I think MemoryContextResetIfBig is an interesting idea - I think a good
> way to define "big" would be "has multiple blocks", because that's the
> only case where we can actually reclaim some memory.

Yeah.  Also: once we had such an idea, it'd be very tempting to apply
it to other frequently-reset contexts like the executor's per-tuple
evaluation contexts.  I'm not quite prepared to argue that
MemoryContextReset should just act that way all the time ... but
it's sure interesting to think about.

Another question is whether this wouldn't hurt debugging, in that
dangling-pointer bugs would become harder to catch.  We'd certainly
want to turn off the optimization in USE_VALGRIND builds, and maybe
we just shouldn't do it at all if USE_ASSERT_CHECKING.

regards, tom lane




Re: Optimize planner memory consumption for huge arrays

2024-02-25 Thread Tomas Vondra
On 2/25/24 00:07, Tom Lane wrote:
> I wrote:
>> Tomas Vondra  writes:
>>> On 2/19/24 16:45, Tom Lane wrote:
>>>> Tomas Vondra  writes:
>>>>> For example, I don't think we expect selectivity functions to allocate
>>>>> long-lived objects, right? So maybe we could run them in a dedicated
>>>>> memory context, and reset it aggressively (after each call).
> 
>>>> That could eliminate a whole lot of potential leaks.  I'm not sure
>>>> though how much it moves the needle in terms of overall planner
>>>> memory consumption.
> 
>>> It was an ad hoc thought, inspired by the issue at hand. Maybe it would
>>> be possible to find similar "boundaries" in other parts of the planner.
> 
>> Here's a quick and probably-incomplete implementation of that idea.
>> I've not tried to study its effects on memory consumption, just made
>> sure it passes check-world.
> 
> I spent a bit more time on this patch.  One thing I was concerned
> about was whether it causes any noticeable slowdown, and it seems that
> it does: testing with "pgbench -S" I observe perhaps 1% slowdown.
> However, we don't necessarily need to reset the temp context after
> every single usage.  I experimented with resetting it every tenth
> time, and that got me from 1% slower than HEAD to 1% faster.

Isn't 1% well within the usual noise and/or the differences that can be
caused simply by slightly different alignment of the binary? I'd treat
this as "same performance" ...

> Of course "every tenth time" is very ad hoc.  I wondered if we could
> make it somehow conditional on how much memory had been consumed
> in the temp context, but there doesn't seem to be any cheap way
> to get that.  Applying something like MemoryContextMemConsumed
> would surely be a loser.  I'm not sure if it'd be worth extending
> the mcxt.c API to provide something like "MemoryContextResetIfBig",
> with some internal rule that would be cheap to apply like "reset
> if we have any non-keeper blocks".

Wouldn't it be sufficient to look simply at MemoryContextMemAllocated?
That's certainly way cheaper than MemoryContextStatsInternal, especially
if the context tree is shallow (which I think we certainly expect here).
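
As a sketch of what I mean (untested; the 64kB threshold is only an
arbitrary placeholder, not a tuned value):

if (MemoryContextMemAllocated(root->planner_tmp_cxt, false) > 64 * 1024)
    MemoryContextReset(root->planner_tmp_cxt);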

I think MemoryContextResetIfBig is an interesting idea - I think a good
way to define "big" would be "has multiple blocks", because that's the
only case where we can actually reclaim some memory.

> 
> I also looked into whether it really does reduce overall memory
> consumption noticeably, by collecting stats about planner memory
> consumption during the core regression tests.  The answer is that
> it barely helps.  I see the average used space across all planner
> invocations drop from 23344 bytes to 23220, and the worst-case
> numbers hardly move at all.  So that's a little discouraging.
> But of course the regression tests prefer not to deal in very
> large/expensive test cases, so maybe it's not surprising that
> I don't see much win in this test.
> 

I'm not really surprised by this - I think you're right that most of our
selectivity functions either don't do memory-expensive stuff, or we
don't have such corner cases in our regression tests. Or at least not to
the extent needed to move the overall average, so we'd need to look at
individual cases allocating quite a bit of memory.

But I think that's fine - I see this as a safety measure, not something
that'd improve the "good" cases.

> Anyway, 0001 attached is a cleaned-up patch with the every-tenth-
> time rule, and 0002 (not meant for commit) is the quick and
> dirty instrumentation patch I used for collecting usage stats.
> 
> Even though this seems of only edge-case value, I'd much prefer
> to do this than the sort of ad-hoc patching initially proposed
> in this thread.
> 

+1 to that, it seems like a more principled approach.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Optimize planner memory consumption for huge arrays

2024-02-24 Thread Tom Lane
I wrote:
> Tomas Vondra  writes:
>> On 2/19/24 16:45, Tom Lane wrote:
>>> Tomas Vondra  writes:
>>>> For example, I don't think we expect selectivity functions to allocate
>>>> long-lived objects, right? So maybe we could run them in a dedicated
>>>> memory context, and reset it aggressively (after each call).

>>> That could eliminate a whole lot of potential leaks.  I'm not sure 
>>> though how much it moves the needle in terms of overall planner
>>> memory consumption.

>> It was an ad hoc thought, inspired by the issue at hand. Maybe it would
>> be possible to find similar "boundaries" in other parts of the planner.

> Here's a quick and probably-incomplete implementation of that idea.
> I've not tried to study its effects on memory consumption, just made
> sure it passes check-world.

I spent a bit more time on this patch.  One thing I was concerned
about was whether it causes any noticeable slowdown, and it seems that
it does: testing with "pgbench -S" I observe perhaps 1% slowdown.
However, we don't necessarily need to reset the temp context after
every single usage.  I experimented with resetting it every tenth
time, and that got me from 1% slower than HEAD to 1% faster.  Of
course "every tenth time" is very ad hoc.  I wondered if we could
make it somehow conditional on how much memory had been consumed
in the temp context, but there doesn't seem to be any cheap way
to get that.  Applying something like MemoryContextMemConsumed
would surely be a loser.  I'm not sure if it'd be worth extending
the mcxt.c API to provide something like "MemoryContextResetIfBig",
with some internal rule that would be cheap to apply like "reset
if we have any non-keeper blocks".

I also looked into whether it really does reduce overall memory
consumption noticeably, by collecting stats about planner memory
consumption during the core regression tests.  The answer is that
it barely helps.  I see the average used space across all planner
invocations drop from 23344 bytes to 23220, and the worst-case
numbers hardly move at all.  So that's a little discouraging.
But of course the regression tests prefer not to deal in very
large/expensive test cases, so maybe it's not surprising that
I don't see much win in this test.

Anyway, 0001 attached is a cleaned-up patch with the every-tenth-
time rule, and 0002 (not meant for commit) is the quick and
dirty instrumentation patch I used for collecting usage stats.

Even though this seems of only edge-case value, I'd much prefer
to do this than the sort of ad-hoc patching initially proposed
in this thread.

regards, tom lane

diff --git a/src/backend/optimizer/path/clausesel.c b/src/backend/optimizer/path/clausesel.c
index c949dc1866..4678c778b0 100644
--- a/src/backend/optimizer/path/clausesel.c
+++ b/src/backend/optimizer/path/clausesel.c
@@ -24,6 +24,7 @@
 #include "statistics/statistics.h"
 #include "utils/fmgroids.h"
 #include "utils/lsyscache.h"
+#include "utils/memutils.h"
 #include "utils/selfuncs.h"
 
 /*
@@ -693,6 +694,7 @@ clause_selectivity_ext(PlannerInfo *root,
 	Selectivity s1 = 0.5;		/* default for any unhandled clause type */
 	RestrictInfo *rinfo = NULL;
 	bool		cacheable = false;
+	MemoryContext saved_cxt;
 
 	if (clause == NULL)			/* can this still happen? */
 		return s1;
@@ -756,6 +758,21 @@ clause_selectivity_ext(PlannerInfo *root,
 			clause = (Node *) rinfo->clause;
 	}
 
+	/*
+	 * Run all the remaining work in the short-lived planner_tmp_cxt, which
+	 * we'll reset at the bottom.  This allows selectivity-related code to not
+	 * be too concerned about leaking memory.
+	 */
+	saved_cxt = MemoryContextSwitchTo(root->glob->planner_tmp_cxt);
+
+	/*
+	 * This function can be called recursively, in which case we'd better not
+	 * reset planner_tmp_cxt until we exit the topmost level.  Use of
+	 * planner_tmp_cxt_depth also makes it safe for other places to use and
+	 * reset planner_tmp_cxt in the same fashion.
+	 */
+	root->glob->planner_tmp_cxt_depth++;
+
 	if (IsA(clause, Var))
 	{
 		Var		   *var = (Var *) clause;
@@ -967,6 +984,20 @@ clause_selectivity_ext(PlannerInfo *root,
 			rinfo->outer_selec = s1;
 	}
 
+	/* Exit and optionally clean up the planner_tmp_cxt */
+	MemoryContextSwitchTo(saved_cxt);
+	root->glob->planner_tmp_cxt_usage++;
+	Assert(root->glob->planner_tmp_cxt_depth > 0);
+	if (--root->glob->planner_tmp_cxt_depth == 0)
+	{
+		/* It's safe to reset the tmp context, but do we want to? */
+		if (root->glob->planner_tmp_cxt_usage >= PLANNER_TMP_CXT_USAGE_RESET)
+		{
+			MemoryContextReset(root->glob->planner_tmp_cxt);
+			root->glob->planner_tmp_cxt_usage = 0;
+		}
+	}
+
 #ifdef SELECTIVITY_DEBUG
 	elog(DEBUG4, "clause_selectivity: s1 %f", s1);
 #endif			/* SELECTIVITY_DEBUG */
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index be4e182869..df6a1fd330 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -307,6 +307,12 @@ 

Re: Optimize planner memory consumption for huge arrays

2024-02-22 Thread Alena Rybakina

Hi!

On 20.02.2024 07:41, Andrei Lepikhov wrote:
> On 20/2/2024 04:51, Tom Lane wrote:
>> Tomas Vondra  writes:
>>> On 2/19/24 16:45, Tom Lane wrote:
>>>> Tomas Vondra  writes:
>>>>> For example, I don't think we expect selectivity functions to allocate
>>>>> long-lived objects, right? So maybe we could run them in a dedicated
>>>>> memory context, and reset it aggressively (after each call).
>> Here's a quick and probably-incomplete implementation of that idea.
>> I've not tried to study its effects on memory consumption, just made
>> sure it passes check-world.
> Thanks for the sketch. The trick with the planner_tmp_cxt_depth
> especially looks interesting.
> I think we should design small memory contexts - one per scalable
> direction of memory utilization, like selectivity or partitioning
> (appending?).
> My coding experience shows that the short-lived GEQO memory context
> forces people to learn Postgres internals more precisely :).


I think there was a problem in your patch where you freed the memory of a
variable in the eqsel_internal function, because there is a case where
that variable is returned by reference (not copied) from the
eval_const_expressions_mutator function (this happens only for T_SubPlan
and T_AlternativeSubPlan node types).


With your patch applied, this query causes an error:

create table a (id bigint, c1 bigint, primary key(id));
create table b (a_id bigint, b_id bigint, b2 bigint,
                primary key(a_id, b_id));

explain select id
    from a, b
   where id = a_id
     and b2 = (select min(b2)
                 from b
                where id = a_id);
drop table a;
drop table b;

We can either return a copy of the variable or not release its memory.

I attached two patches: the first one removes your memory cleanup, and
the second one returns a copy of the variable.


The corrections were authored not only by me, but also by Daniil Anisimov.

--
Regards,
Alena Rybakina
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index edc25d712e9..ac560b1605e 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -2915,7 +2915,7 @@ eval_const_expressions_mutator(Node *node,
 				 * XXX should we ereport() here instead?  Probably this routine
 				 * should never be invoked after SubPlan creation.
 				 */
-				return node;
+				return copyObject(node);
 			case T_RelabelType:
 				{
 					RelabelType *relabel = (RelabelType *) node;
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index cea777e9d40..e5bad75ec1c 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -281,6 +281,7 @@ eqsel_internal(PG_FUNCTION_ARGS, bool negate)
 		selec = var_eq_non_const(&vardata, operator, collation, other,
 								 varonleft, negate);
 
+	pfree(other);
 	ReleaseVariableStats(vardata);
 
 	return selec;
@@ -1961,15 +1962,15 @@ scalararraysel(PlannerInfo *root,
 		{
 			List	   *args;
 			Selectivity s2;
-
-			args = list_make2(leftop,
-							  makeConst(nominal_element_type,
-										-1,
-										nominal_element_collation,
-										elmlen,
-										elem_values[i],
-										elem_nulls[i],
-										elmbyval));
+			Const	   *c = makeConst(nominal_element_type,
+									  -1,
+									  nominal_element_collation,
+									  elmlen,
+									  elem_values[i],
+									  elem_nulls[i],
+									  elmbyval);
+
+			args = list_make2(leftop, c);
 			if (is_join_clause)
 				s2 = DatumGetFloat8(FunctionCall5Coll(&oprselproc,
 													  clause->inputcollid,
@@ -1985,7 +1986,8 @@ scalararraysel(PlannerInfo *root,

  

Re: Optimize planner memory consumption for huge arrays

2024-02-19 Thread Andrei Lepikhov

On 20/2/2024 04:51, Tom Lane wrote:
> Tomas Vondra  writes:
>> On 2/19/24 16:45, Tom Lane wrote:
>>> Tomas Vondra  writes:
>>>> For example, I don't think we expect selectivity functions to allocate
>>>> long-lived objects, right? So maybe we could run them in a dedicated
>>>> memory context, and reset it aggressively (after each call).
> Here's a quick and probably-incomplete implementation of that idea.
> I've not tried to study its effects on memory consumption, just made
> sure it passes check-world.

Thanks for the sketch. The trick with the planner_tmp_cxt_depth
especially looks interesting.
I think we should design small memory contexts - one per scalable
direction of memory utilization, like selectivity or partitioning
(appending?).
My coding experience shows that the short-lived GEQO memory context
forces people to learn Postgres internals more precisely :).


--
regards,
Andrei Lepikhov
Postgres Professional





Re: Optimize planner memory consumption for huge arrays

2024-02-19 Thread Andrei Lepikhov

On 19/2/2024 20:47, Tomas Vondra wrote:
> On 9/8/23 07:11, Lepikhov Andrei wrote:
>> Just for comparison, without partitioning:
>> elems    1     1E1     1E2   1E3    1E4
>> master:  12kB  14kB    37kB  266kB  2.5MB
>> patched: 12kB  11.5kB  13kB  24kB   141kB
>
> These improvements look pretty nice, considering how simple the patch
> seems to be. I can't even imagine how much memory we'd need with even
> more partitions (say, 1000) if 100 partitions means 274MB.
>
> BTW when releasing memory in scalararraysel, wouldn't it be good to also
> free the elem_values/elem_nulls? I haven't tried and maybe it's not that
> significant amount.

Agreed; added in the next version of the patch.
Moreover, I see a slight planning speedup. Looking into the reason for
that, I discovered it is because the planner sometimes reuses the same
memory chunk for the next array element, and finds that chunk more
quickly than before this optimization.


--
regards,
Andrei Lepikhov
Postgres Professional
From 2e89dc8b743953068174c777d7a014e1ea71f659 Mon Sep 17 00:00:00 2001
From: "Andrey V. Lepikhov" 
Date: Tue, 20 Feb 2024 11:05:51 +0700
Subject: [PATCH] Utilize memory in scalararraysel in a more optimal manner.

When estimating the selectivity of an array operation, free the memory
right after processing each element. This reduces peak memory
consumption on massive arrays.
---
 src/backend/utils/adt/selfuncs.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index cea777e9d4..e5bad75ec1 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -281,6 +281,7 @@ eqsel_internal(PG_FUNCTION_ARGS, bool negate)
 		selec = var_eq_non_const(&vardata, operator, collation, other,
 								 varonleft, negate);
 
+	pfree(other);
 	ReleaseVariableStats(vardata);
 
 	return selec;
@@ -1961,15 +1962,15 @@ scalararraysel(PlannerInfo *root,
 		{
 			List	   *args;
 			Selectivity s2;
-
-			args = list_make2(leftop,
-							  makeConst(nominal_element_type,
-										-1,
-										nominal_element_collation,
-										elmlen,
-										elem_values[i],
-										elem_nulls[i],
-										elmbyval));
+			Const	   *c = makeConst(nominal_element_type,
+									  -1,
+									  nominal_element_collation,
+									  elmlen,
+									  elem_values[i],
+									  elem_nulls[i],
+									  elmbyval);
+
+			args = list_make2(leftop, c);
 			if (is_join_clause)
 				s2 = DatumGetFloat8(FunctionCall5Coll(&oprselproc,
 													  clause->inputcollid,
@@ -1985,7 +1986,8 @@ scalararraysel(PlannerInfo *root,
 												  ObjectIdGetDatum(operator),
 												  PointerGetDatum(args),
 												  Int32GetDatum(varRelid)));
-
+			list_free(args);
+			pfree(c);
 			if (useOr)
 			{
 				s1 = s1 + s2 - s1 * s2;
@@ -2004,6 +2006,9 @@ scalararraysel(PlannerInfo *root,
 		if ((useOr ? isEquality : isInequality) &&
 			s1disjoint >= 0.0 && s1disjoint <= 1.0)
 			s1 = s1disjoint;
+
+		pfree(elem_values);
+		pfree(elem_nulls);
 	}
 	else if (rightop && IsA(rightop, ArrayExpr) &&
 			 !((ArrayExpr *) rightop)->multidims)
@@ -2052,6 +2057,7 @@ scalararraysel(PlannerInfo *root,
 												  ObjectIdGetDatum(operator),
 

Re: Optimize planner memory consumption for huge arrays

2024-02-19 Thread Tom Lane
Tomas Vondra  writes:
> On 2/19/24 16:45, Tom Lane wrote:
>> Tomas Vondra  writes:
>>> For example, I don't think we expect selectivity functions to allocate
>>> long-lived objects, right? So maybe we could run them in a dedicated
>>> memory context, and reset it aggressively (after each call).

>> That could eliminate a whole lot of potential leaks.  I'm not sure 
>> though how much it moves the needle in terms of overall planner
>> memory consumption.

> I'm not sure about that either, maybe not much - for example it would
> not help with the two other memory usage patches (which are related to
> SpecialJoinInfo and RestrictInfo, outside selectivity functions).

> It was an ad hoc thought, inspired by the issue at hand. Maybe it would
> be possible to find similar "boundaries" in other parts of the planner.

Here's a quick and probably-incomplete implementation of that idea.
I've not tried to study its effects on memory consumption, just made
sure it passes check-world.

The main hazard here is that something invoked inside clause
selectivity might try to cache a data structure for later use.
However, there are already places that do that kind of thing,
and they already explicitly switch into the planner_cxt, because
otherwise they fail under GEQO.  (If we do find places that need
fixing for this, they were probably busted under GEQO already.)
Perhaps it's worth updating the comments at those places, but
I didn't bother in this first cut.
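
(The pattern in question looks roughly like this - a schematic fragment
with invented variable names, not a quote of any particular call site:)

MemoryContext oldcxt = MemoryContextSwitchTo(root->planner_cxt);

cached_expr = copyObject(expr);     /* must survive short-lived contexts */
MemoryContextSwitchTo(oldcxt);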

regards, tom lane

diff --git a/src/backend/optimizer/path/clausesel.c b/src/backend/optimizer/path/clausesel.c
index c949dc1866..669acd7315 100644
--- a/src/backend/optimizer/path/clausesel.c
+++ b/src/backend/optimizer/path/clausesel.c
@@ -24,6 +24,7 @@
 #include "statistics/statistics.h"
 #include "utils/fmgroids.h"
 #include "utils/lsyscache.h"
+#include "utils/memutils.h"
 #include "utils/selfuncs.h"
 
 /*
@@ -693,6 +694,7 @@ clause_selectivity_ext(PlannerInfo *root,
 	Selectivity s1 = 0.5;		/* default for any unhandled clause type */
 	RestrictInfo *rinfo = NULL;
 	bool		cacheable = false;
+	MemoryContext saved_cxt;
 
 	if (clause == NULL)			/* can this still happen? */
 		return s1;
@@ -756,6 +758,21 @@ clause_selectivity_ext(PlannerInfo *root,
 			clause = (Node *) rinfo->clause;
 	}
 
+	/*
+	 * Run all the remaining work in the short-lived planner_tmp_cxt, which
+	 * we'll reset at the bottom.  This allows selectivity-related code to not
+	 * be too concerned about leaking memory.
+	 */
+	saved_cxt = MemoryContextSwitchTo(root->planner_tmp_cxt);
+
+	/*
+	 * This function can be called recursively, in which case we'd better not
+	 * reset planner_tmp_cxt until we exit the topmost level.  Use of
+	 * planner_tmp_cxt_depth also makes it safe for other places to use and
+	 * reset planner_tmp_cxt in the same fashion.
+	 */
+	root->planner_tmp_cxt_depth++;
+
 	if (IsA(clause, Var))
 	{
 		Var		   *var = (Var *) clause;
@@ -967,6 +984,12 @@ clause_selectivity_ext(PlannerInfo *root,
 			rinfo->outer_selec = s1;
 	}
 
+	/* Exit and clean up the planner_tmp_cxt */
+	MemoryContextSwitchTo(saved_cxt);
+	Assert(root->planner_tmp_cxt_depth > 0);
+	if (--root->planner_tmp_cxt_depth == 0)
+		MemoryContextReset(root->planner_tmp_cxt);
+
 #ifdef SELECTIVITY_DEBUG
 	elog(DEBUG4, "clause_selectivity: s1 %f", s1);
 #endif			/* SELECTIVITY_DEBUG */
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index be4e182869..44b9555dbf 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -643,6 +643,17 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
 	root->plan_params = NIL;
 	root->outer_params = NULL;
 	root->planner_cxt = CurrentMemoryContext;
+	/* a subquery can share the main query's planner_tmp_cxt */
+	if (parent_root)
+	{
+		root->planner_tmp_cxt = parent_root->planner_tmp_cxt;
+		Assert(parent_root->planner_tmp_cxt_depth == 0);
+	}
+	else
+		root->planner_tmp_cxt = AllocSetContextCreate(root->planner_cxt,
+	  "Planner temp context",
+	  ALLOCSET_DEFAULT_SIZES);
+	root->planner_tmp_cxt_depth = 0;
 	root->init_plans = NIL;
 	root->cte_plan_ids = NIL;
 	root->multiexpr_params = NIL;
@@ -6514,6 +6525,10 @@ plan_cluster_use_sort(Oid tableOid, Oid indexOid)
 	root->glob = glob;
 	root->query_level = 1;
 	root->planner_cxt = CurrentMemoryContext;
+	root->planner_tmp_cxt = AllocSetContextCreate(root->planner_cxt,
+  "Planner temp context",
+  ALLOCSET_DEFAULT_SIZES);
+	root->planner_tmp_cxt_depth = 0;
 	root->wt_param_id = -1;
 	root->join_domains = list_make1(makeNode(JoinDomain));
 
@@ -6552,7 +6567,10 @@ plan_cluster_use_sort(Oid tableOid, Oid indexOid)
 	 * trust the index contents but use seqscan-and-sort.
 	 */
 	if (lc == NULL)/* not in the list? */
+	{
+		MemoryContextDelete(root->planner_tmp_cxt);
 		return true;			/* use sort */
+	}
 
 	/*
 	 * Rather than doing all the pushups that would be needed to use
@@ 

Re: Optimize planner memory consumption for huge arrays

2024-02-19 Thread Tomas Vondra
On 2/19/24 16:45, Tom Lane wrote:
> Tomas Vondra  writes:
>> Considering there are now multiple patches improving memory usage during
>> planning with partitions, perhaps it's time to take a step back and
>> think about how we manage (or rather not manage) memory during query
>> planning, and see if we could improve that instead of an infinite
>> sequence of ad hoc patches?
> 
> +1, I've been getting an itchy feeling about that too.  I don't have
> any concrete proposals ATM, but I quite like your idea here:
> 
>> For example, I don't think we expect selectivity functions to allocate
>> long-lived objects, right? So maybe we could run them in a dedicated
>> memory context, and reset it aggressively (after each call).
> 
> That could eliminate a whole lot of potential leaks.  I'm not sure 
> though how much it moves the needle in terms of overall planner
> memory consumption.

I'm not sure about that either, maybe not much - for example it would
not help with the two other memory usage patches (which are related to
SpecialJoinInfo and RestrictInfo, outside selectivity functions).

It was an ad hoc thought, inspired by the issue at hand. Maybe it would
be possible to find similar "boundaries" in other parts of the planner.

I keep thinking about how compilers/optimizers typically have separate
optimization passes; maybe that's something we might leverage ...

> I've always supposed that the big problem was data structures
> associated with rejected Paths, but I might be wrong. Is there some
> simple way we could get a handle on where the most memory goes while
> planning?
> 

I suspect this might have changed thanks to partitioning - it's not a
coincidence most of the recent memory usage improvements address cases
with many partitions.

As for how to analyze the memory usage - maybe there's a simpler way,
but what I did recently was add simple instrumentation to the memory
contexts, recording pointer/size/backtrace for each request, and then
write a script that aggregated that into a "currently allocated" report
with information about the "source" of each allocation.
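
Schematically, such a hook could look like this (a toy stand-in built on
glibc's backtrace(), not the actual instrumentation):

#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Record pointer, size and a short backtrace for every request; an
 * offline script can then pair allocations with frees and aggregate
 * whatever is still live by allocation site.
 */
static void *
logged_alloc(size_t size)
{
    void       *frames[8];
    int         depth = backtrace(frames, 8);
    void       *p = malloc(size);

    fprintf(stderr, "alloc %p size %zu\n", p, size);
    backtrace_symbols_fd(frames, depth, fileno(stderr));
    return p;
}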


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Optimize planner memory consumption for huge arrays

2024-02-19 Thread Tom Lane
Tomas Vondra  writes:
> Considering there are now multiple patches improving memory usage during
> planning with partitions, perhaps it's time to take a step back and
> think about how we manage (or rather not manage) memory during query
> planning, and see if we could improve that instead of an infinite
> sequence of ad hoc patches?

+1, I've been getting an itchy feeling about that too.  I don't have
any concrete proposals ATM, but I quite like your idea here:

> For example, I don't think we expect selectivity functions to allocate
> long-lived objects, right? So maybe we could run them in a dedicated
> memory context, and reset it aggressively (after each call).

That could eliminate a whole lot of potential leaks.  I'm not sure
though how much it moves the needle in terms of overall planner memory
consumption.  I've always supposed that the big problem was data
structures associated with rejected Paths, but I might be wrong.
Is there some simple way we could get a handle on where the most
memory goes while planning?

regards, tom lane




Re: Optimize planner memory consumption for huge arrays

2024-02-19 Thread Tomas Vondra



On 9/8/23 07:11, Lepikhov Andrei wrote:
> 
> 
> On Wed, Sep 6, 2023, at 8:09 PM, Ashutosh Bapat wrote:
>> Hi Lepikhov,
>>
>> Thanks for using my patch and I am glad that you found it useful.
>>
>> On Mon, Sep 4, 2023 at 10:56 AM Lepikhov Andrei
>>  wrote:
>>>
>>> Hi, hackers,
>>>
>>> Looking at the planner behaviour with the memory consumption patch [1], I 
>>> figured out that arrays increase memory consumption by the optimizer 
>>> significantly. See init.sql in attachment.
>>> The point here is that the planner does small memory allocations for each 
>>> element during estimation. As a result, it looks like the planner consumes 
>>> about 250 bytes for each integer element.
>>
>> I guess the numbers you mentioned in init.sql are total memory used by
>> the planner (as reported by the patch in the thread) when planning
>> that query and not memory consumed by Const nodes themselves. Am I
>> right? I think the measurements need to be explained better and also
>> the realistic scenario you are trying to optimize.
> 
> Yes, it is the total memory consumed by the planner - I used the numbers 
> generated by your patch [1]. I had been increasing the number of elements in 
> the array to exclude the memory consumed by the planner for other purposes. 
> As you can see, an array with 1 element consumes 12kB of memory, and 1E4 
> elements consume 2.6MB. All of that memory increment is attributable solely 
> to the enlargement of this array: (2600kB - 12kB) / 1E4 ~ 260 bytes. So I 
> conclude that each 4-byte element produces a consumption of about 260 bytes 
> of memory.
> I obtained this scenario from a user complaint - they had strict 
> restrictions on memory usage and were stuck in this unusual memory 
> consumption case.
> 
>> I guess, the reason you think that partitioning will increase the
>> memory consumed is because each partition will have the clause
>> translated for it. Selectivity estimation for each partition will
>> create those many Const nodes and hence consume memory. Am I right?
> 
> Yes.
> 
>> Can you please measure the memory consumed with and without your
>> patch.
> 
> Done. See test case and results in 'init_parts.sql' in attachment. A short 
> summary is below. I varied the number of elements from 1 to 1E4 and the 
> number of partitions from 1 to 100. As you can see, partitioning adds a lot 
> of memory consumption by itself. But we see an effect from the patch as well.
> 
> master:
> elems    1      1E1    1E2     1E3    1E4
> parts
> 1        28kB   50kB   0.3MB   2.5MB  25MB
> 10       45kB   143kB  0.6MB   4.8MB  47MB
> 100      208kB  125kB  3.3MB   27MB   274MB
> 
> patched:
> elems    1      1E1    1E2     1E3    1E4
> parts
> 1        28kB   48kB   0.25MB  2.2MB  22.8MB
> 10       44kB   100kB  313kB   2.4MB  23.7MB
> 100      208kB  101kB  0.9MB   3.7MB  32.4MB
> 
> Just for comparison, without partitioning:
> elems    1      1E1     1E2    1E3    1E4
> master:  12kB   14kB    37kB   266kB  2.5MB
> patched: 12kB   11.5kB  13kB   24kB   141kB
> 

These improvements look pretty nice, considering how simple the patch
seems to be. I can't even imagine how much memory we'd need with even
more partitions (say, 1000) if 100 partitions means 274MB.

BTW when releasing memory in scalararraysel, wouldn't it be good to also
free the elem_values/elem_nulls? I haven't tried and maybe it's not that
significant amount.


Considering there are now multiple patches improving memory usage during
planning with partitions, perhaps it's time to take a step back and
think about how we manage (or rather not manage) memory during query
planning, and see if we could improve that instead of an infinite
sequence of ad hoc patches?

Our traditional attitude is to not manage memory, and rely on the memory
context to not be very long-lived. And that used to be fine, but
partitioning clearly changed the equation, increasing the amount of
allocated memory etc.

I don't think we want to stop relying on memory contexts for planning in
general - memory contexts are obviously very convenient etc. But maybe
we could identify "stages" in the planning and release the memory more
aggressively in those?

For example, I don't think we expect selectivity functions to allocate
long-lived objects, right? So maybe we could run them in a dedicated
memory context, and reset it aggressively (after each call).
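
Schematically, something like this (a sketch only - selectivity_cxt is an
invented context here, and clause_selectivity() stands in for any
estimator call):

MemoryContext oldcxt = MemoryContextSwitchTo(selectivity_cxt);

s1 = clause_selectivity(root, clause, varRelid, jointype, sjinfo);

MemoryContextSwitchTo(oldcxt);
MemoryContextReset(selectivity_cxt);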

Ofc, I'm not suggesting this patch should be responsible for doing this.


>>> It is maybe not a problem most of the time. However, in the case of 
>>> partitions, memory consumption multiplies by each partition. Such a corner 
>>> case looks weird, but the fix is simple. So, why not?
>>
>> With vectorized operations becoming a norm these days, it's possible
>> to have thousands of element in array of an ANY or IN clause. Also
>> will be common to have thousands of partitions. But I think what we
>> need to 

Re: Optimize planner memory consumption for huge arrays

2023-09-07 Thread Lepikhov Andrei


On Wed, Sep 6, 2023, at 8:09 PM, Ashutosh Bapat wrote:
> Hi Lepikhov,
>
> Thanks for using my patch and I am glad that you found it useful.
>
> On Mon, Sep 4, 2023 at 10:56 AM Lepikhov Andrei
>  wrote:
>>
>> Hi, hackers,
>>
>> Looking at the planner behaviour with the memory consumption patch [1], I 
>> figured out that arrays increase memory consumption by the optimizer 
>> significantly. See init.sql in attachment.
>> The point here is that the planner does small memory allocations for each 
>> element during estimation. As a result, it looks like the planner consumes 
>> about 250 bytes for each integer element.
>
> I guess the numbers you mentioned in init.sql are total memory used by
> the planner (as reported by the patch in the thread) when planning
> that query and not memory consumed by Const nodes themselves. Am I
> right? I think the measurements need to be explained better and also
> the realistic scenario you are trying to optimize.

Yes, it is the total memory consumed by the planner - I used the numbers 
generated by your patch [1]. I had been increasing the number of elements in 
the array to exclude the memory consumed by the planner for other purposes. As 
you can see, an array with 1 element consumes 12kB of memory, and 1E4 elements 
consume 2.6MB. All of that memory increment is attributable solely to the 
enlargement of this array: (2600kB - 12kB) / 1E4 ~ 260 bytes. So I conclude 
that each 4-byte element produces a consumption of about 260 bytes of memory.
I obtained this scenario from a user complaint - they had strict restrictions 
on memory usage and were stuck in this unusual memory consumption case.

> I guess, the reason you think that partitioning will increase the
> memory consumed is because each partition will have the clause
> translated for it. Selectivity estimation for each partition will
> create those many Const nodes and hence consume memory. Am I right?

Yes.

> Can you please measure the memory consumed with and without your
> patch.

Done. See test case and results in 'init_parts.sql' in attachment. A short 
summary is below. I varied the number of elements from 1 to 1E4 and the number 
of partitions from 1 to 100. As you can see, partitioning adds a lot of memory 
consumption by itself. But we see an effect from the patch as well.

master:
elems    1      1E1    1E2     1E3    1E4
parts
1        28kB   50kB   0.3MB   2.5MB  25MB
10       45kB   143kB  0.6MB   4.8MB  47MB
100      208kB  125kB  3.3MB   27MB   274MB

patched:
elems    1      1E1    1E2     1E3    1E4
parts
1        28kB   48kB   0.25MB  2.2MB  22.8MB
10       44kB   100kB  313kB   2.4MB  23.7MB
100      208kB  101kB  0.9MB   3.7MB  32.4MB

Just for comparison, without partitioning:
elems    1      1E1     1E2    1E3    1E4
master:  12kB   14kB    37kB   266kB  2.5MB
patched: 12kB   11.5kB  13kB   24kB   141kB

>> It is maybe not a problem most of the time. However, in the case of 
>> partitions, memory consumption multiplies by each partition. Such a corner 
>> case looks weird, but the fix is simple. So, why not?
>
> With vectorized operations becoming a norm these days, it's possible
> to have thousands of element in array of an ANY or IN clause. Also
> will be common to have thousands of partitions. But I think what we
> need to do here is to write a selectivity estimation function which
> takes an const array and return selectivity without requiring to
> create a Const node for each element.

Maybe you're right. Could you show some examples of vectorized usage of 
Postgres, to help me understand your idea more clearly?
Here I propose only a quick, simple solution. I don't think it would change 
the way of development.

>> The diff in the attachment is proof of concept showing how to reduce wasting 
>> of memory. Having benchmarked a bit, I didn't find any overhead.
>>
>
> You might want to include your benchmarking results as well.

There is nothing interesting here: pgbench TPS and planning time for the 
cases above don't change.

[1] Report planning memory in EXPLAIN ANALYZE

-- 
Regards,
Andrei Lepikhov

init_parts.sql
Description: application/sql


Re: Optimize planner memory consumption for huge arrays

2023-09-06 Thread Ashutosh Bapat
Hi Lepikhov,

Thanks for using my patch and I am glad that you found it useful.

On Mon, Sep 4, 2023 at 10:56 AM Lepikhov Andrei
 wrote:
>
> Hi, hackers,
>
> Looking at the planner behaviour with the memory consumption patch [1], I 
> figured out that arrays increase memory consumption by the optimizer 
> significantly. See init.sql in attachment.
> The point here is that the planner does small memory allocations for each 
> element during estimation. As a result, it looks like the planner consumes 
> about 250 bytes for each integer element.

I guess the numbers you mentioned in init.sql are total memory used by
the planner (as reported by the patch in the thread) when planning
that query and not memory consumed by Const nodes themselves. Am I
right? I think the measurements need to be explained better, along with
the realistic scenario you are trying to optimize.

I guess, the reason you think that partitioning will increase the
memory consumed is because each partition will have the clause
translated for it. Selectivity estimation for each partition will
create those many Const nodes and hence consume memory. Am I right?
Can you please measure the memory consumed with and without your
patch?

>
> It is maybe not a problem most of the time. However, in the case of 
> partitions, memory consumption multiplies by each partition. Such a corner 
> case looks weird, but the fix is simple. So, why not?

With vectorized operations becoming the norm these days, it's possible
to have thousands of elements in the array of an ANY or IN clause. It
will also be common to have thousands of partitions. But I think what we
need to do here is write a selectivity estimation function that takes a
const array and returns a selectivity, without requiring a Const node to
be created for each element.
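
One shape such a function could take (a sketch only: build a single Const
up front and overwrite its value per element, instead of calling makeConst
in the loop - whether mutating the node like this is safe for every
estimator is exactly what a real patch would have to verify):

Const      *c = makeConst(nominal_element_type, -1,
                          nominal_element_collation, elmlen,
                          (Datum) 0, true, elmbyval);
List       *args = list_make2(leftop, c);

for (i = 0; i < num_elems; i++)
{
    c->constvalue = elem_values[i];
    c->constisnull = elem_nulls[i];
    /* ... invoke the operator's selectivity estimator on args ... */
}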

>
> The diff in the attachment is proof of concept showing how to reduce wasting 
> of memory. Having benchmarked a bit, I didn't find any overhead.
>

You might want to include your benchmarking results as well.

-- 
Best Wishes,
Ashutosh Bapat




Re: Optimize planner memory consumption for huge arrays

2023-09-04 Thread Dilip Kumar
On Mon, Sep 4, 2023 at 3:49 PM Lepikhov Andrei
 wrote:
>
> > + Const *c = makeConst(nominal_element_type,
> > + -1,
> > + nominal_element_collation,
> > + elmlen,
> > + elem_values[i],
> > + elem_nulls[i],
> > + elmbyval);
> > +
> > + args = list_make2(leftop, c);
> >   if (is_join_clause)
> >   s2 = DatumGetFloat8(FunctionCall5Coll(&oprselproc,
> > clause->inputcollid,
> > @@ -1984,7 +1985,8 @@ scalararraysel(PlannerInfo *root,
> > ObjectIdGetDatum(operator),
> > PointerGetDatum(args),
> > Int32GetDatum(varRelid)));
> > -
> > + list_free(args);
> > + pfree(c);
> >
> > Maybe you can just use list_free_deep, instead of storing the constant
> > in a separate variable.
> As I see it, the first element in the list is leftop, which is reused on 
> later iterations. So we can't use list_free_deep here.

Yeah you are right, thanks.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: Optimize planner memory consumption for huge arrays

2023-09-04 Thread Lepikhov Andrei



On Mon, Sep 4, 2023, at 3:37 PM, Dilip Kumar wrote:
> On Mon, Sep 4, 2023 at 11:58 AM Lepikhov Andrei
>  wrote:
>>
>> Hi, hackers,
>>
>> Looking at the planner behaviour with the memory consumption patch [1], I 
>> figured out that arrays increase memory consumption by the optimizer 
>> significantly. See init.sql in attachment.
>> The point here is that the planner does small memory allocations for each 
>> element during estimation. As a result, it looks like the planner consumes 
>> about 250 bytes for each integer element.
>>
>> It is maybe not a problem most of the time. However, in the case of 
>> partitions, memory consumption multiplies by each partition. Such a corner 
>> case looks weird, but the fix is simple. So, why not?
>>
>> The diff in the attachment is proof of concept showing how to reduce wasting 
>> of memory. Having benchmarked a bit, I didn't find any overhead.
>
> + Const *c = makeConst(nominal_element_type,
> + -1,
> + nominal_element_collation,
> + elmlen,
> + elem_values[i],
> + elem_nulls[i],
> + elmbyval);
> +
> + args = list_make2(leftop, c);
>   if (is_join_clause)
>   s2 = DatumGetFloat8(FunctionCall5Coll(&oprselproc,
> clause->inputcollid,
> @@ -1984,7 +1985,8 @@ scalararraysel(PlannerInfo *root,
> ObjectIdGetDatum(operator),
> PointerGetDatum(args),
> Int32GetDatum(varRelid)));
> -
> + list_free(args);
> + pfree(c);
>
> Maybe you can just use list_free_deep, instead of storing the constant
> in a separate variable.
As I see it, the first element in the list is leftop, which is reused on 
later iterations. So we can't use list_free_deep here.
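
Concretely, the hazard is (sketch):

args = list_make2(leftop, c);
/* ... the estimator call consumes args ... */
list_free(args);            /* frees only the ListCells */
pfree(c);                   /* frees this iteration's Const */
/* list_free_deep(args) would also pfree(leftop), which the next loop
 * iteration still dereferences - a use-after-free */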

-- 
Regards,
Andrei Lepikhov




Re: Optimize planner memory consumption for huge arrays

2023-09-04 Thread Dilip Kumar
On Mon, Sep 4, 2023 at 11:58 AM Lepikhov Andrei
 wrote:
>
> Hi, hackers,
>
> Looking at the planner behaviour with the memory consumption patch [1], I 
> figured out that arrays increase memory consumption by the optimizer 
> significantly. See init.sql in attachment.
> The point here is that the planner does small memory allocations for each 
> element during estimation. As a result, it looks like the planner consumes 
> about 250 bytes for each integer element.
>
> It is maybe not a problem most of the time. However, in the case of 
> partitions, memory consumption multiplies by each partition. Such a corner 
> case looks weird, but the fix is simple. So, why not?
>
> The diff in the attachment is proof of concept showing how to reduce wasting 
> of memory. Having benchmarked a bit, I didn't find any overhead.

+ Const *c = makeConst(nominal_element_type,
+ -1,
+ nominal_element_collation,
+ elmlen,
+ elem_values[i],
+ elem_nulls[i],
+ elmbyval);
+
+ args = list_make2(leftop, c);
  if (is_join_clause)
  s2 = DatumGetFloat8(FunctionCall5Coll(&oprselproc,
clause->inputcollid,
@@ -1984,7 +1985,8 @@ scalararraysel(PlannerInfo *root,
ObjectIdGetDatum(operator),
PointerGetDatum(args),
Int32GetDatum(varRelid)));
-
+ list_free(args);
+ pfree(c);

Maybe you can just use list_free_deep, instead of storing the constant
in a separate variable.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Optimize planner memory consumption for huge arrays

2023-09-03 Thread Lepikhov Andrei
Hi, hackers,

Looking at the planner behaviour with the memory consumption patch [1], I 
figured out that arrays increase memory consumption by the optimizer 
significantly. See init.sql in attachment.
The point here is that the planner does small memory allocations for each 
element during estimation. As a result, it looks like the planner consumes 
about 250 bytes for each integer element.

Most of the time this is probably not a problem. However, in the case of 
partitions, memory consumption is multiplied by the number of partitions. Such 
a corner case looks weird, but the fix is simple - so why not?

The diff in the attachment is a proof of concept showing how to reduce this 
memory waste. Having benchmarked a bit, I didn't find any overhead.

[1] Report planning memory in EXPLAIN ANALYZE
https://www.postgresql.org/message-id/flat/CAExHW5sZA=5lj_zppro-w09ck8z9p7eayaqq3ks9gdfhrxe...@mail.gmail.com

--
Regards,
Andrey Lepikhov


init.sql
Description: application/sql


array_compact_memusage.diff
Description: Binary data