uot;PAIN FREQUENCY","PAIN
INTERVENTIONS","PAIN QUALITY","PERIPHERAL TISSUE EDEMA","PERIPHERAL TISSUE
INDURATION","REASON MEASUREMENTS NOT TAKEN","RESPONSE TO PAIN
INTERVENTIONS",SHAPE,"SIGNS AND SYMPTOMS OF INFECTION","SKI
-Original Message-
From: Tom Lane
Sent: Tuesday, July 27, 2021 23:31
To: l...@laurent-hasson.com
Cc: Peter Geoghegan ; David Rowley ; Justin
Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
I wrote:
> Yeah, I wouldn't sweat over the specific value. The pre-v13 behavior
> was effectively equivalent to hash_mem_multiplier = infinity, so if
> you weren't having any OOM problems before, just crank it up.
Oh, wait, scratch that: the old executor's behavior is accurately
described by that
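A minimal psql sketch of the "crank it up" advice quoted above; the values are illustrative assumptions, and, as the rest of the thread shows, Windows builds still clamp the product to roughly 2GB:

    -- Give hash-based nodes a larger budget without raising work_mem
    -- for every other node type:
    SET work_mem = '512MB';
    SET hash_mem_multiplier = 8;  -- hash nodes may use up to ~4GB; sorts still get 512MB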
"l...@laurent-hasson.com" writes:
> One question that popped up in my head: hash_mem_multiplier is an upper bound,
> right? It doesn't reserve memory ahead of time, correct? So there is no reason
> for me to spend undue amounts of time fine-tuning this parameter? If I have
> work_mem at 521MB, th
On Tue, Jul 27, 2021 at 7:57 PM l...@laurent-hasson.com wrote:
> hash_mem_multiplier is an upper bound, right? It doesn't reserve memory
> ahead of time, correct?
>
Yes, that is what the phrasing "maximum amount" in the docs is trying to
convey.
https://www.postgresql.or
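A short sketch of why heavy fine-tuning is low-stakes here (illustrative values): the setting is a per-operation ceiling, not a reservation.

    SET hash_mem_multiplier = 4;
    -- Nothing is allocated at SET time.  A hash aggregate that needs only
    -- 100MB still uses ~100MB; the ceiling only matters for operations
    -- that would otherwise have to spill to disk.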
Subject: Re: Big performance slowdown from 11.2 to 13.3
"l...@laurent-hasson.com" writes:
> As a user of PG, we have taken pride in the last few years in tuning the heck
> out of the system and getting great performance compared to alternatives like
> SQLServe
-Original Message-
From: Tom Lane
Sent: Sunday, July 25, 2021 14:08
To: l...@laurent-hasson.com
Cc: Peter Geoghegan ; David Rowley ; Justin
Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
"l...@laurent-hasson.com" writes:
&
"l...@laurent-hasson.com" writes:
> As a user of PG, we have taken pride in the last few years in tuning the heck
> out of the system and getting great performance compared to alternatives like
> SQLServer. The customers we work with typically have data centers and are
> overwhelmingly Windows
From: Vijaykumar Jain
Sent: Friday, July 23, 2021 10:45
To: l...@laurent-hasson.com
Cc: Justin Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Fri, 23 Jul 2021 at 03:06, l...@laurent-hasson.com
wrote:
> I am not sure I understand this parameter well enough but it’s with a
> default value right now of 1000. I have read Robert’s post (
> http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html)
> and could play with tho
unlock this 2GB limit.
From: Vijaykumar Jain
Sent: Thursday, July 22, 2021 16:32
To: l...@laurent-hasson.com
Cc: Justin Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
Just asking, I may be completely wrong.
is this query parallel safe?
can we force parallel workers, by setting low parallel_setup_cost or
otherwise to make use of scatter gather and Partial HashAggregate(s)?
I am just assuming more workers doing things in parallel, would require
less disk spill pe
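A hedged sketch of this idea in psql; the cost values are illustrative assumptions, not recommendations from the thread:

    -- Make parallel plans look cheaper so the planner considers
    -- Gather + Partial HashAggregate, dividing the aggregation
    -- (and any spill) across workers:
    SET parallel_setup_cost = 10;            -- default 1000
    SET parallel_tuple_cost = 0.001;         -- default 0.1
    SET max_parallel_workers_per_gather = 4;
    EXPLAIN (ANALYZE, BUFFERS) SELECT ...;   -- re-check the plan shape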
On 2021-Jul-22, l...@laurent-hasson.com wrote:
> Yes, agreed Peter... The "lower priority" issue was mentioned, but not
> in terms of the applicability of the fix overall. Personally, I would
> prefer going the size_t route vs int/long/int64 in C/C++. Of course,
> as a user, I'd love a patch on V
On Thu, Jul 22, 2021 at 2:28 PM, Peter Geoghegan wrote:
> On Thu, Jul 22, 2021 at 10:11 AM Tom Lane wrote:
> > No, he already tried, upthread. The trouble is that he's on a Windows
> > machine, so get_hash_mem is quasi-artificially constraining the product
> > to 2GB. And he needs it t
On Thu, Jul 22, 2021 at 10:35 AM Tom Lane wrote:
> Well, what we really ought to be using is size_t (a/k/a Size), at least
> for memory-space-related calculations. I don't have an opinion right
> now about what logtape.c ought to use. I do agree that avoiding "long"
> altogether would be a good
-Original Message-
From: Peter Geoghegan
Sent: Thursday, July 22, 2021 13:36
To: l...@laurent-hasson.com
Cc: Justin Pryzby ; Tom Lane ; David
Rowley ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Thu, Jul 22, 2021 at 10:33 AM l...@laurent-hasson.com
wrote:
> Damn... I know Windows is a lower priority, and this is yet another issue,
> but in Healthcare, Windows is so prevalent everywhere...
To be clear, I didn't actually say that. I said that it doesn't matter
either way, as far as add
Peter Geoghegan writes:
> On Thu, Jul 22, 2021 at 10:11 AM Tom Lane wrote:
>> What I'm wondering about is whether it's worth putting in a solution
>> for this issue in isolation, or whether we ought to embark on the
>> long-ignored project of getting rid of use of "long" for any
>> memory-size-re
-Original Message-
From: Justin Pryzby
Sent: Thursday, July 22, 2021 13:29
To: l...@laurent-hasson.com
Cc: Tom Lane ; David Rowley ; Peter
Geoghegan ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Thu, Jul 22, 2021 at 05:26:26PM +, l...@laurent-hasson.com wrote:
> I tried this but not seeing max resident size data output.
Oh. Apparently, that's not supported under windows..
#if defined(HAVE_GETRUSAGE)
    appendStringInfo(&str,
                     "!\t%ld k
On Thu, Jul 22, 2021 at 10:11 AM Tom Lane wrote:
> No, he already tried, upthread. The trouble is that he's on a Windows
> machine, so get_hash_mem is quasi-artificially constraining the product
> to 2GB. And he needs it to be a bit more than that. Whether the
> constraint is hitting at the ngr
-Original Message-
From: Tom Lane
Sent: Thursday, July 22, 2021 12:42
To: Peter Geoghegan
Cc: David Rowley ; l...@laurent-hasson.com; Justin Pryzby
; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
Peter Geoghegan writes:
> On Thu, Jul
-Original Message-
From: Justin Pryzby
Sent: Thursday, July 22, 2021 12:36
To: l...@laurent-hasson.com
Cc: Tom Lane ; David Rowley ; Peter
Geoghegan ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Thu, Jul 22, 2021 at 04:30:00PM +, l
_size = '52GB', from_collapse_limit = '24',
hash_mem_multiplier = '16', jit = 'off', jit_above_cost = '2e+08',
jit_inline_above_cost = '5e+08', jit_optimize_above_cost = '5e+08',
join_collapse_limit = '24', max_parallel_wor
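When comparing settings dumps like the one above, a standard catalog query (not from the thread) confirms what a session actually runs with:

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('work_mem', 'hash_mem_multiplier', 'jit');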
Peter Geoghegan writes:
> I also suspect that if Laurent set work_mem and/or hash_mem_multiplier
> *extremely* aggressively, then eventually the hash agg would be
> in-memory. And without actually using all that much memory.
No, he already tried, upthread. The trouble is that he's on a Windows
machine, so get_hash_mem is quasi-artificially constraining the product to 2GB.
On Thu, Jul 22, 2021 at 9:53 AM Peter Geoghegan wrote:
> I suspect David's theory about hash_agg_set_limits()'s ngroup limit is
> correct. It certainly seems like a good starting point.
I also suspect that if Laurent set work_mem and/or hash_mem_multiplier
*extremely* aggressively, then eventuall
Justin Pryzby writes:
> Oh. So the problem seems to be that:
> 1) In v12, HashAgg now obeyes work_mem*hash_mem_multiplier;
> 2) Under windows, work_mem is limited to 2GB.
And more to the point, work_mem*hash_mem_multiplier is *also* limited
to 2GB. We didn't think that through very carefully.
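The arithmetic behind that limit, as a sketch: guc.c caps memory GUCs at MAX_KILOBYTES, which is INT_MAX/1024 on platforms where "long" is 4 bytes, so that derived byte counts still fit in a long.

    -- 2147483647 / 1024 = 2097151 kB, i.e. just under 2 GiB.
    SET work_mem = '2097151kB';    -- already at the GUC maximum on such builds
    SET hash_mem_multiplier = 16;  -- nominally 32GB, but the product is
                                   -- clamped back to the same ~2GB ceiling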
On Thu, Jul 22, 2021 at 9:42 AM Tom Lane wrote:
> Right. The point here is that before v13, hash aggregation was not
> subject to the work_mem limit, nor any related limit. If you did an
> aggregation requiring more than 2GB-plus-slop, it would work just fine
> as long as your machine had enough
Peter Geoghegan writes:
> On Thu, Jul 22, 2021 at 9:21 AM Tom Lane wrote:
>> Yeah, I should have said "2GB plus palloc slop". It doesn't surprise
>> me a bit that we seem to be eating another 20% on top of the nominal
>> limit.
> MAX_KILOBYTES is the max_val for the work_mem GUC itself, and has
On Thu, Jul 22, 2021 at 09:36:02AM -0700, Peter Geoghegan wrote:
> I don't see how it's possible for get_hash_mem() to be unable to
> return a hash_mem value that could be represented by work_mem
> directly. MAX_KILOBYTES is an annoyingly low limit on Windows, where
> sizeof(long) is 4. But that's
On Thu, Jul 22, 2021 at 9:21 AM Tom Lane wrote:
> Yeah, I should have said "2GB plus palloc slop". It doesn't surprise
> me a bit that we seem to be eating another 20% on top of the nominal
> limit.
MAX_KILOBYTES is the max_val for the work_mem GUC itself, and has been
for many years. The functi
On Thu, Jul 22, 2021 at 04:30:00PM +, l...@laurent-hasson.com wrote:
> Hello Justin,
>
> > log_executor_stats=on; client_min_messages=debug;
>
> Would the results then come in EXPLAIN or would I need to pick something up
> from the logs?
If you're running with psql, and client_min_messages=
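A sketch of that suggestion as a psql session; per the exchange above, the "max resident size" line depends on getrusage() and is not reported on Windows:

    SET log_executor_stats = on;
    SET client_min_messages = log;
    EXPLAIN ANALYZE SELECT ...;  -- executor statistics then appear as
                                 -- LOG output directly in the psql session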
-Original Message-
From: Justin Pryzby
Sent: Thursday, July 22, 2021 12:23
To: l...@laurent-hasson.com
Cc: Tom Lane ; David Rowley ; Peter
Geoghegan ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Thu, Jul 22, 2021 at 04:16:34PM +, l
I wrote:
> I think the right fix here is to remove the cap, which will require
> changing get_hash_mem to return double, and then maybe some cascading
> changes --- I've not looked at its callers.
Or, actually, returning size_t would likely make the most sense.
We'd fold the 1024L multiplier in he
-Original Message-
From: David Rowley
Sent: Thursday, July 22, 2021 12:18
To: Peter Geoghegan
Cc: Tom Lane ; Jeff Davis ;
l...@laurent-hasson.com; Justin Pryzby ;
pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Fri, 23 Jul 2021 at 04:14
On Thu, Jul 22, 2021 at 04:16:34PM +, l...@laurent-hasson.com wrote:
> Is it fair then to deduce that the total memory usage would be 2,400,305kB +
> 126,560kB? Is this what under the covers V11 is consuming more or less?
It might be helpful to know how much RAM v11 is using.
Could you run t
David Rowley writes:
> On Fri, 23 Jul 2021 at 03:56, Tom Lane wrote:
>> So basically, we now have a hard restriction that hashaggs can't use
>> more than INT_MAX kilobytes, or approximately 2.5GB, and this use case
>> is getting eaten alive by that restriction. Seems like we need to
>> do someth
-Original Message-
From: Peter Geoghegan
Sent: Thursday, July 22, 2021 12:14
To: Tom Lane
Cc: Jeff Davis ; l...@laurent-hasson.com; David Rowley
; Justin Pryzby ;
pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Thu, Jul 22, 2021 at 8:45 AM
On Fri, 23 Jul 2021 at 04:14, Peter Geoghegan wrote:
>
> On Thu, Jul 22, 2021 at 8:45 AM Tom Lane wrote:
> > That is ... weird. Maybe you have found a bug in the spill-to-disk logic;
> > it's quite new after all. Can you extract a self-contained test case that
> > behaves this way?
>
> I wonder
-Original Message-
From: Tom Lane
Sent: Thursday, July 22, 2021 11:57
To: l...@laurent-hasson.com
Cc: David Rowley ; Peter Geoghegan ; Justin
Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
I wrote:
> "l...@laurent-ha
On Thu, Jul 22, 2021 at 8:45 AM Tom Lane wrote:
> That is ... weird. Maybe you have found a bug in the spill-to-disk logic;
> it's quite new after all. Can you extract a self-contained test case that
> behaves this way?
I wonder if this has something to do with the way that the input data
is cl
On Fri, 23 Jul 2021 at 03:56, Tom Lane wrote:
> So basically, we now have a hard restriction that hashaggs can't use
> more than INT_MAX kilobytes, or approximately 2.5GB, and this use case
> is getting eaten alive by that restriction. Seems like we need to
> do something about that.
Hmm, math c
I wrote:
> "l...@laurent-hasson.com" writes:
>> It's still spilling to disk and seems to cap at 2.5GB of memory usage in
>> spite of configuration.
> That is ... weird.
Oh: see get_hash_mem:

    hash_mem = (double) work_mem * hash_mem_multiplier;

    /*
     * guc.c enforces a MAX
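Putting that fragment next to the numbers reported elsewhere in the thread, a sketch of the effective ceiling (the slop is an observation here, not a fixed constant):

    -- hash_mem = work_mem * hash_mem_multiplier, then clamped:
    --   1GB x 10 = 10GB nominal  ->  clamped to 2097151 kB (~2GB)
    -- Observed: "Memory Usage: 2,449,457kB", i.e. the ~2GB cap plus palloc
    -- slop, after which the aggregate still spills to disk.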
"l...@laurent-hasson.com" writes:
> So, I went possibly nuclear, and still no cigar. Something's not right.
> - hash_mem_multiplier = '10'
> - work_mem = '1GB'
> The results are
> Batches: 5 Memory Usage: 2,449,457kB Disk Usage: 105,936kB
> Execution Time: 1,837,126.766 ms
> It's s
-Original Message-
From: l...@laurent-hasson.com
Sent: Thursday, July 22, 2021 09:37
To: David Rowley
Cc: Tom Lane ; Peter Geoghegan ; Justin
Pryzby ; pgsql-performa...@postgresql.org
Subject: RE: Big performance slowdown from 11.2 to 13.3
OK. Will do another round of testing.
-Original Message-
From: David Rowley
Sent: Thursday, July 22, 2021 00:44
To: l...@laurent-hasson.com
Cc: Tom Lane ; Peter Geoghegan ; Justin
Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Thu
On Thu, 22 Jul 2021 at 16:37, l...@laurent-hasson.com
wrote:
> Seems like no cigar ☹ See plan pasted below. I changed the conf as follows:
> - hash_mem_multiplier = '2'
> - work_mem = '1GB'
> Batches: 5 Memory Usage: 2400305kB Disk Usage: 126560kB
You might want to keep going highe
-Original Message-
From: l...@laurent-hasson.com
Sent: Wednesday, July 21, 2021 19:46
To: Tom Lane
Cc: Peter Geoghegan ; Justin Pryzby ;
pgsql-performa...@postgresql.org
Subject: RE: Big performance slowdown from 11.2 to 13.3
-Original Message-
From: Tom Lane
Sent
-Original Message-
From: Tom Lane
Sent: Wednesday, July 21, 2021 19:43
To: l...@laurent-hasson.com
Cc: Peter Geoghegan ; Justin Pryzby ;
pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
"l...@laurent-hasson.com" writes:
-Original Message-
From: Tom Lane
Sent: Wednesday, July 21, 2021 19:36
To: l...@laurent-hasson.com
Cc: Justin Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
"l...@laurent-hasson.com" writes:
> My apologies... I thought
"l...@laurent-hasson.com" writes:
> From: Peter Geoghegan
>> I imagine that this has something to do with the fact that the hash
>> aggregate spills to disk in Postgres 13.
> So how is this happening? I mean, it's the exact same query, looks like the
> same plan to me, it's the same data on t
-Original Message-
From: Peter Geoghegan
Sent: Wednesday, July 21, 2021 19:34
To: l...@laurent-hasson.com
Cc: Justin Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Wed, Jul 21, 2021 at 4:19 PM l...@laurent-hasson.com
wrote:
"l...@laurent-hasson.com" writes:
> My apologies... I thought this is what I had attached in my original email
> from PGADMIN. In any case, I reran from the command line and here are the two
> plans.
So the pain seems to be coming in with the upper hash aggregation, which
is spilling to disk be
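The spill is visible directly in the v13 EXPLAIN (ANALYZE) output quoted in this thread; an annotated sketch of what to look for:

    --   HashAggregate
    --     Batches: 5  Memory Usage: 2400305kB  Disk Usage: 126560kB
    -- Batches > 1 plus a Disk Usage figure means the hash aggregate ran out
    -- of its memory budget and spilled; v11 imposed no such budget here.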
On Wed, Jul 21, 2021 at 4:19 PM l...@laurent-hasson.com
wrote:
> As you can see, the V13.3 execution is about 10x slower.
>
> It may be hard for me to create a whole copy of the database on 11.12 and
> check that environment by itself. I'd want to do it on the same machine to
> control variables
From: Michael Lewis
Sent: Wednesday, July 21, 2021 18:12
To: l...@laurent-hasson.com
Cc: pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
I'm not seeing the value of the CTE. Why not access assessmenticcqa_raw directly
in the main query and only do
-Original Message-
From: Justin Pryzby
Sent: Wednesday, July 21, 2021 15:15
To: l...@laurent-hasson.com
Cc: pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Wed, Jul 21, 2021 at 06:50:58PM +, l...@laurent-hasson.com wrote:
> The plans
I'm not seeing the value of the CTE. Why not access assessmenticcqa_raw
directly in the main query and only do GROUP BY once? Do you have many
values in iccqar_ques_code which are not used in this query?
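A hedged sketch of that restructuring; the table and column names are from the thread, but the full query is not shown in this digest, so the shape and placeholders are assumptions:

    SELECT iccqar_ques_code
           /* , MAX(...) AS ...  -- the pivoted aggregates go here */
    FROM assessmenticcqa_raw      -- accessed directly, without the CTE
    WHERE iccqar_ques_code IN ('PAIN FREQUENCY', 'PAIN INTERVENTIONS' /* , ... */)
    GROUP BY iccqar_ques_code;    -- a single GROUP BY pass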
On Wed, Jul 21, 2021 at 06:50:58PM +, l...@laurent-hasson.com wrote:
> The plans are pretty much identical too. I checked line by line and couldn't
> see anything much different (note that I have a view over this query). Here
> is the V13 version of the plan:
> I am out of my wits as to what