Hi Experts,
The attached query is performing slowly and needs to be optimized.
Could you help me with a query rewrite, or with new indexes to create, to
improve performance?
Thanks a ton in advance for your support.
SELECT tab2.rulename,
tab2.totalexecuted
-Original Message-
From: Tom Lane
Sent: Tuesday, July 27, 2021 23:31
To: l...@laurent-hasson.com
Cc: Peter Geoghegan ; David Rowley ; Justin
Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
I wrote:
> Yeah, I wouldn't sweat over the specific value. The pre-v13 behavior
> was effectively equivalent to hash_mem_multiplier = infinity, so if
> you weren't having any OOM problems before, just crank it up.
Oh, wait, scratch that: the old executor's behavior is accurately
described by
"l...@laurent-hasson.com" writes:
> One question that popped up in my head. hash_mem_multiplier is an upper-bound
> right: it doesn't reserve memory ahead of time correct? So there is no reason
> for me to spend undue amounts of time fine-tuning this parameter? If I have
> work_mem to 521MB,
On Tue, Jul 27, 2021 at 7:57 PM l...@laurent-hasson.com <
l...@laurent-hasson.com> wrote:
> hash_mem_multiplier is an upper-bound right: it doesn't reserve memory
> ahead of time correct?
>
Yes, that is what the phrasing "maximum amount" in the docs is trying to
convey.
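A minimal illustration of that point (session-level settings; the table name is taken from the thread, the query is hypothetical) — nothing is reserved when the GUCs are set, memory is only allocated as the hash table actually grows:

```sql
-- Raising the cap costs nothing up front; a hash aggregate may now use
-- up to work_mem * hash_mem_multiplier, allocated only as needed.
SET work_mem = '512MB';
SET hash_mem_multiplier = 4.0;
EXPLAIN (ANALYZE, BUFFERS)
SELECT iccqar_ques_code, count(*)
FROM assessmenticcqa_raw
GROUP BY iccqar_ques_code;
```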
pg_stats WHERE attname='dpd_bis_foedselsnummer' AND tablename='...';
If not, consider clustering on the existing "unique_descending" index and then
analyzing that table, too.
This would also affect performance of other queries - hopefully improving
several things at once.
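A sketch of the two suggestions above, using the table, index, and column names mentioned in the thread:

```sql
-- Check whether the planner sees useful statistics for the column:
SELECT n_distinct, correlation
FROM pg_stats
WHERE tablename = 'bis_person_alle_endringer'
  AND attname = 'dpd_bis_foedselsnummer';

-- Physically reorder the table by the existing index, then refresh stats.
-- Note: CLUSTER takes an ACCESS EXCLUSIVE lock while it rewrites the table.
CLUSTER contracts.bis_person_alle_endringer USING unique_descending;
ANALYZE contracts.bis_person_alle_endringer;
```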
--
Justin
On Tue, Jul 27, 2021 at 10:44:03PM +0530, kenny a wrote:
> Hi Experts,
>
> The attached query is performing slowly and needs to be optimized.
>
> Could you help me with a query rewrite, or with new indexes to create, to
> improve performance?
>
> Thanks a ton in advance for your support.
On Mon, Jul 26, 2021 at 01:56:54PM +, Simen Andreas Andreassen Lønsethagen
wrote:
> To create the subsets, I (or rather my application) will receive lists of
> records which should be matched according to some business logic. Each of
> these lists will be read into a temporary table:
Easy
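The ingestion step described above might look like this (column names are illustrative); one point worth noting is that temporary tables are never autoanalyzed, so an explicit ANALYZE after loading helps the planner:

```sql
-- Load one list of records to be matched into a session-local table.
CREATE TEMP TABLE incoming_records (
    person_id  bigint,
    match_key  text
) ON COMMIT DROP;

COPY incoming_records FROM STDIN;   -- bulk-load the list of records
ANALYZE incoming_records;           -- temp tables get no autovacuum/autoanalyze
```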
Hi, first time posting, hope I have included the relevant information.
I am trying to understand the performance of a query which is intended to
retrieve a subset of the following table:
Table "contracts.bis_person_alle_endringer"
-Original Message-
From: Tom Lane
Sent: Sunday, July 25, 2021 14:08
To: l...@laurent-hasson.com
Cc: Peter Geoghegan ; David Rowley ; Justin
Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
"l...@laurent-hasson.com" writes:
"l...@laurent-hasson.com" writes:
> As a user of PG, we have taken pride in the last few years in tuning the heck
> out of the system and getting great performance compared to alternatives like
> SQLServer. The customers we work with typically have data centers and are
>
On Thu, Jul 22, 2021 at 01:54:25PM +, Manoj Kumar wrote:
> Recently we have noticed that in one of our DB instances there is a potential
> delay in querying a table from java code. could you please check the attached
> log and help understand what is the problem and which direction should be
Dear Team,
Recently we have noticed that in one of our DB instances there is a potential
delay in querying a table from java code. could you please check the attached
log and help understand what is the problem and which direction should be look
into solving this delay of 4 odd mins ?
The
From: Vijaykumar Jain
Sent: Friday, July 23, 2021 10:45
To: l...@laurent-hasson.com
Cc: Justin Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Fri, 23 Jul 2021 at 03:06, l...@laurent-hasson.com
wrote:
> I am not sure I understand this parameter well enough but it’s with a
> default value right now of 1000. I have read Robert’s post (
> http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html)
> and could play with
will unlock this 2GB limit.
From: Vijaykumar Jain
Sent: Thursday, July 22, 2021 16:32
To: l...@laurent-hasson.com
Cc: Justin Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
Just asking, I may be completely wrong.
Is this query parallel safe? Can we force parallel workers, by setting a low
parallel_setup_cost or otherwise, to make use of Gather and Partial
HashAggregate nodes?
I am just assuming more workers doing things in parallel, would require
less disk spill
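One way to test that theory (session-level settings, for experimentation only; the query itself is the one under discussion):

```sql
-- Make parallel plans look cheap so the planner will choose them if the
-- query is parallel safe; then check the plan for Gather and
-- Partial HashAggregate nodes.
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;
SET max_parallel_workers_per_gather = 8;
EXPLAIN (ANALYZE) SELECT ...;   -- the query under test
```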
On 2021-Jul-22, l...@laurent-hasson.com wrote:
> Yes, agreed Peter... The "lower priority" issue was mentioned, but not
> in terms of the applicability of the fix overall. Personally, I would
> prefer going the size_t route vs int/long/int64 in C/C++/. Of course,
> as a user, I'd love a patch on
On Thu, Jul 22, 2021 at 14:28, Peter Geoghegan wrote:
> On Thu, Jul 22, 2021 at 10:11 AM Tom Lane wrote:
> > No, he already tried, upthread. The trouble is that he's on a Windows
> > machine, so get_hash_mem is quasi-artificially constraining the product
> > to 2GB. And he needs it
On Thu, Jul 22, 2021 at 10:35 AM Tom Lane wrote:
> Well, what we really ought to be using is size_t (a/k/a Size), at least
> for memory-space-related calculations. I don't have an opinion right
> now about what logtape.c ought to use. I do agree that avoiding "long"
> altogether would be a good
-Original Message-
From: Peter Geoghegan
Sent: Thursday, July 22, 2021 13:36
To: l...@laurent-hasson.com
Cc: Justin Pryzby ; Tom Lane ; David
Rowley ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Thu, Jul 22, 2021 at 10:33 AM l...@laurent-hasson.com
wrote:
> Damn... I know Windows is a lower priority, and this is yet another issue,
> but in Healthcare, Windows is so prevalent everywhere...
To be clear, I didn't actually say that. I said that it doesn't matter
either way, as far as
Peter Geoghegan writes:
> On Thu, Jul 22, 2021 at 10:11 AM Tom Lane wrote:
>> What I'm wondering about is whether it's worth putting in a solution
>> for this issue in isolation, or whether we ought to embark on the
>> long-ignored project of getting rid of use of "long" for any
>>
-Original Message-
From: Justin Pryzby
Sent: Thursday, July 22, 2021 13:29
To: l...@laurent-hasson.com
Cc: Tom Lane ; David Rowley ; Peter
Geoghegan ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Thu, Jul 22, 2021 at 05:26:26PM +, l...@laurent-hasson.com wrote:
> I tried this but not seeing max resident size data output.
Oh. Apparently, that's not supported under windows..
#if defined(HAVE_GETRUSAGE)
	appendStringInfo(&str,
					 "!\t%ld kB max resident size\n",
					 r.ru_maxrss);
#endif
On Thu, Jul 22, 2021 at 10:11 AM Tom Lane wrote:
> No, he already tried, upthread. The trouble is that he's on a Windows
> machine, so get_hash_mem is quasi-artificially constraining the product
> to 2GB. And he needs it to be a bit more than that. Whether the
> constraint is hitting at the
-Original Message-
From: Tom Lane
Sent: Thursday, July 22, 2021 12:42
To: Peter Geoghegan
Cc: David Rowley ; l...@laurent-hasson.com; Justin Pryzby
; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
Peter Geoghegan writes:
> On Thu, Jul
above_cost = '2e+08',
jit_inline_above_cost = '5e+08', jit_optimize_above_cost = '5e+08',
join_collapse_limit = '24', max_parallel_workers = '20',
max_parallel_workers_per_gather = '8', random_page_cost = '1.1', temp_buffers =
'4GB', work_mem = '2000MB'
Planning:
Buffers: shared hit=186 read=37
Peter Geoghegan writes:
> I also suspect that if Laurent set work_mem and/or hash_mem_multiplier
> *extremely* aggressively, then eventually the hash agg would be
> in-memory. And without actually using all that much memory.
No, he already tried, upthread. The trouble is that he's on a Windows
On Thu, Jul 22, 2021 at 9:53 AM Peter Geoghegan wrote:
> I suspect David's theory about hash_agg_set_limits()'s ngroup limit is
> correct. It certainly seems like a good starting point.
I also suspect that if Laurent set work_mem and/or hash_mem_multiplier
*extremely* aggressively, then
Justin Pryzby writes:
> Oh. So the problem seems to be that:
> 1) In v12, HashAgg now obeyes work_mem*hash_mem_multiplier;
> 2) Under windows, work_mem is limited to 2GB.
And more to the point, work_mem*hash_mem_multiplier is *also* limited
to 2GB. We didn't think that through very carefully.
machine had enough RAM. Now, the performance sucks and
> there is no knob you can turn to fix it. That's unacceptable in my book.
Oh! That makes way more sense.
I suspect David's theory about hash_agg_set_limits()'s ngroup limit is
correct. It certainly seems like a good starting point.
--
Peter Geoghegan
> MAX_KILOBYTES is the max_val for the work_mem GUC itself, and has been
> for many years.
Right. The point here is that before v13, hash aggregation was not
subject to the work_mem limit, nor any related limit. If you did an
aggregation requiring more than 2GB-plus-slop, it would work just fine
as long as your machine
On Thu, Jul 22, 2021 at 09:36:02AM -0700, Peter Geoghegan wrote:
> I don't see how it's possible for get_hash_mem() to be unable to
> return a hash_mem value that could be represented by work_mem
> directly. MAX_KILOBYTES is an annoyingly low limit on Windows, where
> sizeof(long) is 4. But that's
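Concretely, a sketch of what one would see on a Windows build, where MAX_KILOBYTES works out to INT_MAX/1024 = 2097151 kB, just under 2 GB:

```sql
SET work_mem = '2097151kB';   -- accepted: exactly at the MAX_KILOBYTES ceiling
SET work_mem = '3GB';         -- fails on Windows: 3145728 kB exceeds the cap
```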
On Thu, Jul 22, 2021 at 9:21 AM Tom Lane wrote:
> Yeah, I should have said "2GB plus palloc slop". It doesn't surprise
> me a bit that we seem to be eating another 20% on top of the nominal
> limit.
MAX_KILOBYTES is the max_val for the work_mem GUC itself, and has been
for many years. The
On Thu, Jul 22, 2021 at 04:30:00PM +, l...@laurent-hasson.com wrote:
> Hello Justin,
>
> > log_executor_stats=on; client_min_messages=debug;
>
> Would the results then come in EXPLAIN or would I need to pick something up
> from the logs?
If you're running with psql, and
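Something along these lines would surface the executor statistics in the psql session rather than only in the server log (a sketch; the sample query is illustrative):

```sql
SET log_executor_stats = on;
SET client_min_messages = debug;
-- Run the query; psql then prints a LOG block including lines such as
-- "!  ... kB max resident size" (on platforms that have getrusage()).
SELECT count(*) FROM assessmenticcqa_raw;
```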
-Original Message-
From: Justin Pryzby
Sent: Thursday, July 22, 2021 12:23
To: l...@laurent-hasson.com
Cc: Tom Lane ; David Rowley ; Peter
Geoghegan ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Thu, Jul 22, 2021 at 04:16:34PM +, l
I wrote:
> I think the right fix here is to remove the cap, which will require
> changing get_hash_mem to return double, and then maybe some cascading
> changes --- I've not looked at its callers.
Or, actually, returning size_t would likely make the most sense.
We'd fold the 1024L multiplier in
-Original Message-
From: David Rowley
Sent: Thursday, July 22, 2021 12:18
To: Peter Geoghegan
Cc: Tom Lane ; Jeff Davis ;
l...@laurent-hasson.com; Justin Pryzby ;
pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Fri, 23 Jul 2021 at 04:14
On Thu, Jul 22, 2021 at 04:16:34PM +, l...@laurent-hasson.com wrote:
> Is it fair then to deduce that the total memory usage would be 2,400,305kB +
> 126,560kB? Is this what under the covers V11 is consuming more or less?
It might be helpful to know how much RAM v11 is using.
Could you run
David Rowley writes:
> On Fri, 23 Jul 2021 at 03:56, Tom Lane wrote:
>> So basically, we now have a hard restriction that hashaggs can't use
>> more than INT_MAX kilobytes, or approximately 2.5GB, and this use case
>> is getting eaten alive by that restriction. Seems like we need to
>> do
-Original Message-
From: Peter Geoghegan
Sent: Thursday, July 22, 2021 12:14
To: Tom Lane
Cc: Jeff Davis ; l...@laurent-hasson.com; David Rowley
; Justin Pryzby ;
pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Thu, Jul 22, 2021 at 8:45 AM
On Fri, 23 Jul 2021 at 04:14, Peter Geoghegan wrote:
>
> On Thu, Jul 22, 2021 at 8:45 AM Tom Lane wrote:
> > That is ... weird. Maybe you have found a bug in the spill-to-disk logic;
> > it's quite new after all. Can you extract a self-contained test case that
> > behaves this way?
>
> I
-Original Message-
From: Tom Lane
Sent: Thursday, July 22, 2021 11:57
To: l...@laurent-hasson.com
Cc: David Rowley ; Peter Geoghegan ; Justin
Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
I wrote:
> "l...@laurent-ha
On Thu, Jul 22, 2021 at 8:45 AM Tom Lane wrote:
> That is ... weird. Maybe you have found a bug in the spill-to-disk logic;
> it's quite new after all. Can you extract a self-contained test case that
> behaves this way?
I wonder if this has something to do with the way that the input data
is
On Fri, 23 Jul 2021 at 03:56, Tom Lane wrote:
> So basically, we now have a hard restriction that hashaggs can't use
> more than INT_MAX kilobytes, or approximately 2.5GB, and this use case
> is getting eaten alive by that restriction. Seems like we need to
> do something about that.
Hmm, math
I wrote:
> "l...@laurent-hasson.com" writes:
>> It's still spilling to disk and seems to cap at 2.5GB of memory usage in
>> spite of configuration.
> That is ... weird.
Oh: see get_hash_mem:
	hash_mem = (double) work_mem * hash_mem_multiplier;

	/*
	 * guc.c enforces a MAX_KILOBYTES limitation on work_mem in order to
	 * avoid overflow, so clamp hash_mem to the same limit.
	 */
	hash_mem = Min(hash_mem, (double) MAX_KILOBYTES);
"l...@laurent-hasson.com" writes:
> So, I went possibly nuclear, and still no cigar. Something's not right.
> - hash_mem_multiplier = '10'
> - work_mem = '1GB'
> The results are
> Batches: 5 Memory Usage: 2,449,457kB Disk Usage: 105,936kB
> Execution Time: 1,837,126.766 ms
> It's
-Original Message-
From: l...@laurent-hasson.com
Sent: Thursday, July 22, 2021 09:37
To: David Rowley
Cc: Tom Lane ; Peter Geoghegan ; Justin
Pryzby ; pgsql-performa...@postgresql.org
Subject: RE: Big performance slowdown from 11.2 to 13.3
OK. Will do another round of testing.
-Original Message-
From: David Rowley
Sent: Thursday, July 22, 2021 00:44
To: l...@laurent-hasson.com
Cc: Tom Lane ; Peter Geoghegan ; Justin
Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Thu
On Thu, 22 Jul 2021 at 16:37, l...@laurent-hasson.com
wrote:
> Seems like no cigar ☹ See plan pasted below. I changed the conf as follows:
> - hash_mem_multiplier = '2'
> - work_mem = '1GB'
> Batches: 5 Memory Usage: 2400305kB Disk Usage: 126560kB
You might want to keep going
-Original Message-
From: l...@laurent-hasson.com
Sent: Wednesday, July 21, 2021 19:46
To: Tom Lane
Cc: Peter Geoghegan ; Justin Pryzby ;
pgsql-performa...@postgresql.org
Subject: RE: Big performance slowdown from 11.2 to 13.3
-Original Message-
From: Tom Lane
Sent
-Original Message-
From: Tom Lane
Sent: Wednesday, July 21, 2021 19:43
To: l...@laurent-hasson.com
Cc: Peter Geoghegan ; Justin Pryzby ;
pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
"l...@laurent-hasson.com" writes:
>
-Original Message-
From: Tom Lane
Sent: Wednesday, July 21, 2021 19:36
To: l...@laurent-hasson.com
Cc: Justin Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
"l...@laurent-hasson.com" writes:
> My apologies... I thought
"l...@laurent-hasson.com" writes:
> From: Peter Geoghegan
>> I imagine that this has something to do with the fact that the hash
>> aggregate spills to disk in Postgres 13.
> So how is this happening? I mean, it's the exact same query, looks like the
> same plan to me, it's the same data on
-Original Message-
From: Peter Geoghegan
Sent: Wednesday, July 21, 2021 19:34
To: l...@laurent-hasson.com
Cc: Justin Pryzby ; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Wed, Jul 21, 2021 at 4:19 PM l...@laurent-hasson.com
wrote
I don't know
of any reason that that wouldn't be a strict improvement, but if the
work_mem theory doesn't pan out then that's something that'd deserve
a closer look. Does marking the WITH as WITH MATERIALIZED change
anything about v13's performance?
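For reference, the experiment suggested here is a one-keyword change (the query shape below is hypothetical):

```sql
-- v12+ may inline a CTE into the main query; MATERIALIZED restores the
-- pre-v12 behavior of computing the CTE once into a tuplestore.
WITH raw AS MATERIALIZED (
    SELECT * FROM assessmenticcqa_raw
)
SELECT iccqar_ques_code, count(*)
FROM raw
GROUP BY iccqar_ques_code;
```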
regards, tom lane
On Wed, Jul 21, 2021 at 4:19 PM l...@laurent-hasson.com
wrote:
> As you can see, the V13.3 execution is about 10x slower.
>
> It may be hard for me to create a whole copy of the database on 11.12 and
> check that environment by itself. I'd want to do it on the same machine to
> control
From: Michael Lewis
Sent: Wednesday, July 21, 2021 18:12
To: l...@laurent-hasson.com
Cc: pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
-Original Message-
From: Justin Pryzby
Sent: Wednesday, July 21, 2021 15:15
To: l...@laurent-hasson.com
Cc: pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3
On Wed, Jul 21, 2021 at 06:50:58PM +, l...@laurent-hasson.com wrote:
> The pl
I'm not seeing the value of the CTE. Why not access assessmenticcqa_raw
directly in the main query and only do GROUP BY once? Do you have many
values in iccqar_ques_code which are not used in this query?
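The suggestion above, sketched with hypothetical column names — pivot directly off the raw table in a single aggregation pass using FILTER, with no CTE and only one GROUP BY:

```sql
SELECT iccqar_iccassmt_fk,
       MAX(iccqar_ans_val) FILTER (WHERE iccqar_ques_code = 'WOUND TYPE')  AS wound_type,
       MAX(iccqar_ans_val) FILTER (WHERE iccqar_ques_code = 'WOUND DEPTH') AS wound_depth
       -- ... one FILTER aggregate per pivoted question code (55 in the thread) ...
FROM assessmenticcqa_raw
GROUP BY iccqar_iccassmt_fk;
```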
On Wed, Jul 21, 2021 at 06:50:58PM +, l...@laurent-hasson.com wrote:
> The plans are pretty much identical too. I checked line by line and couldn't
> see anything much different (note that I have a view over this query). Here
> is the V13 version of the plan:
> I am out of my wits as to
Hello,
We have a data warehouse working on Postgres V11.2. We have a query that is
pretty beefy that has been taking under 5 min to run consistently every day for
about 2 years as part of a data warehouse ETL process. It's a pivot over 55
values on a table with some 15M rows. The total table
"Dirschel, Steve" writes:
> I have a sample query that is doing more work if some of the reads are
> physical reads and I'm trying to understand why. If you look at attached
> QueryWithPhyReads.txt it shows the query did Buffers: shared hit=171
> read=880. So it did 171 + 880 = 1051 total
New to Postgres, Oracle background. With Oracle the amount of work a query
does is tracked via logical reads. Oracle tracks logical and physical reads
differently than Postgres. With Oracle a physical read is always considered a
logical read. So if a query reads 5 blocks and all 5 are read
Thanks a lot.
On Mon, 19 Jul 2021, 22:18 Ninad Shah, wrote:
> As Thomas rightly pointed about the feasibility of benchmarking. You may
> still compare performance of queries on both Exadata as well as PostgreSQL.
> IMO, it may not be on par, but it must be acceptable.
>
> In t
As Thomas rightly pointed out regarding the feasibility of benchmarking, you
may still compare the performance of queries on both Exadata and PostgreSQL.
IMO it may not be on par, but it should be acceptable.
In the contemporary world, 60TB isn't really a huge database. So, I hardly
think you should
Thank you all for your swift response.
Thank you again.
Manish
On Mon, 19 Jul 2021, 15:39 Manish Lad, wrote:
> Dear all,
>>
> We are planning to migrate Oracle exadata database to postgresql and db
> size ranges from 1 tb to 60 TB.
>
> Will the PG support this with the
Manish Lad wrote on 19.07.2021 at 12:09:
> We are planning to migrate Oracle exadata database to postgresql and
> db size ranges from 1 tb to 60 TB.
>
> Will the PG support this with the performance matching to that of
> exadata applince? If anyone could point me in the right dire
Hi,
The question cannot be answered definitively, because in PostgreSQL
performance (response time of query execution) depends on:
1. Your disk/storage hardware. Performance can vary between SSD and HDD,
for example.
2. Your PostgreSQL configuration. In other words, configuration
On Mon, 2021-07-19 at 15:39 +0530, Manish Lad wrote:
> We are planning to migrate Oracle exadata database to postgresql and db size
> ranges from 1 tb to 60 TB.
>
> Will the PG support this with the performance matching to that of exadata
> applince?
> If anyone could poi
>
> Dear all,
>
We are planning to migrate an Oracle Exadata database to PostgreSQL; db
sizes range from 1 TB to 60 TB.
Will PG support this with performance matching that of the Exadata
appliance?
If anyone could point me in the right direction where I can get the
benchmarks.
On 7/1/21 10:25 PM, Ayub Khan wrote:
> Justin,
>
> Below is the stored procedure, is there any scope for improvement?
>
Hard to say, based on just the stored procedure source code. The queries
are not too complex, but we don't know which of them gets selected for
each cursor, and which of them
On Thu, 1 Jul 2021 at 08:56, Manuel Weitzman wrote:
> For each of these RestrictInfos there *could* be one cache miss on
> cached_scansel() that *could* force the planner to compute
> get_actual_variable_range() for the same variable (a.a) over and over,
> as mergejoinscansel() always computes
> On 30-06-2021, at 16:56, Manuel Weitzman wrote:
>
> One way in which I see possible to share this kind of information (of
> extremal values) across RestrictInfos is to store the known variable
> ranges in PlannerInfo (or within a member of such struct), which seems
> to be around everywhere
st execute the queries using JDBC (Java client)
> it's fast.
>
> Is there any setting which needs to be modified to improve the performance
> of cursors. Also facing slow response with reading blobs (images) from db.
> Not an ideal way for storing images in db but this is a leg
cedure,
> executing procedure which has cursor is slowing down the call. However if I
> do not use the cursor and just execute the queries using JDBC (Java client)
> it's fast.
>
> Is there any setting which needs to be modified to improve the performance
> of cursors. A
> 1. create_join_clause doesn't trouble to look for commuted
> equivalents, which perhaps is penny-wise and pound-foolish.
> The cost of re-deriving selectivity estimates could be way
> more than the cost of checking this.
Agreed.
> 2. Although these look like they ought to be equivalent to the
Manuel Weitzman writes:
> On 29-06-2021, at 15:43, Tom Lane wrote:
>> That seems a bit broken; a given WHERE clause should produce only one
>> RestrictInfo. Can you provide a more concrete example?
>> explain (analyze, buffers)
>> select * from a
>> join b b1 on (b1.a = a.a)
>> join b b2 on
> On 29-06-2021, at 15:43, Tom Lane wrote:
>
> Manuel Weitzman writes:
>>> On 20-06-2021, at 17:06, Tom Lane wrote:
>>> So ... the reason why there's not caching of get_actual_variable_range
>>> results already is that I'd supposed it wouldn't be necessary given
>>> the caching of selectivity
Manuel Weitzman writes:
>> On 20-06-2021, at 17:06, Tom Lane wrote:
>> So ... the reason why there's not caching of get_actual_variable_range
>> results already is that I'd supposed it wouldn't be necessary given
>> the caching of selectivity estimates that happens at the RestrictInfo
>> level.
> On 20-06-2021, at 17:06, Tom Lane wrote:
>
> So ... the reason why there's not caching of get_actual_variable_range
> results already is that I'd supposed it wouldn't be necessary given
> the caching of selectivity estimates that happens at the RestrictInfo
> level. I don't have any
Ayub Khan writes:
> Is there any setting which needs to be modified to improve the performance
> of cursors. Also facing slow response with reading blobs (images) from db.
> Not an ideal way for storing images in db but this is a legacy application
> and wanted to check if there a
t; it's fast.
Is the query slower, or is it slow to transfer tuples ?
I expect there would be a very high overhead if you read a large number of
tuples one at a time.
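A quick way to check whether per-row round trips are the cost (sketch; the cursor and query are illustrative):

```sql
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM some_table;
FETCH 1000 FROM c;   -- batched fetch: one round trip per 1000 rows
CLOSE c;
COMMIT;
```

From JDBC, the equivalent is calling Statement.setFetchSize(n) inside a transaction, which makes the PostgreSQL driver use a cursor with batched fetches instead of one row per round trip.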
> Is there any setting which needs to be modified to improve the performance
> of cursors. Also facing slow response with rea
the performance
of cursors. Also facing slow response with reading blobs (images) from db.
Not an ideal way for storing images in db but this is a legacy application
and wanted to check if there a quick tweak which can improve the
performance while reading blob data from db.
--Ayub
Ranier Vilela writes:
> 3. Avoid use of type *long*, it is very problematic with 64 bits.
> Windows 64 bits, long is 4 (four) bytes.
> Linux 64 bits, long is 8 (eight) bytes.
Agreed.
> 4. Avoid C99 style declarations
> for(unsigned long i = 0;)
> Prefer:
>size_t i;
>for(i = 0;)
>
On Sun, Jun 20, 2021 at 14:50, Manuel Weitzman <
manuelweitz...@gmail.com> wrote:
> Hello everyone,
>
> > Apparently, the planner isn't reusing the data boundaries across
> alternative
> > plans. It would be nicer if the planner remembered each column boundaries
> > for later reuse
Manuel Weitzman writes:
> I've written a very naive (and crappy) patch to show how adding
> memorization to get_actual_variable_range() could help the planner on
> scenarios with a big number of joins.
So ... the reason why there's not caching of get_actual_variable_range
results already is that
Hello everyone,
> Apparently, the planner isn't reusing the data boundaries across alternative
> plans. It would be nicer if the planner remembered each column boundaries
> for later reuse (within the same planner execution).
I've written a very naive (and crappy) patch to show how adding
> However, I'm skeptical that any problem actually remains in
> real-world use cases.
Hello Tom,
We also had some issues with planning and get_actual_variable_range(). We
actually found some interesting behaviour that probably requires an eye with
better expertise in how the planner works.
For
on its suitability for
> production.
>
> There are some differences reported here between lzo and zlib
> compression performance for Postgresql:
>
> https://sudonull.com/post/96976-PostgreSQL-and-btrfs-elephant-on-an-oil-diet
>
> zstd compression support for btrfs is repo
(In fact btrfs has helped us recover from some disk failures really
well.) While I run postgresql on my machine it is for light testing
purposes so I wouldn't want to comment on its suitability for
production.
There are some differences reported here between lzo and zlib
compression per
> PostgreSQL 13. Does
> anyone know if it would have a major performance impact?
> >
> > Is there some reason the question is specific to postgres13 , or did you
> just
> > say that because it's your development target for your project.
> >
> > I think it almost certainly depends mor
> > > I'm curious, really. I use btrfs as my filesystem on my home systems and
> > > am setting up a server as I near releasing my project. I planned to use
> > > btrfs on the server, but it got me thinking about PostgreSQL 13. Does
> > > anyone know if it wou
o use btrfs
> > on the server, but it got me thinking about PostgreSQL 13. Does anyone know
> > if it would have a major performance impact?
>
> This is a few years old, but Tomas Vondra did a presentation comparing major
> Linux file systems for PostgreSQL:
>
>