Fwd: Very slow query performance when using CTE

2025-04-04 Thread Chris Joysn
unfortunately that increased the query execution time by a factor of 8: https://explain.dalibo.com/plan/a6d2443d87fea1ee On Tue, 1 Apr 2025 at 16:28, Tom Lane wrote: > Chris Joysn writes: > > I have an issue when using CTEs. A query, which consists of multiple > CTEs, > > runs usually rather f

Re: Very slow query performance when using CTE

2025-04-03 Thread Chris Joysn
> CREATE STATISTICS st_simrun_component_metadata (dependencies) ON >> sim_run_id, key FROM sim_run_component_metadata; >> ANALYZE sim_run_component_metadata; >> >> When I run this query, no statistics are returned: >> >> SELECT m.* FROM pg_statistic_ext join pg_statistic_ext_data on (oid = >> stxoi
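A minimal sketch of the setup being discussed (table and statistics names taken from the thread). One possible reason the catalog join returns nothing is visibility: pg_statistic_ext_data is only readable by the table owner or a superuser, while the pg_stats_ext view applies the usual privilege rules.

```sql
-- Create functional-dependency statistics on the two correlated columns,
-- then analyze so the planner actually has data to use.
CREATE STATISTICS st_simrun_component_metadata (dependencies)
    ON sim_run_id, key FROM sim_run_component_metadata;
ANALYZE sim_run_component_metadata;

-- Inspect what was stored; dependency stats land in stxddependencies.
SELECT s.stxname, d.stxddependencies
FROM pg_statistic_ext s
JOIN pg_statistic_ext_data d ON d.stxoid = s.oid
WHERE s.stxname = 'st_simrun_component_metadata';
```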

Re: Very slow query performance when using CTE

2025-04-03 Thread Michael Christofides
> > CREATE STATISTICS st_simrun_component_metadata (dependencies) ON > sim_run_id, key FROM sim_run_component_metadata; > ANALYZE sim_run_component_metadata; > > When I run this query, no statistics are returned: > > SELECT m.* FROM pg_statistic_ext join pg_statistic_ext_data on (oid = > stxoid), >

Re: Very slow query performance when using CTE

2025-04-01 Thread Renan Alves Fonseca
Chris Joysn writes: > Hello, > I have an issue when using CTEs. A query, which consists of multiple CTEs, > runs usually rather fast (~5s on my > environment). But it turned out that using one CTE can lead to execution > times of up to one minute. > That CTE is used two times within the query.

Re: Very slow query performance when using CTE

2025-04-01 Thread Tom Lane
Chris Joysn writes: > I have an issue when using CTEs. A query, which consists of multiple CTEs, > runs usually rather fast (~5s on my environment). But it turned out that > using one CTE can lead to execution times of up to one minute. > That CTE is used two times within the query. Try labeling
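Tom's truncated advice ("Try labeling...") most likely refers to the CTE materialization keywords. Since PostgreSQL 12, a CTE referenced more than once is materialized by default; labeling it lets you override that. A hypothetical sketch (table and column names invented for illustration):

```sql
-- NOT MATERIALIZED inlines the CTE so the planner can push predicates
-- into it; MATERIALIZED forces computing it once and reusing the result.
WITH metadata AS NOT MATERIALIZED (
    SELECT sim_run_id, key, value
    FROM   sim_run_component_metadata
)
SELECT r.sim_run_id, m1.value, m2.value
FROM   sim_runs r
JOIN   metadata m1 ON m1.sim_run_id = r.sim_run_id AND m1.key = 'start'
JOIN   metadata m2 ON m2.sim_run_id = r.sim_run_id AND m2.key = 'end';
```

Which label wins depends on whether recomputing the CTE per reference is cheaper than materializing it and losing predicate pushdown; EXPLAIN ANALYZE on both variants is the only reliable way to tell.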

Very slow query performance when using CTE

2025-04-01 Thread Chris Joysn
Hello, I have an issue when using CTEs. A query, which consists of multiple CTEs, runs usually rather fast (~5s on my environment). But it turned out that using one CTE can lead to execution times of up to one minute. That CTE is used two times within the query. In the CTE there are 2600 rows, comp

Re: Slow query in table where many rows were deleted. VACUUM FULL fixes it

2024-01-31 Thread Divya Sharma
Hi Pavlos This is my understanding of why you were not able to run the query fast enough after the vacuum analyze. This is possibly what would have happened: 1. The relation has 5 million expired URLs and 5 thousand non-expired URLs 2. Assuming that the table only has 5 million and 5 th

Re: Slow query in table where many rows were deleted. VACUUM FULL fixes it

2024-01-30 Thread David Rowley
On Wed, 31 Jan 2024 at 09:09, Philip Semanchuk wrote: > So in your case those 5m rows that you deleted were probably still clogging > up your table until you ran VACUUM FULL. It seems more likely to me that the VACUUM removed the rows and just left empty pages in the table. Since there's no ind

Re: Slow query in table where many rows were deleted. VACUUM FULL fixes it

2024-01-30 Thread Philip Semanchuk
> On Jan 30, 2024, at 4:40 AM, Pavlos Kallis wrote: > > Shouldn't VACUUM ANALYZE reclaim the disk space? Hi Pavlos, The short answer to this is "no". That's an important difference between VACUUM (also known as "plain" VACUUM) and VACUUM FULL. In some special cases plain VACUUM can reclaim
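The difference can be observed directly on the thread's table (a sketch; sizes shown are whatever your instance reports, and VACUUM FULL takes an exclusive lock while it rewrites the table):

```sql
-- Plain VACUUM marks dead space as reusable but rarely shrinks the file;
-- VACUUM FULL rewrites the table into a new, compact file.
SELECT pg_size_pretty(pg_table_size('public.shortened_url'));

VACUUM (VERBOSE) public.shortened_url;   -- space reclaimed internally only
VACUUM FULL public.shortened_url;        -- file actually shrinks

SELECT pg_size_pretty(pg_table_size('public.shortened_url'));
```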

Re: Slow query in table where many rows were deleted. VACUUM FULL fixes it

2024-01-30 Thread Laurenz Albe
On Tue, 2024-01-30 at 11:40 +0200, Pavlos Kallis wrote: > I have the following table: > > CREATE TABLE IF NOT EXISTS public.shortened_url > ( >     id character varying(12) COLLATE pg_catalog."default" NOT NULL, >     created_at timestamp without time zone, >     expires_at timestamp without time

Slow query in table where many rows were deleted. VACUUM FULL fixes it

2024-01-30 Thread Pavlos Kallis
Hi all, I have the following table: CREATE TABLE IF NOT EXISTS public.shortened_url ( id character varying(12) COLLATE pg_catalog."default" NOT NULL, created_at timestamp without time zone, expires_at timestamp without time zone, url text COLLATE pg_catalog."default" NOT NULL,

Re: Slow query, possibly not using index

2023-08-28 Thread Les
> > > > > More important question is, how can I find out why the index was not > auto vacuumed. > > You should have a look at pg_stat_user_tables. It'll let you know if > the table is being autovacuumed and how often. If you're concerned > about autovacuum not running properly, then you might wan

Re: Slow query, possibly not using index

2023-08-28 Thread David Rowley
On Mon, 28 Aug 2023 at 19:21, Les wrote: > More important question is, how can I find out why the index was not auto > vacuumed. You should have a look at pg_stat_user_tables. It'll let you know if the table is being autovacuumed and how often. If you're concerned about autovacuum not running
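The check David suggests looks roughly like this (using the `block` table from this thread as the example):

```sql
-- Per-table autovacuum/autoanalyze history and current dead-tuple count.
SELECT relname,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze,
       n_live_tup, n_dead_tup,
       autovacuum_count, autoanalyze_count
FROM pg_stat_user_tables
WHERE relname = 'block';
```

A NULL last_autovacuum with a large n_dead_tup is the classic sign that autovacuum is not keeping up (or is being cancelled by lock conflicts).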

Re: Slow query, possibly not using index

2023-08-28 Thread Pavel Stehule
On Mon, 28 Aug 2023 at 13:00, Les wrote: > > >> >> =# select * from pgstatindex('media.idx_block_unused'); >> version | tree_level | index_size | root_block_no | internal_pages | >> leaf_pages | empty_pages | deleted_pages | avg_leaf_density | >> leaf_fragmentation

Re: Slow query, possibly not using index

2023-08-28 Thread Les
> > > =# select * from pgstatindex('media.idx_block_unused'); > version | tree_level | index_size | root_block_no | internal_pages | > leaf_pages | empty_pages | deleted_pages | avg_leaf_density | > leaf_fragmentation

Re: Slow query, possibly not using index

2023-08-28 Thread Les
> >> > All right, I started pgstattuple() and I'll also do pgstatindex(), but it > takes a while. I'll get back with the results. > =# select * from pgstattuple('media.block'); table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | f
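For reference, the two inspection calls being run here come from the pgstattuple contrib module, which must be installed first:

```sql
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- Table-level bloat: dead_tuple_percent and free_percent are the key columns.
SELECT * FROM pgstattuple('media.block');

-- B-tree health: low avg_leaf_density and many deleted_pages suggest bloat.
SELECT * FROM pgstatindex('media.idx_block_unused');
```

Both functions scan the whole relation, which is why they "take a while" on a large table.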

Re: Slow query, possibly not using index

2023-08-27 Thread Les
> > > > > I'm aware of the problems with random UUID values. I was using this > > function to create ulids from the beginning: > > Oh, well that would have been useful information to provide at the > outset. I'm sorry, I left this out. > Now that we know the index order is correlated with creatio

Re: Slow query, possibly not using index

2023-08-27 Thread Tom Lane
Les writes: >>> If I try to select a single unused block this way: >>> explain analyze select id from media.block b where nrefs =0 limit 1 >>> then it runs for more than 10 minutes (I'm not sure how long, I cancelled >>> the query after 10 minutes). >> You might think that even so, it shouldn't t

Re: Slow query, possibly not using index

2023-08-27 Thread Wael Khobalatte
> Nobody ever deleted anything from this table. Since it was created, this has been a write-only table. Does write-only include updates? That would create the dead rows Tom is referring to. > I believe it is not actually using the index, because reading a single (random?) entry from an index shou

Re: Slow query, possibly not using index

2023-08-27 Thread Les
> > If I try to select a single unused block this way: > > explain analyze select id from media.block b where nrefs =0 limit 1 > > then it runs for more than 10 minutes (I'm not sure how long, I cancelled > > the query after 10 minutes). > > Are you sure it isn't blocked on a lock? > Yes, I'm sure

Re: Slow query, possibly not using index

2023-08-27 Thread Tom Lane
Les writes: > If I try to select a single unused block this way: > explain analyze select id from media.block b where nrefs =0 limit 1 > then it runs for more than 10 minutes (I'm not sure how long, I cancelled > the query after 10 minutes). Are you sure it isn't blocked on a lock? Another theor

Slow query, possibly not using index

2023-08-27 Thread Les
I have this table: CREATE TABLE media.block ( id uuid NOT NULL, "size" int8 NOT NULL, nrefs int8 NOT NULL DEFAULT 0, block bytea NOT NULL, hs256 bytea NOT NULL, CONSTRAINT block_pkey PRIMARY KEY (id), CONSTRAINT chk_nrefs CHECK ((nrefs >= 0)) ) WITH ( toast_tuple_target=8160 ) TABLESPA
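The slow query in this thread hunts for one row with nrefs = 0. One common remedy for that shape of query, sketched here as an assumption (a later message does mention an index named idx_block_unused, but the exact definition is not shown), is a partial index that contains only the rows of interest:

```sql
-- A partial index holding only unused blocks: "find one free block"
-- then touches a tiny index no matter how large the table grows.
CREATE INDEX idx_block_unused ON media.block (id) WHERE nrefs = 0;

EXPLAIN ANALYZE
SELECT id FROM media.block WHERE nrefs = 0 LIMIT 1;
```

Note that a partial index still accumulates dead entries when rows flip in and out of the nrefs = 0 state, so it depends on autovacuum just as much as the discussion above suggests.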

Re: slow query to improve performace

2022-02-27 Thread Jeff Janes
On Fri, Feb 25, 2022 at 3:18 PM Ayub Khan wrote: > Hi, > > Could someone verify the performance of the attached query and > suggest some steps to improve it? This query is created as a view, which > is used to get the aggregates of orders based on their current status. I don't see ho

Re: slow query to improve performace

2022-02-25 Thread Justin Pryzby
Please provide some more information, like your postgres version and settings. Some relevant things are included here. https://wiki.postgresql.org/wiki/Slow_Query_Questions -- Justin
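The wiki page Justin links asks for a standard minimum of information; gathering it looks roughly like this (a sketch, not an exhaustive checklist):

```sql
-- Server version and any settings changed from their defaults.
SELECT version();

SELECT name, setting
FROM pg_settings
WHERE source <> 'default';

-- And for the query itself: run it under
--   EXPLAIN (ANALYZE, BUFFERS) <query>
-- and post the full output, ideally as text rather than a screenshot.
```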

slow query to improve performace

2022-02-25 Thread Ayub Khan
Hi, Could someone verify the performance of the attached query and suggest some steps to improve it? This query is created as a view, which is used to get the aggregates of orders based on their current status. Thanks HashAggregate (cost=334063.59..334064.17 rows=58 width=213) (actual

Re: Terribly slow query with very good plan?

2022-02-05 Thread Vijaykumar Jain
On Fri, 4 Feb 2022 at 14:41, Les wrote: > Hello, > > The "relpath" field contains the path of the file/folder. For example: > "/folder1/folder2/folder3/filename4.ext5". The relpath field is managed by > triggers. There are about 1M rows for files and 600K folder rows in the > database. The files

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
> > That may be because it's expecting to get 88290 rows from the > sequential scan, and the "limit 1" means it expects sequential scan to > be fast because on average it will only need to scan 1/88290 of the > table before it finds a matching row, then it can stop. > We are looking for a single r

Re: Terribly slow query with very good plan?

2022-02-04 Thread Thomas Kellerer
Les schrieb am 04.02.2022 um 10:11: > My task is to write a query that tells if a folder has any active file inside > it - directly or in subfolders. Here is the query for that: > > EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) > > select id, title, > (exists (select f2.id from media.

Re: Terribly slow query with very good plan?

2022-02-04 Thread Nick Cleaton
On Fri, 4 Feb 2022 at 14:07, Les wrote: > > > >> > Slow >> >> What about this: >> >> fi.relpath between ('A' collate "C") and ('A'||chr(255) collate "C") > > > It uses index scan. > Although the same with 'Természettudomány' uses seq scan: > > > EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) > select

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
> Slow > > What about this: > > fi.relpath between ('A' collate "C") and ('A'||chr(255) collate "C") > It uses index scan. EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) select fi.id from media.oo_file fi where fi.is_active and fi.relpath between ('A' collate "C") and ('A'||chr(255) collate "C") lim

Re: Terribly slow query with very good plan?

2022-02-04 Thread Nick Cleaton
On Fri, 4 Feb 2022 at 13:21, Les wrote: > >> What if you try applying the C collation to the values from the table: >> >> where fi.is_active and fi.relpath collate "C" ^@ 'A' > > > Slow What about this: fi.relpath between ('A' collate "C") and ('A'||chr(255) collate "C")
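Nick's suggestion rewrites the prefix match as an explicit range, evaluated under the C collation so the btree sort order matches byte order. Assembled into the thread's query, it looks like this:

```sql
-- Everything that sorts >= 'A' and <= 'A' followed by the highest
-- single-byte character, under the C (byte-order) collation: this is
-- exactly the set of strings with prefix 'A', expressed as a range
-- the btree index can scan.
SELECT fi.id
FROM media.oo_file fi
WHERE fi.is_active
  AND fi.relpath BETWEEN ('A' COLLATE "C") AND ('A' || chr(255) COLLATE "C")
LIMIT 1;
```

The later messages show this works for 'A' but not for 'Természettudomány'; a caveat with this hand-rolled range is that chr(255) is not a reliable upper bound for multi-byte UTF-8 prefixes, which is one plausible reason the behavior differs by letter.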

Re: Terribly slow query with very good plan?

2022-02-04 Thread Ninad Shah
Hi Les, I have reviewed the whole thread, and I do not see usage of GiST or GIN indexes. Have you tried using GiST or GIN indexes instead of a normal B-tree? B-trees are a good option when your search is simple (e.g. =, >, <). The operators you are using are "like" or "^@", which fall into a full-

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
> > > > It does not help. > > What if you try applying the C collation to the values from the table: > > where fi.is_active and fi.relpath collate "C" ^@ 'A' > Slow EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) select fi.id from media.oo_file fi where fi.is_active and fi.relpath collate "C" ^@ 'A'

Re: Terribly slow query with very good plan?

2022-02-04 Thread Nick Cleaton
On Fri, 4 Feb 2022 at 13:07, Les wrote: > >> >> > PostgreSQL uses seq scan for O, F, N, T letters, but it uses index scan >> > for A, I, C letters (with the "like" query). >> >> That's interesting. >> >> Does it help if you create an additional index on relpath with the >> text_pattern_ops modifi

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
> > PostgreSQL uses seq scan for O, F, N, T letters, but it uses index scan > for A, I, C letters (with the "like" query). > > That's interesting. > > Does it help if you create an additional index on relpath with the > text_pattern_ops modifier, e.g. > > CREATE INDEX ... USING btree (relpath text_

Re: Terribly slow query with very good plan?

2022-02-04 Thread Nick Cleaton
On Fri, 4 Feb 2022 at 12:27, Les wrote: > PostgreSQL uses seq scan for O, F, N, T letters, but it uses index scan for > A, I, C letters (with the "like" query). That's interesting. Does it help if you create an additional index on relpath with the text_pattern_ops modifier, e.g. CREATE INDEX
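The index Nick describes, completed into a full statement (index name is illustrative):

```sql
-- text_pattern_ops makes the index sort character-by-character regardless
-- of the database collation, which lets the planner turn
-- LIKE 'prefix%' into an index range scan.
CREATE INDEX oo_file_relpath_pattern_idx
    ON media.oo_file USING btree (relpath text_pattern_ops);

EXPLAIN
SELECT id FROM media.oo_file WHERE relpath LIKE 'Felhasználók%';
```

Note a text_pattern_ops index supports LIKE-prefix and equality searches but not ordinary <, >, ORDER BY comparisons; those still need a regular btree.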

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
Nick Cleaton wrote (Fri, 4 Feb 2022 at 11:57): > > With the ^@ operator, my guess is that because the planner knows > nothing about the folder name value it could be the empty string, > which would be a prefix of everything. > I think I could narrow down the problem to the simplest q

Re: Terribly slow query with very good plan?

2022-02-04 Thread Nick Cleaton
On Fri, 4 Feb 2022 at 10:09, Les wrote: > > Oh I see, the query planner does not know that there will be no % characters > in file and folder names. > > But what is the solution then? It just seems wrong that I can speed up a > query 1000 times by replacing it with a nested loop in a pl/sql func

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
> > >> >> First of all, it CANNOT start with '%'. This is a fact and this fact can >> be determined by analyzing the query. Something that the query planner >> should do, right? >> >> Second argument: the same query is also slow with the ^@ operator... >> > > Oh I see, the query planner does not kn

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
> In the fast case the 'Felhasználók%' part is known at query planning >> time, so it can be a prefix search. >> >> In the slow case, the planner doesn't know what that value will be, it >> could be something that starts with '%' for example. >> >> > First of all, it CANNOT start with '%'. This is

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
Nick Cleaton wrote (Fri, 4 Feb 2022 at 11:00): > > In the fast case the 'Felhasználók%' part is known at query planning > time, so it can be a prefix search. > > In the slow case, the planner doesn't know what that value will be, it > could be something that starts with '%' for exampl

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
… shared hit=4 | Planning Time: 0.530 ms | Execution Time: 0.055 ms | I would expect the same for the originally slow query with the has_file column, but it does not happen. :-(

Re: Terribly slow query with very good plan?

2022-02-04 Thread Nick Cleaton
On Fri, 4 Feb 2022 at 09:11, Les wrote: | > -> Index Only Scan using oo_file_idx_relpath on media.oo_file f2 > (cost=0.55..108499.27 rows=5381 width=0) (actual time=564.756..564.756 rows=0 > loops=45)| > Filter:

Re: Terribly slow query with very good plan?

2022-02-04 Thread Les
Laurenz Albe wrote (Fri, 4 Feb 2022 at 10:18): > > It also returns 45 rows, but in 25 seconds, which is unacceptable. > > You should create an index that supports LIKE; for example > > CREATE INDEX ON media.oo_file (relp

Re: Terribly slow query with very good plan?

2022-02-04 Thread Pavel Stehule
On Fri, 4 Feb 2022 at 10:11, Les wrote: > Hello, > > I have a table that contains folders, and another one that contains files. > > Here are the table definitions. I have removed most of the columns because > they are not important for this question. (There are lots of columns.) > > CREATE T

Re: Terribly slow query with very good plan?

2022-02-04 Thread Laurenz Albe
On Fri, 2022-02-04 at 10:11 +0100, Les wrote: > EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) > > select id, title, >  (exists (select f2.id from media.oo_file f2 where f2.relpath like f.relpath > || '%')) as has_file > from media.oo_folder f where f.parent_id is null > > QUERY PLAN                

Terribly slow query with very good plan?

2022-02-04 Thread Les
Hello, I have a table that contains folders, and another one that contains files. Here are the table definitions. I have removed most of the columns because they are not important for this question. (There are lots of columns.) CREATE TABLE media.oo_folder ( id int8 NOT NULL, is_active bool NOT

Re: Slow query fixed by replacing equality with a nested query

2022-01-24 Thread Michael Lewis
On Mon, Jan 24, 2022 at 6:22 AM Valentin Janeiko wrote: > I have rewritten the query using JOINs. I had to make one of them a > FULL JOIN, but otherwise JOINs seem like a good idea. > I have added the new query to the (same) gist: > > https://gist.github.com/valeneiko/89f8cbe26db7ca2651b47524462b

Re: Slow query fixed by replacing equality with a nested query

2022-01-24 Thread Michael Lewis
On Fri, Jan 21, 2022 at 4:37 AM wrote: > I have done a few simple experiments in the past comparing CTEs like this > to JOINS, but the resultant query plans were the same. CTEs seemed easier > to follow when troubleshooting issues, so I left them as such. Do JOINs > become better than CTEs at a c

Re: Slow query fixed by replacing equality with a nested query

2022-01-24 Thread Valentin Janeiko
I have rewritten the query using JOINs. I had to make one of them a FULL JOIN, but otherwise JOINs seem like a good idea. I have added the new query to the (same) gist: https://gist.github.com/valeneiko/89f8cbe26db7ca2651b47524462b5d18#file-queryoptimized-sql The query plan is much better with just

RE: Slow query fixed by replacing equality with a nested query

2022-01-21 Thread val.janeiko
My mistake. I have updated the query in the gist: cte1 should have been referenced in cte2. The query plans are correct. It was just the query in the gist that was incorrect (I was just verifying cte1 was the culprit – without it the query is fast too). This SQL query is a result of transl

Re: Slow query fixed by replacing equality with a nested query

2022-01-20 Thread Michael Lewis
I don't see any reference to cte1. Is that expected? I'm unclear why these sets are not just inner join'd on resource_surrogate_id. It seems like that column it is being selected as Sid1 in each CTE, and then the next one does the below. Why? where resource_surrogate_id IN (SELECT Sid1 FROM cte_p

Slow query fixed by replacing equality with a nested query

2022-01-20 Thread Valentin Janeiko
Hi everyone, I have a SELECT query that uses a chain of CTEs (4) that is slow to run on a large database. But if I change a where clause in one of the small CTEs from an equality to an equivalent nested IN query, then the query becomes fast. Looking at the query plan I can see that after the chang

Re: Postgres chooses slow query plan from time to time

2021-09-16 Thread Kristjan Mustkivi
Understood. Thank you so much for looking into this! Best regards, Kristjan On Wed, Sep 15, 2021 at 5:34 PM Tom Lane wrote: > > Kristjan Mustkivi writes: > > On Wed, Sep 15, 2021 at 3:16 PM Tom Lane wrote: > >> Note the lack of any visible cast on the varchar column, in each one of > >> thes

Re: Postgres chooses slow query plan from time to time

2021-09-15 Thread Tom Lane
Kristjan Mustkivi writes: > On Wed, Sep 15, 2021 at 3:16 PM Tom Lane wrote: >> Note the lack of any visible cast on the varchar column, in each one of >> these queries, even where I tried to force one to appear. There is >> something happening in your database that is not happening in mine. > T

Re: Postgres chooses slow query plan from time to time

2021-09-15 Thread Kristjan Mustkivi
On Wed, Sep 15, 2021 at 3:16 PM Tom Lane wrote: > Note the lack of any visible cast on the varchar column, in each one of > these queries, even where I tried to force one to appear. There is > something happening in your database that is not happening in mine. > > My mind is now running to the p

Re: Postgres chooses slow query plan from time to time

2021-09-15 Thread Tom Lane
Kristjan Mustkivi writes: > Both are of type varchar(30). Ah, right, you showed that back at the top of the thread. > So is this something odd: Filter: (((product_code)::text = ($1)::text) > AND ((balance_type)::text = ($4)::text)) ? Yes, that is very darn odd. When I try this I get: regressi

Re: Postgres chooses slow query plan from time to time

2021-09-14 Thread Kristjan Mustkivi
Hello! Both are of type varchar(30). So is this something odd: Filter: (((product_code)::text = ($1)::text) AND ((balance_type)::text = ($4)::text)) ? But why does it do the type-cast if both product_code and balance_type are of type text (although with constraint 30) and the values are also of

Re: Postgres chooses slow query plan from time to time

2021-09-14 Thread Tom Lane
Kristjan Mustkivi writes: >>> Filter: (((product_code)::text = ($1)::text) AND >>> ((balance_type)::text = ($4)::text)) > But the Primary Key is defined as btree (cage_code, cage_player_id, > product_code, balance_type, version) so this should be exactly that > (apart from the extra "version" col

Re: Postgres chooses slow query plan from time to time

2021-09-14 Thread Kristjan Mustkivi
On Tue, Sep 14, 2021 at 5:15 PM Tom Lane wrote: > > Kristjan Mustkivi writes: > > -> Index Scan Backward using player_balance_history_idx2 on > > mytable pbh (cost=0.70..21639.94 rows=3885 width=66) (actual > > time=5934.153..5934.153 rows=1 loops=1) > > Index Cond: ((

Re: Postgres chooses slow query plan from time to time

2021-09-14 Thread Tom Lane
Kristjan Mustkivi writes: > -> Index Scan Backward using player_balance_history_idx2 on > mytable pbh (cost=0.70..21639.94 rows=3885 width=66) (actual > time=5934.153..5934.153 rows=1 loops=1) > Index Cond: ((cage_code = $3) AND (cage_player_id = > $2) AND (modified_tim

Re: Postgres chooses slow query plan from time to time

2021-09-14 Thread Kristjan Mustkivi
I am very sorry, I indeed copy-pasted an incomplete plan. Here it is in full: 2021-09-14 06:55:33 UTC, pid=27576 db=mydb, usr=myuser, client=ip, app=PostgreSQL JDBC Driver, line=55 LOG: duration: 5934.165 ms plan: Query Text: SELECT * FROM myschema.mytable pbh WHERE pbh.product_code =

Re: Postgres chooses slow query plan from time to time

2021-09-14 Thread Jeff Janes
On Tue, Sep 14, 2021 at 3:55 AM Kristjan Mustkivi wrote: > Hello Tomas, > > The auto explain analyze caught this: > > 2021-09-14 06:55:33 UTC, pid=12345 db=mydb, usr=myuser, client=ip, > app=PostgreSQL JDBC Driver, line=55 LOG: duration: 5934.165 ms plan: > Query Text: SELECT * FROM mysche

Re: Postgres chooses slow query plan from time to time

2021-09-14 Thread Laurenz Albe
On Tue, 2021-09-14 at 10:55 +0300, Kristjan Mustkivi wrote: > 2021-09-14 06:55:33 UTC, pid=12345  db=mydb, usr=myuser, client=ip, > app=PostgreSQL JDBC Driver, line=55 LOG:  duration: 5934.165 ms  plan: >   Query Text: SELECT *   FROM myschema.mytable pbh WHERE > pbh.product_code = $1   AND pbh.cag

Re: Postgres chooses slow query plan from time to time

2021-09-14 Thread Kristjan Mustkivi
Hi Jeff, The specialized index is present due to some other queries and the index is used frequently (according to the statistics). I do agree that in this particular case the index btree (cage_code, cage_player_id, product_code, balance_type, modified_time) would solve the problem but at the mome

Re: Postgres chooses slow query plan from time to time

2021-09-14 Thread Kristjan Mustkivi
Hello Tomas, The auto explain analyze caught this: 2021-09-14 06:55:33 UTC, pid=12345 db=mydb, usr=myuser, client=ip, app=PostgreSQL JDBC Driver, line=55 LOG: duration: 5934.165 ms plan: Query Text: SELECT * FROM myschema.mytable pbh WHERE pbh.product_code = $1 AND pbh.cage_player_id = $

Re: Postgres chooses slow query plan from time to time

2021-09-13 Thread Jeff Janes
On Mon, Sep 13, 2021 at 9:25 AM Kristjan Mustkivi wrote: > > I have caught this with AUTOEXPLAIN: > > Index Cond: ((cage_code = $3) AND (cage_player_id = $2) AND > (modified_time < $5)) > Filter: (((product_code)::text = ($1)::text) AND > ((balance_type)::text = ($4)::text)) >

Re: Postgres chooses slow query plan from time to time

2021-09-13 Thread Jeff Janes
On Mon, Sep 13, 2021 at 9:25 AM Kristjan Mustkivi wrote: > SELECT > * > FROM > myschema.mytable pbh > WHERE > pbh.product_code = $1 > AND pbh.cage_player_id = $2 > AND pbh.cage_code = $3 > AND balance_type = $4 > AND pbh.modified_time < $5 > ORDER BY > pbh.modifie

Re: Postgres chooses slow query plan from time to time

2021-09-13 Thread Justin Pryzby
On Mon, Sep 13, 2021 at 08:19:40AM -0600, Michael Lewis wrote: > Autovacuum will only run for freezing, right? Insert only tables don't get > autovacuumed/analyzed until PG13 if I remember right. Tomas is talking about autovacuum running *analyze*, not vacuum. It runs for analyze, except on parti
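As Justin notes, autoanalyze has always been insert-driven; what PostgreSQL 13 added is insert-driven *vacuum*. The relevant knobs, sketched here for the thread's (renamed) table:

```sql
-- PG 13+: vacuum is triggered on insert-only tables once inserts exceed
--   autovacuum_vacuum_insert_threshold
--     + autovacuum_vacuum_insert_scale_factor * reltuples.
SHOW autovacuum_vacuum_insert_threshold;      -- default 1000
SHOW autovacuum_vacuum_insert_scale_factor;   -- default 0.2

-- Per-table override to vacuum an insert-heavy table more aggressively.
ALTER TABLE myschema.mytable
    SET (autovacuum_vacuum_insert_scale_factor = 0.05);
```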

Re: Postgres chooses slow query plan from time to time

2021-09-13 Thread Michael Lewis
Autovacuum will only run for freezing, right? Insert only tables don't get autovacuumed/analyzed until PG13 if I remember right.

Re: Postgres chooses slow query plan from time to time

2021-09-13 Thread Tomas Vondra
On 9/13/21 3:24 PM, Kristjan Mustkivi wrote: > Dear community, > > I have a query that most of the time gets executed in a few > milliseconds yet occasionally takes ~20+ seconds. The difference, as > far as I am able to tell, comes whether it uses the table Primary Key > (fast) or an additional in

Postgres chooses slow query plan from time to time

2021-09-13 Thread Kristjan Mustkivi
Dear community, I have a query that most of the time gets executed in a few milliseconds yet occasionally takes ~20+ seconds. The difference, as far as I am able to tell, comes whether it uses the table Primary Key (fast) or an additional index with smaller size. The table in question is INSERT ON
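The "fast most of the time, occasionally slow" pattern with $n parameters in the plan is characteristic of the plan cache switching from custom to generic plans after the fifth execution of a prepared statement. A sketch of how to reproduce and work around it (parameter types are assumptions; the thread does not show the full DDL):

```sql
-- After five executions, EXPLAIN EXECUTE may show a generic plan
-- with $1..$5 instead of literal values.
PREPARE q (text, int, int, text, timestamptz) AS
    SELECT *
    FROM myschema.mytable pbh
    WHERE pbh.product_code   = $1
      AND pbh.cage_player_id = $2
      AND pbh.cage_code      = $3
      AND balance_type       = $4
      AND pbh.modified_time  < $5
    ORDER BY pbh.modified_time DESC
    LIMIT 1;

-- PG 12+: rule the generic plan out entirely for this session.
SET plan_cache_mode = force_custom_plan;
```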

Re: Slow query because lexeme index not used

2021-08-09 Thread Alex
> Could you show the table stats for product.id? In particular its "correlation". frac_mcv | tablename | attname | inherited | null_frac | n_distinct | n_mcv | n_hist | correlation

Re: Slow query because lexeme index not used

2021-08-07 Thread Justin Pryzby
On Sat, Aug 07, 2021 at 07:35:25PM +, Alex wrote: > Table "product" has a GIN index on "lexeme" column (tsvector) that is not > used. > > Query that doesn't use lexeme idx:  https://explain.dalibo.com/plan/BlB#plan, > ~8s, ~60.000 blocks needed > > Query forced to use lexeme idx: https://ex

Slow query because lexeme index not used

2021-08-07 Thread Alex
Table "product" has a GIN index on "lexeme" column (tsvector) that is not used. Query that doesn't use lexeme idx:  https://explain.dalibo.com/plan/BlB#plan, ~8s, ~60.000 blocks needed Query forced to use lexeme idx: https://explain.dalibo.com/plan/i52, ~800ms (10x less), ~15.000 blocks needed
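A sketch of the setup under discussion (index name and search term are illustrative): the usual recipe is a GIN index over the tsvector column, a query using the @@ operator so the planner can match it, and, purely for experimentation, disabling seq scans to see what cost the planner assigns the index path.

```sql
CREATE INDEX product_lexeme_idx ON product USING gin (lexeme);

EXPLAIN (ANALYZE, BUFFERS)
SELECT id FROM product WHERE lexeme @@ to_tsquery('simple', 'phone');

-- Diagnostic only, never for production: force the planner away from
-- the seq scan to compare the two plans' estimated vs. actual costs.
SET enable_seqscan = off;
```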

Re: slow query

2021-06-09 Thread Ayub Khan
Below is the test setup: JMeter --> (load-balanced Tomcat on EC2 instances) --> RDS read replicas. All of these are running on different EC2 instances in the AWS cloud, in the same region. On Tue, 8 Jun 2021, 19:03 Ayub Khan, wrote: > > I checked all the indexes are defined on the tables however the query

Re: slow query

2021-06-09 Thread Jeff Janes
On Tue, Jun 8, 2021 at 12:32 PM Ayub Khan wrote: > In AWS RDS performance insights the client writes is high and the api > which receives data on the mobile side is slow during load test. > That indicates a client or network problem. Jeff

Re: slow query

2021-06-08 Thread Ayub Khan
Below is the function definition of is_menu_item_available; for each item, based on the current day and time, it returns whether the item is available or not. The same API works fine on Oracle; I am seeing this slowness after migrating the queries to PostgreSQL RDS on AWS. CREATE OR REPLACE FUNCTION is_menu_item_availab

Re: slow query

2021-06-08 Thread Tom Lane
Ayub Khan writes: > I checked all the indexes are defined on the tables however the query seems > slow, below is the plan. Can any one give any pointers to verify ? You might try to do something about the poor selectivity estimate here: > -> Index Scan usin

Re: slow query

2021-06-08 Thread Ayub Khan
In AWS RDS performance insights the client writes is high and the api which receives data on the mobile side is slow during load test. On Tue, 8 Jun 2021, 19:03 Ayub Khan, wrote: > > I checked all the indexes are defined on the tables however the query > seems slow, below is the plan. Can any o

Re: slow query

2021-06-08 Thread Christophe Pettus
> On Jun 8, 2021, at 09:03, Ayub Khan wrote: > I checked all the indexes are defined on the tables however the query seems > slow, below is the plan. It's currently running in slightly under six milliseconds. That seems reasonably fast given the number of operations required to fulfill it.

slow query

2021-06-08 Thread Ayub Khan
I checked that all the indexes are defined on the tables, however the query seems slow; below is the plan. Can anyone give any pointers to verify? SELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id, b.menu_item_category_desc, c.menu_item_variant_id, c.menu_item_variant_type_id, c.price,

Re: slow query with inline function on AWS RDS with RDS 24x large

2021-06-04 Thread Ayub Khan
You are right, I dropped BRIN and created btree and the performance on 0 rows matching criteria table is good, below is the plan with BTREE. I will test by inserting lot of data. Hash Join (cost=50186.91..3765911.10 rows=5397411 width=291) (actual time=1.501..1.504 rows=0 loops=1) Hash Cond: (

Re: slow query with inline function on AWS RDS with RDS 24x large

2021-06-04 Thread Pavel Stehule
Hi, On Fri, 4 Jun 2021 at 10:32, Ayub Khan wrote: > BRIN index is only on the date_time column, I even tried with btree index > with no performance gains. > -> Bitmap Heap Scan on restaurant_order ro (cost=5427.94..3353966.60 rows=19275986 width=108) (actual time=1036.793..1

Re: slow query with inline function on AWS RDS with RDS 24x large

2021-06-04 Thread Ayub Khan
BRIN index is only on the date_time column, I even tried with btree index with no performance gains. On Fri, Jun 4, 2021 at 11:23 AM Pavel Stehule wrote: > > > On Fri, 4 Jun 2021 at 10:07, Ayub Khan wrote: > >> >> below query is slow even with no data >> >> >> explain ANALYZE >> >> WITH bu

Re: slow query with inline function on AWS RDS with RDS 24x large

2021-06-04 Thread Pavel Stehule
On Fri, 4 Jun 2021 at 10:07, Ayub Khan wrote: > > below query is slow even with no data > > > explain ANALYZE > > WITH business AS( SELECT * FROM get_businessday_utc_f() start_date) > SELECT ro.order_id, > ro.date_time, > round(ro.order_amount, 2) AS order_amount, > b.branch_id

slow query with inline function on AWS RDS with RDS 24x large

2021-06-04 Thread Ayub Khan
below query is slow even with no data explain ANALYZE WITH business AS( SELECT * FROM get_businessday_utc_f() start_date) SELECT ro.order_id, ro.date_time, round(ro.order_amount, 2) AS order_amount, b.branch_id, b.branch_name, st_x(b.location) AS from_x, st_y(b.location

Re: Very slow Query compared to Oracle / SQL - Server

2021-05-10 Thread Semen Yefimenko
I've done some testing on different versions of postgres. Unfortunately after the weekend, the problem vanished. The systems are running as usual and the query finishes in 500 MS. It must have been an issue with the VMs or the DISKs. Either way, thank you for your support. Here are btw. some testi

Re: Very slow Query compared to Oracle / SQL - Server

2021-05-08 Thread Semen Yefimenko
> > Unless I'm overlooking something obvious one result has 500 000 rows the > other 7 000. You are right, it wasn't. I have 2 datasets, one containing 12 mil entries and the other 14 mil entries. I accidentally used the one with 12 mil entries in that table which actually only contains 7000~ entr

Re: Very slow Query compared to Oracle / SQL - Server

2021-05-07 Thread didier
Are you sure you're using the same data set? Unless I'm overlooking something obvious, one result has 500 000 rows, the other 7 000. >

Re: Very slow Query compared to Oracle / SQL - Server

2021-05-07 Thread Peter Geoghegan
On Fri, May 7, 2021 at 2:28 PM Peter Geoghegan wrote: > That's a very reasonable interpretation, since the bitmap index scans > themselves just aren't doing that much I/O -- we see that there is > much more I/O for the heap scan, which is likely to be what the > general picture looks like no matte
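The heap-scan I/O Peter describes can be made visible with the BUFFERS option of EXPLAIN, which reports shared-buffer hits versus reads per plan node (the table name `logtable` comes from elsewhere in the thread; the predicate is a hypothetical stand-in):

```sql
-- Separate index I/O from heap I/O in a bitmap scan: the
-- "Buffers: shared hit=... read=..." lines per node show where
-- the actual block reads happen.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM logtable WHERE entrytype = 4000;
```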

Re: Very slow Query compared to Oracle / SQL - Server

2021-05-07 Thread Peter Geoghegan
On Fri, May 7, 2021 at 9:16 AM Justin Pryzby wrote: > In pg13, indexes are de-duplicated by default. > > But I suspect the performance is better because data was reload, and the > smaller indexes are a small, additional benefit. That's a very reasonable interpretation, since the bitmap index scan

Re: Very slow Query compared to Oracle / SQL - Server

2021-05-07 Thread Tom Lane
Justin Pryzby writes: > On Fri, May 07, 2021 at 05:57:19PM +0200, Semen Yefimenko wrote: >> For testing purposes I set up a separate postgres 13.2 instance on windows. >> To my surprise, it works perfectly fine. Also indexes, have about 1/4 of >> the size they had on 12.6. > In pg13, indexes are

Re: Very slow Query compared to Oracle / SQL - Server

2021-05-07 Thread Justin Pryzby
On Fri, May 07, 2021 at 05:57:19PM +0200, Semen Yefimenko wrote: > For testing purposes I set up a separate postgres 13.2 instance on windows. > To my surprise, it works perfectly fine. Also indexes, have about 1/4 of > the size they had on 12.6. In pg13, indexes are de-duplicated by default. But
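The index-size difference between the 12.6 and 13.2 instances can be checked directly; B-tree deduplication (on by default in v13) typically shrinks indexes with many duplicate key values. A quick diagnostic, assuming the `logtable` name used in the thread:

```sql
-- Compare on-disk index sizes for the table's indexes.
SELECT indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE relname = 'logtable';
```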

Re: Very slow Query compared to Oracle / SQL - Server

2021-05-07 Thread Semen Yefimenko
~ ms. Then I've tried pg_dump into a different database on the same dev database (where the slow query still exists). The performance is just as bad on this database and indexes are also all 300 MB big (whereas on my locally running instance they're at around 80 MB) Now I'm trying to
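If the 300 MB indexes on the dev database are bloated copies of what rebuilds to ~80 MB elsewhere, rebuilding them in place is one way to confirm it. A sketch, using the `idx_entrytype` index name mentioned later in the thread:

```sql
-- Rebuild a suspected-bloated index without blocking writes
-- (CONCURRENTLY requires PostgreSQL 12 or later).
REINDEX INDEX CONCURRENTLY idx_entrytype;
```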

Re: Very slow Query compared to Oracle / SQL - Server

2021-05-07 Thread Rick Otten
On Thu, May 6, 2021 at 10:38 AM Semen Yefimenko wrote: > Hi there, > > I've recently been involved in migrating our old system to SQL Server and > then PostgreSQL. Everything has been working fine so far but now after > executing our tests on Postgres, we saw a very slow running query on a > larg

Re: Very slow Query compared to Oracle / SQL - Server

2021-05-07 Thread Vijaykumar Jain
Is this on Windows? I see a thread that mentions a performance penalty due to parallel workers. There is a mailing thread with subject line - Huge performance penalty with parallel queries in Windows x64 v. Linux x64 On Fri, 7 May 2021 at 2:33 PM Semen Yefimenko wrote: > As mentioned in th
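The Windows parallel-query penalty is easy to rule in or out for a single session: disable parallel workers and re-run the query. A minimal sketch (the sample query is a hypothetical stand-in for the slow one):

```sql
-- Force a serial plan for this session only, then compare timings
-- against the parallel plan.
SET max_parallel_workers_per_gather = 0;
EXPLAIN ANALYZE SELECT count(*) FROM logtable;
```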

Re: Very slow Query compared to Oracle / SQL - Server

2021-05-07 Thread Semen Yefimenko
As mentioned in the slack comments: SELECT pg_size_pretty(pg_relation_size('logtable')) as table_size, pg_size_pretty(pg_relation_size('idx_entrytype')) as index_size, (pgstattuple('logtable')).dead_tuple_percent; table_size | index_size | dead_tuple_percent ++--

Re: Very slow Query compared to Oracle / SQL - Server

2021-05-07 Thread Vijaykumar Jain
ok one last thing, not to be a PITA, but just in case this helps. postgres=# SELECT * FROM pg_stat_user_indexes where relname = 'logtable'; postgres=# SELECT * FROM pg_stat_user_tables where relname = 'logtable'; basically, i just want to verify that the table is not bloated. looking at *n_live_tup* v
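The live-versus-dead tuple check suggested above can be condensed into one query (a sketch over the standard `pg_stat_user_tables` view):

```sql
-- A high dead/live tuple ratio suggests the table needs VACUUM
-- (or VACUUM FULL / pg_repack to actually reclaim the space).
SELECT relname, n_live_tup, n_dead_tup,
       round(100.0 * n_dead_tup / greatest(n_live_tup, 1), 1) AS dead_pct
FROM pg_stat_user_tables
WHERE relname = 'logtable';
```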

Re: Very slow Query compared to Oracle / SQL - Server

2021-05-07 Thread Semen Yefimenko
Sorry if I'm cumulatively answering everyone in one E-Mail, I'm not sure how I'm supposed to do it. (single E-Mails vs many) > Can you try tuning by increasing the shared_buffers slowly in steps of > 500MB, and running explain analyze against the query. -- 2500 MB shared buffers - random_page_c
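The stepwise shared_buffers tuning described above would look roughly like this (2500 MB matches the value tried in the thread; note the setting only takes effect after a server restart):

```sql
-- Persist the new setting, then restart the server and verify.
ALTER SYSTEM SET shared_buffers = '2500MB';
-- after the restart:
SHOW shared_buffers;
```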
