Re: slow query to improve performance

2022-02-25 Thread Justin Pryzby
Please provide some more information, like your postgres version and settings.

Some of the relevant things to include are described here:

https://wiki.postgresql.org/wiki/Slow_Query_Questions
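
For example, the version and any settings changed from their defaults can be
collected with something along these lines (a sketch of the kind of queries
that page asks for):

  SELECT version();

  SELECT name, current_setting(name), source
    FROM pg_settings
   WHERE source NOT IN ('default', 'override');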

-- 
Justin




slow query to improve performance

2022-02-25 Thread Ayub Khan
Hi,

Could someone review the attached query plan and suggest some steps to improve
its performance?  This query is created as a view, which is used to get the
aggregates of orders based on their current status.

Thanks
HashAggregate  (cost=334063.59..334064.17 rows=58 width=213) (actual time=1180.016..1180.053 rows=49 loops=1)
  Group Key: c.city_id, '2022-02-25 23:09:26.587835'::timestamp without time zone, (count(*))
  ->  Nested Loop Left Join  (cost=154967.02..312398.93 rows=133321 width=92) (actual time=450.918..961.314 rows=156105 loops=1)
        Join Filter: (c_1.city_id = c.city_id)
        Rows Removed by Join Filter: 103951
        ->  Hash Join  (cost=2343.27..154695.90 rows=133321 width=76) (actual time=12.345..456.445 rows=156105 loops=1)
              Hash Cond: (b.city_id = c.city_id)
              ->  Hash Join  (cost=2324.74..154301.83 rows=133321 width=55) (actual time=12.266..420.340 rows=156105 loops=1)
                    Hash Cond: (o.branch_id = b.branch_id)
                    ->  Append  (cost=0.57..151627.65 rows=133321 width=55) (actual time=0.022..347.441 rows=156105 loops=1)
                          Subplans Removed: 23
                          ->  Index Scan using restaurant_order_p_202202_202203_date_time_idx on restaurant_order_p_202202_202203 o  (cost=0.56..150904.37 rows=133298 width=55) (actual time=0.022..333.768 rows=156105 loops=1)
                                Index Cond: ((date_time >= '2022-02-25 05:00:00'::timestamp without time zone) AND (date_time <= '2022-02-25 23:09:26.587835'::timestamp without time zone))
                    ->  Hash  (cost=1942.41..1942.41 rows=30541 width=16) (actual time=12.172..12.172 rows=29242 loops=1)
                          Buckets: 32768  Batches: 1  Memory Usage: 1627kB
                          ->  Seq Scan on branch b  (cost=0.00..1942.41 rows=30541 width=16) (actual time=0.005..7.358 rows=29242 loops=1)
              ->  Hash  (cost=17.80..17.80 rows=58 width=29) (actual time=0.072..0.072 rows=58 loops=1)
                    Buckets: 1024  Batches: 1  Memory Usage: 12kB
                    ->  Index Only Scan using city_idx$$_274b0022 on city c  (cost=0.27..17.80 rows=58 width=29) (actual time=0.011..0.056 rows=58 loops=1)
                          Index Cond: (city_id IS NOT NULL)
                          Heap Fetches: 40
        ->  Materialize  (cost=152623.75..155703.21 rows=1 width=16) (actual time=0.003..0.003 rows=1 loops=156105)
              ->  GroupAggregate  (cost=152623.75..155703.19 rows=1 width=16) (actual time=438.567..438.611 rows=2 loops=1)
                    Group Key: c_1.city_id
                    ->  Nested Loop Left Join  (cost=152623.75..155703.18 rows=1 width=8) (actual time=434.578..438.605 rows=6 loops=1)
                          ->  Nested Loop  (cost=152623.32..155701.62 rows=1 width=16) (actual time=434.569..438.581 rows=6 loops=1)
                                Join Filter: (b_1.city_id = c_1.city_id)
                                Rows Removed by Join Filter: 342
                                ->  Index Only Scan using city_pk on city c_1  (cost=0.27..13.27 rows=58 width=8) (actual time=0.015..0.062 rows=58 loops=1)
                                      Heap Fetches: 40
                                ->  Materialize  (cost=152623.06..155687.49 rows=1 width=16) (actual time=7.492..7.560 rows=6 loops=58)
                                      ->  Nested Loop  (cost=152623.06..155687.48 rows=1 width=16) (actual time=434.549..438.450 rows=6 loops=1)
                                            ->  Nested Loop  (cost=152622.77..155687.17 rows=1 width=32) (actual time=434.539..438.425 rows=6 loops=1)
                                                  Join Filter: (r_1.restaurant_id = b_1.restaurant_id)
                                                  ->  Nested Loop  (cost=152622.48..155685.14 rows=1 width=24) (actual time=434.529..438.396 rows=6 loops=1)
                                                        ->  GroupAggregate  (cost=152621.92..152628.01 rows=79 width=55) (actual time=434.465..438.214 rows=3 loops=1)
                                                              Group Key: (round(ro_1.total_amount, 2)), ro_1.phone, r_1.restaurant_id
                                                              Filter: (count(1) > 1)
                                                              Rows Removed by Filter: 9365
                                                              ->  Sort  (cost=152621.92..152622.51 rows=236 width=55) (actual time=434.273..435.044 rows=9371 loops=1)
                                                                    Sort Key: (round(ro_1.total_amount, 2)), ro_1.phone, r_1.restaurant_id
                                                                    Sort Method: quicksort  Memory: 1117kB
                                                                    ->  Nested Loop Left Join  (cost=1.83..152612.62
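
As a first sanity check, the row estimate of the partition scan above can be
reproduced in isolation; a minimal sketch, using only the partition and column
names visible in the plan (the view definition itself is not shown here):

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT count(*)
    FROM restaurant_order_p_202202_202203 o
   WHERE o.date_time >= '2022-02-25 05:00:00'
     AND o.date_time <= '2022-02-25 23:09:26.587835';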

RE: Slow Running Queries in Azure PostgreSQL

2022-02-25 Thread Kumar, Mukesh
Hi Justin,

Thanks for your help. After changing one parameter, the whole query executed
in less than 1 minute.



Thanks and Regards, 
Mukesh Kumar

-Original Message-
From: Justin Pryzby  
Sent: Wednesday, February 23, 2022 2:57 AM
To: Kumar, Mukesh 
Cc: pgsql-performa...@postgresql.org
Subject: Re: Slow Running Queries in Azure PostgreSQL

On Tue, Feb 22, 2022 at 02:11:58PM +, Kumar, Mukesh wrote:

>  ->  Hash Join  (cost=6484.69..43117.63 rows=1 width=198) (actual time=155.508..820.705 rows=52841 loops=1)"
>    Hash Cond: (((lms_doc_property_rights_assoc.doc_sid_c)::text = (lms_doc_propright_status_assoc.doc_sid_c)::text) AND ((lms_property_rights_base.property_sid_k)::text = (lms_doc_propright_status_assoc.property_sid_c)::text))"

Your problem seems to start here.  The planner thinks it'll get one row but
actually gets 53k.  You can join those two tables on their own to understand
the problem better.  Is each half of the AND estimated well on its own?

If both halves are individually estimated well, but estimated poorly together 
with AND, then you have correlation.

Is either of those conditions redundant with the other?  Half of the AND
might be unnecessary and could be removed.
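
For example, the two halves could be checked in isolation; a rough sketch,
using the table and column names from the plan fragment above (the real
queries may need the same extra joins and filters as the original):

  EXPLAIN ANALYZE
  SELECT count(*)
    FROM lms_doc_property_rights_assoc a
    JOIN lms_doc_propright_status_assoc s
      ON a.doc_sid_c = s.doc_sid_c;            -- first half of the AND

  EXPLAIN ANALYZE
  SELECT count(*)
    FROM lms_property_rights_base p
    JOIN lms_doc_propright_status_assoc s
      ON p.property_sid_k = s.property_sid_c;  -- second half of the AND

If each join alone is estimated well but the combined AND is not, that points
at the correlation case above.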

--
Justin




Re: An I/O error occurred while sending to the backend (PG 13.4)

2022-02-25 Thread Justin Pryzby
On Wed, Feb 23, 2022 at 07:04:15PM -0600, Justin Pryzby wrote:
> And the aforementioned network trace.  You could set a capture filter on TCP
> SYN|RST so it's not absurdly large.  From my notes, it might look like this:
> (tcp[tcpflags]&(tcp-rst|tcp-syn|tcp-fin)!=0)

I'd also add '|| icmp'.  My hunch is that you'll see some ICMP (not "ping")
being sent by an intermediate gateway, resulting in the connection being reset.
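
A capture invocation along those lines might look like this (the interface
name and output file are just placeholders):

  tcpdump -i eth0 -s 0 -w backend-resets.pcap \
    '(tcp[tcpflags] & (tcp-rst|tcp-syn|tcp-fin) != 0) || icmp'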

-- 
Justin