Re: Query performance issue

2020-09-04 Thread Michael Lewis
"Subquery Scan on rec (cost=1628601.89..1676580.92 rows=7381 width=41) (actual time=22171.986..23549.079 rows=1236042 loops=1)" " Filter: (rec.mpos = 1)" " Rows Removed by Filter: 228737" " Buffers: shared hit=45 read=1166951" " I/O Timings: read=29.530" " -> WindowAgg (cost=1628601.89..1658127.45

Re: Query performance issue

2020-09-04 Thread Nagaraj Raj
Sorry, I attached the wrong query plan; that one was executed in a lower environment with fewer resources. Updated one: eVFiF | explain.depesz.com. Thanks, Rj. On Friday, September 4, 2020, 02:39:57 PM PDT, Michael Lewis wrote:

Re: Query performance issue

2020-09-04 Thread Michael Lewis
Note: you may need to vacuum* the table to get the full benefit of an index-only scan, since vacuum updates the visibility map. I think the index-only scan is skipped in favor of plain heap visibility checks when the visibility map is stale. *NOT a VACUUM FULL
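
As a concrete illustration of that advice, a minimal sketch, assuming the table named later in this thread and default autovacuum settings:

```
-- Plain VACUUM (not VACUUM FULL) updates the visibility map;
-- index-only scans consult it to skip per-row heap visibility checks.
VACUUM ANALYZE receiving_item_delivered_received;
```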

Re: Query performance issue

2020-09-04 Thread Michael Lewis
CREATE INDEX receiving_item_delivered_received ON receiving_item_delivered_received USING btree ( eventtype, replenishmenttype, serial_no, eventtime DESC ); > More work_mem as Tomas suggests, but also, the above index should find the candidate rows by the first two keys, and then be able to skip
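
A sketch of the query shape this index serves, reconstructed from the plan fragments earlier in the thread (the `mpos = 1` filter appears there); the two filter values are hypothetical:

```
SELECT serial_no, eventtime
FROM (
    SELECT serial_no, eventtime,
           row_number() OVER (PARTITION BY serial_no
                              ORDER BY eventtime DESC) AS mpos
    FROM receiving_item_delivered_received
    WHERE eventtype = 'delivered'          -- hypothetical value
      AND replenishmenttype = 'standard'   -- hypothetical value
) rec
WHERE rec.mpos = 1;  -- keep only the latest event per serial_no
```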

Re: Query performance issue

2020-09-04 Thread Tomas Vondra
On Fri, Sep 04, 2020 at 09:18:41PM +0000, Nagaraj Raj wrote: I have a query that runs frequently on the DB and is very slow; it is doing a 'seqscan'. I was trying to optimize it by adding indexes in different ways but nothing helps. Any suggestions? 1) It's rather difficult to read the query

Re: Query performance issue

2020-09-04 Thread Thomas Kellerer
Nagaraj Raj wrote on 2020-09-04 at 23:18: I have a query that runs frequently on the DB and is very slow; it is doing a 'seqscan'. I was trying to optimize it by adding indexes in different ways but nothing helps. EXPLAIN ANALYZE select serial_no,receivingplant,sku,r3_eventtime from (select

Re: Query performance issue

2020-09-04 Thread Nagaraj Raj
Query plan: SPJe | explain.depesz.com. On Friday, September 4, 2020, 02:19:06 PM PDT, Nagaraj Raj wrote: I have a query that runs frequently on the DB and is very slow; it is doing a 'seqscan'. I was trying to optimize it by

Query performance issue

2020-09-04 Thread Nagaraj Raj
I have a query that runs frequently on the DB and is very slow; it is doing a 'seqscan'. I was trying to optimize it by adding indexes in different ways but nothing helps. Any suggestions? Query: EXPLAIN ANALYZE select serial_no,receivingplant,sku,r3_eventtime from (select

Re: Sudden insert performance degradation

2020-07-15 Thread Justin Pryzby
On Wed, Jul 15, 2020 at 02:49:16PM -0400, Henrique Montenegro wrote: > Any idea why the actual time is in the 3ms range? If I query that partition > directly, like this: > > As you can see, the `actual_time` when querying the partition table > directly goes to 0.002, which is almost 2000x faster.

Re: Sudden insert performance degradation

2020-07-15 Thread Henrique Montenegro
Hi Sebastian, That is a good idea about the pa

Re: Sudden insert performance degradation

2020-07-15 Thread Henrique Montenegro
On Wed, Jul 15, 2020 at 4:03 AM Sebastian Dressler wrote: > Hi Henrique, > > On 15. Jul 2020, at 03:13, Henrique Montenegro wrote: > [...] > > ``` > ssl = off > shared_buffers = 160GB # min 128kB > work_mem = 96GB # min 64kB > maintenance_work_mem = 12GB

Re: Sudden insert performance degradation

2020-07-15 Thread Sebastian Dressler
Hi Henrique, On 15. Jul 2020, at 03:13, Henrique Montenegro wrote: [...] ``` ssl = off shared_buffers = 160GB # min 128kB work_mem = 96GB # min 64kB maintenance_work_mem = 12GB # min 1MB max_stack_depth = 4MB

Re: Sudden insert performance degradation

2020-07-14 Thread Henrique Montenegro
and ran a `reindex` on all the partitions now to see if it would improve the performance, and it seems like that did it! I used the following script to reindex all of the partitions (the names of my partitions all start with ubp_): ``` DO $$DECLARE r record; BEGIN FOR r IN select indexname from
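
The preview cuts the script off; a minimal reconstruction of such a loop, assuming the partition indexes can be found via pg_indexes, might look like:

```
DO $$
DECLARE
    r record;
BEGIN
    -- all partition names start with ubp_
    FOR r IN SELECT indexname FROM pg_indexes
             WHERE tablename LIKE 'ubp_%'
    LOOP
        EXECUTE format('REINDEX INDEX %I', r.indexname);
    END LOOP;
END
$$;
```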

Re: Sudden insert performance degradation

2020-07-13 Thread Henrique Montenegro
our RAM, kernel version, settings > for dirty_background_ratio, dirty_expire_centisecs, and probably other > things, and is not easy to predict) the performance falls off a cliff when > inserting values in a random order. Every insert dirties a random index > leaf page, which quickly gets evicted

Re: Sudden insert performance degradation

2020-07-13 Thread Jeff Janes
s > ) on conflict do nothing > Once the size of the only index exceeds shared_buffers by a bit (the amount of "a bit" depends on your RAM, kernel version, settings for dirty_background_ratio, dirty_expire_centisecs, and probably other things, and is not easy to predict) the

Re: Sudden insert performance degradation

2020-07-13 Thread Henrique Montenegro
On Mon, Jul 13, 2020 at 12:50 PM Sebastian Dressler wrote: > Hi Henrique, > > On 13. Jul 2020, at 18:42, Henrique Montenegro wrote: > > On Mon, Jul 13, 2020 at 11:20 AM Sebastian Dressler > wrote: > > >> Running the above loop worked fine for about 12 hours. Each file was >> taking >> about 30

Re: Sudden insert performance degradation

2020-07-13 Thread Michael Lewis
Is this an insert-only table and perhaps not being picked up by autovacuum? If so, try a manual "vacuum analyze" before/after each batch run perhaps. You don't mention updates, but you have also been adjusting fillfactor, so I am not sure.

Re: Sudden insert performance degradation

2020-07-13 Thread Sebastian Dressler
Hi Henrique, On 13. Jul 2020, at 18:42, Henrique Montenegro wrote: On Mon, Jul 13, 2020 at 11:20 AM Sebastian Dressler wrote: Running the above loop worked fine for about 12 hours. Each file was taking about 30 seconds to be processed.

Re: Sudden insert performance degradation

2020-07-13 Thread Henrique Montenegro
On Mon, Jul 13, 2020 at 11:20 AM Sebastian Dressler wrote: > Hi Henrique, > > On 13. Jul 2020, at 16:23, Henrique Montenegro wrote: > > [...] > > * Insert the data from the `users` table into the `users_no_dups` table > > ``` > insert into users_no_dups ( > created_ts, > user_id, >

Re: Sudden insert performance degradation

2020-07-13 Thread Henrique Montenegro
Messing with the `fillfactor` to see if that would give me any improvements. It seems to me it did, since the first time I created the table I didn't change the fillfactor and stumbled upon the performance issue after 12 hours; I then recreated the table with a fillfactor of 30 and was good again for a

Re: Sudden insert performance degradation

2020-07-13 Thread Sebastian Dressler
Hi Henrique, On 13. Jul 2020, at 16:23, Henrique Montenegro wrote: [...] * Insert the data from the `users` table into the `users_no_dups` table ``` insert into users_no_dups ( created_ts, user_id, name, url ) ( select created_ts,

Sudden insert performance degradation

2020-07-13 Thread Henrique Montenegro
Hello list, I am having performance issues inserting data into Postgres and would like to ask for help figuring out the problem, as I have run out of ideas. I have a process that generates a CSV file with 1 million records in it every 5 minutes, and each file is about 240MB. I need this data

Re: Performance issue

2020-06-14 Thread Justin Pryzby
please help me to tune this query and any suggestions to improve > system performance? > CREATE TABLE test1 > ( > individual_entity_proxy_id bigint NOT NULL, ... > CONSTRAINT pk_i_entity_proxy_id PRIMARY KEY (individual_entity_proxy_id) > > ); > CREATE INDEX indx_pros

Re: Performance issue

2020-06-14 Thread David Rowley
On Mon, 15 Jun 2020 at 10:46, Nagaraj Raj wrote: > CREATE TABLE test1 > ( ... > CONSTRAINT pk_i_entity_proxy_id PRIMARY KEY (individual_entity_proxy_id) > > ); > CREATE TABLE test2 > ( ... > CONSTRAINT pk_entity_proxy_id PRIMARY KEY (individual_entity_proxy_id) > ); > > > User query: >

Performance issue

2020-06-14 Thread Nagaraj Raj
performance? Table structures: Table1: -- Records 213621151 CREATE TABLE test1 ( individual_entity_proxy_id bigint NOT NULL, household_entity_proxy_id bigint, individual_personal_link_sid bigint NOT NULL, city_name character varying(100) COLLATE pg_catalog."default", sta

Re: Performance tunning

2020-05-30 Thread Jeff Janes
On Sat, May 30, 2020 at 3:37 AM sugnathi hai wrote: > Hi, > > Can you help to tune the below plan? > It looks like your query (which you should show us) has something like ORDER BY modifieddate LIMIT 100. It thinks it can walk the index in order, then stop once it collects 100 qualifying
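
A minimal sketch of the plan shape Jeff describes; the table name and filter are hypothetical:

```
-- With this index, the planner can walk modifieddate in order and stop
-- after 100 qualifying rows. That is fast only when qualifying rows are
-- common; if they are rare, the walk visits most of the index.
CREATE INDEX ON mytable (modifieddate);

SELECT *
FROM mytable
WHERE status = 'active'   -- hypothetical filter
ORDER BY modifieddate
LIMIT 100;
```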

Re: Performance tunning

2020-05-30 Thread Justin Pryzby
On Sat, May 30, 2020 at 09:43:43AM +0200, Pavel Stehule wrote: > On Sat, May 30, 2020 at 9:37 sugnathi hai wrote: > > Can you help to tune the below plan? Could you also send it so line breaks aren't lost, as seen here:

Re: Performance tunning

2020-05-30 Thread Pavel Stehule
Hi. On Sat, May 30, 2020 at 9:37 sugnathi hai wrote: > Hi, > > Can you help to tune the below plan? > > Limit (cost=0.87..336777.92 rows=100 width=57) (actual > time=599302.173..599481.552 rows=100 loops=1) Buffers: shared hit=78496066 > -> Nested Loop (cost=0.87..11005874.67 rows=3268

Performance tunning

2020-05-30 Thread sugnathi hai
Hi, can you help to tune the below plan? Limit (cost=0.87..336777.92 rows=100 width=57) (actual time=599302.173..599481.552 rows=100 loops=1) Buffers: shared hit=78496066 -> Nested Loop (cost=0.87..11005874.67 rows=3268 width=57) (actual time=599302.170..599481.506 rows=100 loops=1) Buffers:

Re: PostgreSQL performance problem moving from 9.6.17 to 12.3

2020-05-28 Thread Tom Lane
Kenneth Marshall writes: > I have a system that was running version 9.6.17 running on a system with > 48gb of memory and spinning disks front-ed by a HW RAID controller with > NVRAM cache. We moved to a new box running version 12.3 on a system with > 64gb of memory and NVME SSD drives. Here are

PostgreSQL performance problem moving from 9.6.17 to 12.3

2020-05-28 Thread Kenneth Marshall
Hi PostgreSQL community, I have a system that was running version 9.6.17 running on a system with 48gb of memory and spinning disks front-ed by a HW RAID controller with NVRAM cache. We moved to a new box running version 12.3 on a system with 64gb of memory and NVME SSD drives. Here are the

Re: Suggestion to improve query performance for GIS query.

2020-05-22 Thread postgann2020 s
Thanks for your support, David and Afsar. Hi David, could you please suggest a resource link for "Add a trigger to the table to normalize the contents of column1 upon insert and then rewrite your query to reference the newly created normalized fields," if anything is available. So that it will
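
There is no single canonical resource for this; a minimal sketch of David's idea, with the side table and all names hypothetical, could look like:

```
-- Split the comma-separated column1 into one row per token so that
-- lookups become equality matches instead of LIKE '%...%' scans.
CREATE OR REPLACE FUNCTION normalize_column1() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    DELETE FROM table_tokens WHERE row_id = NEW.id;
    INSERT INTO table_tokens (row_id, token)
    SELECT NEW.id, trim(t)
    FROM unnest(string_to_array(NEW.column1, ',')) AS t;
    RETURN NEW;
END;
$$;

CREATE TRIGGER trg_normalize_column1
AFTER INSERT OR UPDATE ON mytable
FOR EACH ROW EXECUTE PROCEDURE normalize_column1();
```

With an index on table_tokens(token), the lookup becomes `WHERE token = 'sheath--' || cable_seq_id` instead of four LIKE patterns.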

Re: Suggestion to improve query performance for GIS query.

2020-05-22 Thread Mohammed Afsar
Dear team, kindly run VACUUM ANALYZE on that particular table, then refresh the session and execute the query. VACUUM (VERBOSE, ANALYZE) tablename; Regards, Mohammed Afsar, Database engineer. On Fri, May 22, 2020, 12:30 PM postgann2020 s wrote: > Hi Team, > > Thanks for your

Re: Suggestion to improve query performance for GIS query.

2020-05-22 Thread David G. Johnston
On Thursday, May 21, 2020, postgann2020 s wrote: > > SELECT seq_no+1 INTO pair_seq_no FROM SCHEMA.TABLE WHERE (Column1 like > '%,sheath--'||cable_seq_id ||',%' or Column1 like 'sheath--'||cable_seq_id > ||',%' or Column1 like '%,sheath--'||cable_seq_id or > Column1='sheath--'||cable_seq_id)

Suggestion to improve query performance for GIS query.

2020-05-22 Thread postgann2020 s
Hi Team, thanks for your support. Could you please advise on the below query. Environment: PostgreSQL 9.5.15, PostGIS 2.2.7. The table contains GIS data, which is fiber data (underground routes). We are using the below query inside the proc, and it is taking a long time to complete.

Suggestion to improve query performance of data validation in proc.

2020-05-22 Thread postgann2020 s
Are there any other ways to validate the data which would help us improve the performance of the query? Thanks for your support. Regards, PostgAnn.

Suggestion to improve query performance.

2020-05-20 Thread postgann2020 s
Hi Team, thanks for your support. Our environment: application programming language: Java; GeoServer; database stack: PostgreSQL 9.5.15, PostGIS. We have 3 GeoServer queries and are seeing some performance issues after changing the GeoServer queries. I have posted the queries

Re: Duplicate WHERE condition changes performance and plan

2020-05-04 Thread Michael Lewis
Why not vacuum analyze both tables to ensure stats are up to date? Have you customized default_statistics_target from 100? It may be that 250 would give you a more complete sample of the table without increasing the size of the stats tables too much such that planning time increases hugely. Do
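
A hedged sketch of how that experiment might be run, using the table names from this thread and the 250 figure from Michael's suggestion:

```
-- Session-local override; 100 is the default sample target.
-- Subsequent ANALYZE runs in this session use the larger sample.
SET default_statistics_target = 250;
VACUUM ANALYZE problem;
VACUUM ANALYZE problem_instance;
```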

Re: Duplicate WHERE condition changes performance and plan

2020-05-03 Thread singh...@gmail.com
> I don't *think* we are using SSDs but I'll need to confirm that though. Confirmed we are not using SSDs but '10K RPM SAS in RAID-10.' I've also been hunting for other queries that show this behaviour, and I've found one. The PG settings/versions will be different in this example due to the

Re: Duplicate WHERE condition changes performance and plan

2020-04-24 Thread singh...@gmail.com
> If you're using SSD storage, or if the DB is small compared with > shared_buffers or RAM, then random_page_cost should be closer to > seq_page_cost. I don't *think* we are using SSDs but I'll need to confirm that though. > How large are the indexes? problem_id_idx1 ? Using the query from
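
The size query itself is cut off in the preview; one standard way to check, assuming the index name quoted above:

```
SELECT pg_size_pretty(pg_relation_size('problem_id_idx1')) AS index_size;
```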

Re: Duplicate WHERE condition changes performance and plan

2020-04-19 Thread Justin Pryzby
On Wed, Apr 15, 2020 at 08:55:53PM +0100, singh...@gmail.com wrote: > We have an odd issue where specifying the same where clause twice causes PG > to pick a much more efficient plan. We would like to know why. > Query B (this is the 'fast' query): > UPDATE problem_instance SET processed = false >

Re: Duplicate WHERE condition changes performance and plan

2020-04-19 Thread singh...@gmail.com
> Starting with PostgreSQL 9.6, foreign keys are also used to help with > join selectivity estimations. I see you have a suitable foreign key > from the schema you posted. You might want to add that to the list of > reasons to upgrade. Apologies for the delay in response. I've had "PostgreSQL

Re: Duplicate WHERE condition changes performance and plan

2020-04-15 Thread David Rowley
On Thu, 16 Apr 2020 at 07:56, singh...@gmail.com wrote: > We have an odd issue where specifying the same where clause twice causes PG > to pick a much more efficient plan. We would like to know why. > The EXPLAIN ANALYZE for both queries can be found here:- > Query A:

Duplicate WHERE condition changes performance and plan

2020-04-15 Thread singh...@gmail.com
Hi, we have an odd issue where specifying the same where clause twice causes PG to pick a much more efficient plan. We would like to know why. Query A (this is the 'slow' query): UPDATE problem_instance SET processed = false FROM problem WHERE problem.id = problem_instance.problem_id AND
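
A hypothetical reconstruction of the shape (the real WHERE clause is truncated in this preview): repeating a predicate makes the planner apply its selectivity estimate twice, which can lower the row estimate enough to flip the plan.

```
-- 'status' is a hypothetical column standing in for the truncated
-- condition; the last predicate is duplicated on purpose.
UPDATE problem_instance SET processed = false
FROM problem
WHERE problem.id = problem_instance.problem_id
  AND problem.status = 'open'
  AND problem.status = 'open';
```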

Re: BUG #16334: We recently upgraded PG version from 9.5 to 10.10 and system performance is not so good

2020-04-04 Thread Tejaswini GC
Hi, On Thu, Apr 02, 2020 at 03:52:25AM +0000, PG Bug reporting form wrote: > The following bug has been logged on the website: > Bug reference: 16334

Re: BUG #16334: We recently upgraded PG version from 9.5 to 10.10 and system performance is not so good

2020-04-04 Thread Julien Rouhaud
Once again, please don't top post. On Sat, Apr 04, 2020 at 11:57:02AM +0530, Tejaswini GC wrote: > Hello Julien, > > The procedure for doing the upgrade is different for AWS. > And is it possible to know what the procedure was? > > After the PG upgrade we can see many locks in our system

Re: BUG #16334: We recently upgraded PG version from 9.5 to 10.10 and system performance is not so good

2020-04-02 Thread Tejaswini GC
> The following bug has been logged on the website: > Bug reference: 16334 > Logged by: Tejaswini GC > Email address: tejaswini...@decathlon.com > PostgreSQL version: 10.10

Re: BUG #16334: We recently upgraded PG version from 9.5 to 10.10 and system performance is not so good

2020-04-02 Thread Julien Rouhaud
on the website: > Bug reference: 16334 > Logged by: Tejaswini GC > Email address: tejaswini...@decathlon.com > PostgreSQL version: 10.10 > Operating system: Centos 7 > Description:

Re: BUG #16334: We recently upgraded PG version from 9.5 to 10.10 and system performance is not so good

2020-04-02 Thread Tejaswini GC
Operating system: Centos 7 > Description: First of all, this is not a bug. You should have instead started a discussion on pgsql-general or pgsql-performance. I'm redirecting the discussion to -performance. > We have upgraded our database to version 10.10.

Re: BUG #16334: We recently upgraded PG version from 9.5 to 10.10 and system performance is not so good

2020-04-02 Thread Julien Rouhaud
> Operating system: Centos 7 > Description: First of all, this is not a bug. You should have instead started a discussion on pgsql-general or pgsql-performance. I'm redirecting the discussion to -performance. > We have upgraded our database to version 10.10. How did you upgr

Re: pg12 partitions show bad performance vs pg96

2020-03-09 Thread David Rowley
On Tue, 10 Mar 2020 at 02:08, Mariel Cherkassky wrote: > PG12 - 3 PARTITIONS > QUERY PLAN

Re: pg12 partitions show bad performance vs pg96

2020-03-09 Thread Mariel Cherkassky
> Also, it's not required, but I think a typical partitioning schema would > have > an index on the column being partitioned. I see you have an index on > iot_data(metadata,lower(data)), so I still wonder whether you'd have better > results partitioned on metadata, or otherwise maybe adding an

Re: pg12 partitions show bad performance vs pg96

2020-03-09 Thread Mariel Cherkassky
4 ms (20 rows) I tried disabling max_parallel_workers_per_gather but it just decreased the db's performance. Now regarding the main issue here: as you can see, when I used the second query I didn't mention the partition column explicitly but joined another table on it (where de.name in ('50a') and

Re: pg12 partitions show bad performance vs pg96

2020-03-09 Thread Justin Pryzby
On Mon, Mar 09, 2020 at 12:31:15PM +0200, Mariel Cherkassky wrote: > > I tried to do something even simpler: run the query with only the > > partition column in the where clause, and the results weren't good for pg12: > PG12: > postgres=# explain analyze select * from iot_data where device=51; >

Re: pg12 partitions show bad performance vs pg96

2020-03-09 Thread Mariel Cherkassky
> > I tried to do something even simpler: run the query with only the > partition column in the where clause, and the results weren't good for pg12: > PG12: postgres=# explain analyze select * from iot_data where device=51; QUERY PLAN

Re: pg12 partitions show bad performance vs pg96

2020-03-09 Thread Mariel Cherkassky
Instead of running on a table with 10m records, I can run over a partition with 3m records. Isn't less data supposed to mean better performance for simple queries like the one I used? I read the best practices in the docs, and I think that I met most of them - I chose the right partition key (in this cas

Re: pg12 partitions show bad performance vs pg96

2020-03-08 Thread Justin Pryzby
> and now for the performance: > query: explain analyze select * from iot_data da, iot_device de where > de.name in ('50a') and de.id = da.device and da.metadata=50 and > lower(da.data) like '50'; > I don't understand why in pg12 it scans all the partitions instead of the > relevant one...
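
A hedged illustration of the difference, assuming device is the partition key (the constant 51 comes from the simpler test elsewhere in the thread):

```
-- Plan-time pruning: the partition key value is a constant in the query.
EXPLAIN ANALYZE SELECT * FROM iot_data WHERE device = 51;

-- Run-time pruning only: the key value comes from the join, so the
-- pruned partitions still appear in the plan, marked "(never executed)".
EXPLAIN ANALYZE
SELECT * FROM iot_data da JOIN iot_device de ON de.id = da.device
WHERE de.name IN ('50a');
```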

Re: pg12 partitions show bad performance vs pg96

2020-03-08 Thread David Rowley
It's only possible that they'll improve the performance during query execution. For such small, fast queries as the ones you've shown, it's important to remember that more complex query plans (ones with more nodes) do lead to longer times for executor startup and shutdown. EXPLAIN (without ANALYZE

Re: pg12 partitions show bad performance vs pg96

2020-03-08 Thread Mariel Cherkassky
I realized that the planner goes to the right partition, because "(never executed)" is mentioned near the scans of the other partitions. However, I'm still not sure why performance is better in pg96. On Sun, Mar 8, 2020 at 18:05 Mariel Cherkassky < mariel.cherkas..

pg12 partitions show bad performance vs pg96

2020-03-08 Thread Mariel Cherkassky
Hey, I upgraded from 9.6 to 12 in our test env, and I'm seeing that for queries that involve a join between a partitioned table and other tables there is a degradation in performance compared to pg96. My machine: 8 CPU, 16 GB, regular HD, Linux RedHat 6. pg settings: max_wal_size = 2GB

Re: SubtransControlLock and performance problems

2020-02-20 Thread Lars Aksel Opsahl
Hi. From: Alvaro Herrera; Sent: Wednesday, February 19, 2020 4:23 PM; To: Lars Aksel Opsahl; Cc: Laurenz Albe, Pavel Stehule, Tom Lane, pgsql-performance@lists.postgresql.org; Subject: Re: SubtransControlLock and performance

Re: How to avoid UPDATE performance degradation in a transaction

2020-02-19 Thread Andres Freund
Hi, On 2020-02-13 16:16:14 -0500, Tom Lane wrote: > In principle perhaps we could improve the granularity of dead-row > detection, so that if a row version is both created and deleted by > the current transaction, and we have no live snapshots that could > see it, we could go ahead and mark the

Re: SubtransControlLock and performance problems

2020-02-19 Thread Lars Aksel Opsahl
Hi. From: Laurenz Albe; Sent: Tuesday, February 18, 2020 6:27 PM; To: Pavel Stehule, Tom Lane; Cc: Lars Aksel Opsahl, pgsql-performance@lists.postgresql.org; Subject: Re: SubtransControlLock and performance problems

Re: SubtransControlLock and performance problems

2020-02-18 Thread Pavel Stehule
On Tue, Feb 18, 2020 at 18:27 Laurenz Albe wrote: > On Mon, 2020-02-17 at 19:41 +0100, Pavel Stehule wrote: > > I tested > > > > CREATE OR REPLACE FUNCTION public.fx(integer) > > RETURNS void > > LANGUAGE plpgsql > > AS $function$ > > begin > > for i in 1..$1 loop > > begin > >
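
The preview cuts the function off; a reconstruction of its shape (the loop body itself is not preserved in the archive, so the statement inside is a placeholder):

```
CREATE OR REPLACE FUNCTION public.fx(integer)
RETURNS void
LANGUAGE plpgsql
AS $function$
begin
  for i in 1..$1 loop
    begin
      perform 1;  -- placeholder; the original body is truncated
    exception when others then
      null;
    end;  -- each entry into this EXCEPTION block starts a subtransaction
  end loop;
end;
$function$;
```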

Re: SubtransControlLock and performance problems

2020-02-17 Thread Alvaro Herrera
On 2020-Feb-16, Lars Aksel Opsahl wrote: > On a server with 32 cores and 250 GB memory, with CentOS 7 and kernel > 4.4.214-1.el7.elrepo.x86_64, I try to run 30 parallel threads using > dblink. (https://github.com/larsop/postgres_execute_parallel) . I have > tried to disconnect and reconnect in

Re: SubtransControlLock and performance problems

2020-02-17 Thread Pavel Stehule
On Mon, Feb 17, 2020 at 19:23 Tom Lane wrote: > Pavel Stehule writes: > > On Mon, Feb 17, 2020 at 17:36 Laurenz Albe > > > wrote: >> Either use shorter transactions, or start fewer subtransactions. > > it is an interesting topic, but I don't see it in my example > > CREATE OR

Re: SubtransControlLock and performance problems

2020-02-17 Thread Tom Lane
Pavel Stehule writes: > On Mon, Feb 17, 2020 at 17:36 Laurenz Albe > wrote: >> Either use shorter transactions, or start fewer subtransactions. > it is an interesting topic, but I don't see it in my example > CREATE OR REPLACE FUNCTION public.fx(integer) > RETURNS void > LANGUAGE plpgsql >

Re: SubtransControlLock and performance problems

2020-02-17 Thread Pavel Stehule
you enter a block with an EXCEPTION clause more > than 64 times in a single transaction, performance will drop. > > > Is there any way to increase that from 64 to a much higher level when compiling > the code? > > Yes, you can increase PGPROC_MAX_CACHED_SUBXIDS in > src/include/storage/

Re: SubtransControlLock and performance problems

2020-02-17 Thread Laurenz Albe
that if I have 32 threads running in parallel and I get 2 > exceptions in each thread, I have reached a state where I will get contention? No, it means that if you enter a block with an EXCEPTION clause more than 64 times in a single transaction, performance will drop. > Is there any way to increase fro

Re: SubtransControlLock and performance problems

2020-02-17 Thread Lars Aksel Opsahl
From: Laurenz Albe; Sent: Monday, February 17, 2020 10:53 AM; To: Lars Aksel Opsahl, pgsql-performance@lists.postgresql.org; Subject: Re: SubtransControlLock and performance problems. Lars Aksel Opsahl wrote: >> What happens is that after some minutes t

Re: SubtransControlLock and performance problems

2020-02-17 Thread Laurenz Albe
Lars Aksel Opsahl wrote: > What happens is that after some minutes the CPU can fall to maybe 20% usage and most of > the threads are blocked by SubtransControlLock, and when the number of > SubtransControlLock waits goes down the CPU load increases again. The jobs usually go through without > any

SubtransControlLock and performance problems

2020-02-16 Thread Lars Aksel Opsahl
Hi On a server with 32 cores and 250 GB memory, with CentOS 7 and kernel 4.4.214-1.el7.elrepo.x86_64, I try to run 30 parallel threads using dblink. (https://github.com/larsop/postgres_execute_parallel) . I have tried to disconnect and reconnect in the dblink code and that did not help. If I

Re: How to avoid UPDATE performance degradation in a transaction

2020-02-14 Thread Michael Lewis
If your trigger is supposed to change certain fields, you could return OLD instead of NEW if those fields have not been changed by the trigger. You could also check an updated_on timestamp field to verify if the row has already been modified and potentially skip the trigger altogether. Just a

Re: How to avoid UPDATE performance degradation in a transaction

2020-02-14 Thread Karl Düüna
Thank you for the explanation. That is pretty much what I suspected, but I held out hope that there was some functionality I could use to clear the bloat as the transaction progresses and bring the UPDATE time back down again. "Don't do that" is sensible, but much more easily said than done, as the

Re: How to avoid UPDATE performance degradation in a transaction

2020-02-13 Thread Tom Lane
Karl Düüna writes: > -- TL; DR; > UPDATE on a row takes a relatively constant amount of time outside a > transaction block, but running UPDATE on a single row over and over inside > a transaction gets slower and slower as the number of UPDATE operations > increases. Yeah, that's

Re: How to avoid UPDATE performance degradation in a transaction

2020-02-13 Thread David G. Johnston
On Thu, Feb 13, 2020 at 1:42 PM Karl Düüna wrote: > It really isn't noticeable until about 5k UPDATEs on a single row. > Don't know why, and never dealt with a scenario where this would even come up, but that this doesn't perform well inside a transaction isn't surprising to me. Kinda

How to avoid UPDATE performance degradation in a transaction

2020-02-13 Thread Karl Düüna
Hi, I recently came across a performance problem with a big transaction block which doesn't make sense to me; hopefully someone more knowledgeable can explain the reasons and point out a direction for a solution. -- TL; DR; UPDATE on a row takes a relatively constant amount of time outside
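
A minimal reproduction sketch, assuming a one-row table t(id int, n int): every UPDATE of the same row inside one transaction leaves behind a dead row version that later UPDATEs must step over, so each pass gets slower.

```
CREATE TABLE t (id int PRIMARY KEY, n int);
INSERT INTO t VALUES (1, 0);

-- A DO block runs as a single transaction, so all 10000 updates
-- hit the same row and each one adds another dead row version.
DO $$
BEGIN
  FOR i IN 1..10000 LOOP
    UPDATE t SET n = i WHERE id = 1;
  END LOOP;
END
$$;
```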

Re: Fwd: TOAST table performance problem

2020-02-10 Thread Asya Nevra Buyuksoy
Ok, you are right. Thanks for everything. Jeff Janes wrote on Mon, Feb 10, 2020 at 15:54: > On Mon, Feb 10, 2020 at 7:38 AM Asya Nevra Buyuksoy > wrote: > >> Got it, thanks! I understand that PostgreSQL is not slow, but >> I want a piece of advice on how I can load this

Re: Fwd: TOAST table performance problem

2020-02-10 Thread Asya Nevra Buyuksoy
Feb 2020 Mon, 10:51 > Subject: Re: TOAST table performance problem > To: Andreas Joseph Krogh > > > I copied my data to the CSV file; yes, it is very fast. However, this does > not solve my problem. > After deserializing the data on the front end, I want to visualize my data on >

Re: Fwd: TOAST table performance problem

2020-02-10 Thread Luís Roberto Weck
-- Forwarded message - From: Asya Nevra Buyuksoy Date: Mon, Feb 10, 2020, 10:51 Subject: Re: TOAST table performance problem To: Andreas Joseph Krogh. I copied my data to the CSV file; yes, it is ver

Fwd: TOAST table performance problem

2020-02-09 Thread Asya Nevra Buyuksoy
-- Forwarded message - From: Asya Nevra Buyuksoy Date: Mon, Feb 10, 2020, 10:51 Subject: Re: TOAST table performance problem To: Andreas Joseph Krogh. I copied my data to the CSV file; yes, it is very fast. However, this does not solve my problem. After deserializing

Re: TOAST table performance problem

2020-02-07 Thread Andreas Joseph Krogh
Try \o in psql to redirect the output to a file and prevent psql from processing the json (i.e. formatting it). On February 7, 2020 at 15:59:05 CET, Asya Nevra Buyuksoy wrote: >I tried it, but there is no improvement. >I read this link about TOAST and also its sub-links;
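
A short illustration of the \o trick, using the zamazin table from this thread; the output path is arbitrary:

```
-- In psql: send results to a file so client-side rendering of the
-- large json values is taken out of the measurement.
\o /tmp/zamazin.out
SELECT paramdata FROM zamazin;
\o
```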

Re: TOAST table performance problem

2020-02-07 Thread Asya Nevra Buyuksoy
I tried it, but there is no improvement. I read this link about TOAST and also its sub-links: https://blog.gojekengineering.com/a-toast-from-postgresql-83b83d0d0683 When I execute this query without the JSON data, like: SELECT paramuser_id, paramperson_id, paramisdeleted, paramactiontime FROM

Re: TOAST table performance problem

2020-02-07 Thread Andreas Joseph Krogh
On Friday, February 7, 2020 at 15:35:04, Asya Nevra Buyuksoy < ayisan1...@gmail.com > wrote: I use pgadmin3. Try "psql"; it has the lowest overhead (I think). pgAdmin might spend time presenting the results etc., which is easy to overlook. -- Andreas Joseph Krogh

Re: TOAST table performance problem

2020-02-07 Thread Andreas Joseph Krogh
On Friday, February 7, 2020 at 15:16:13, Asya Nevra Buyuksoy < ayisan1...@gmail.com > wrote: Andreas Joseph Krogh wrote on 2/7/2020 8:41 AM: On Friday, February 7, 2020 at 14:23:35, Asya Nevra Buyuksoy < ayisan1...@gmail.com > wrote:

Re: TOAST table performance problem

2020-02-07 Thread Asya Nevra Buyuksoy
> > > Andreas Joseph Krogh wrote on 2/7/2020 8:41 AM: > > On Friday, February 7, 2020 at 14:23:35, Asya Nevra Buyuksoy < > ayisan1...@gmail.com> wrote: > > > My guess is the time is spent in the *client* retrieving the data, not in > the DB itself. Are you on a slow network? > >It

Re: TOAST table performance problem

2020-02-07 Thread MichaelDBA
Yes, I would concur that this planning time and execution time do not take into account the network time spent sending the data back to the client, especially since you are sending back the entire contents of the table. Regards, Michael Vitale. Andreas Joseph Krogh wrote on 2/7/2020 8:41 AM: On

Re: TOAST table performance problem

2020-02-07 Thread Andreas Joseph Krogh
On Friday, February 7, 2020 at 14:23:35, Asya Nevra Buyuksoy < ayisan1...@gmail.com > wrote: Sorry for the misunderstanding. I have a table like: CREATE TABLE zamazin ( paramuser_id text, paramperson_id integer, paramdata json, paramisdeleted boolean,

Re: TOAST table performance problem

2020-02-07 Thread Asya Nevra Buyuksoy
> > Andreas Joseph Krogh wrote on Fri, Feb 7, 2020 at 16:12: > >> On Friday, February 7, 2020 at 14:07:28, Asya Nevra Buyuksoy < >> ayisan1...@gmail.com> wrote: >> *[...]* >> *And of course you'll be encouraged to upgrade to the latest version (12.1) >> as 9.4.1 is now 5 years

Re: TOAST table performance problem

2020-02-07 Thread Asya Nevra Buyuksoy
Andreas Joseph Krogh wrote on Fri, Feb 7, 2020 at 16:12: > On Friday, February 7, 2020 at 14:07:28, Asya Nevra Buyuksoy < > ayisan1...@gmail.com> wrote: > > Hello everybody, > > [...] > > How can I achieve good performanc

Sv: TOAST table performance problem

2020-02-07 Thread Andreas Joseph Krogh
On Friday, February 7, 2020 at 14:07:28, Asya Nevra Buyuksoy < ayisan1...@gmail.com > wrote: Hello everybody, [...] How can I achieve good performance? Nobody here understands anything unless you show the exact query and schema... And of c

TOAST table performance problem

2020-02-07 Thread Asya Nevra Buyuksoy
some details on TOAST logic and increased some config settings like shared_buffers, work_mem, maintenance_work_mem, max_files_per_process. But there was no performance improvement in my query. I do not understand why this happens. My table size is 168 MB, but the TOAST table related to that table
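
For context, a standard way to compare the heap with its TOAST relation (the table name comes from later in the thread; sizes will differ per installation):

```
SELECT pg_size_pretty(pg_relation_size('zamazin'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('zamazin')) AS total_size,
       reltoastrelid::regclass                           AS toast_table
FROM pg_class
WHERE relname = 'zamazin';
```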

Re: Slow performance with trivial self-joins

2020-02-05 Thread David Rowley
(hidden) self-joins. > > I will concur with this use case being pretty common, but also something I > have actively avoided anywhere performance is important because of the > lack of this optimization. > > Even still, I have 20+ views like that in my database. I think the best di

Re: Slow performance with trivial self-joins

2020-02-05 Thread Adam Brusselback
so something I have actively avoided anywhere performance is important because of the lack of this optimization. Even still, I have 20+ views like that in my database.

Re: Slow performance with trivial self-joins

2020-02-05 Thread Benny Kramek
Thank you for your response. I have tested out the patch in the linked thread and it works very well on a bunch of complex queries that I have tested, improving both the planning time significantly and the execution time drastically. I have also read through the entire linked discussion thread as

Re: Slow performance with trivial self-joins

2020-02-03 Thread Tom Lane
Benny Kramek writes: > I expect the query plan to be identical for both of the below queries (and I > expect the performance to also be identical). [ shrug... ] Your expectation is mistaken. There is no code in Postgres to eliminate useless self-joins. People have been fooling

Slow performance with trivial self-joins

2020-02-03 Thread Benny Kramek
Hello, I am experiencing slow performance when joining a table against itself on its primary key column. I expect the query plan to be identical for both of the below queries (and I expect the performance to also be identical). But the second one is much slower: The FAST QUERY has a planning
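
A hypothetical illustration of the pattern (table and column names invented): both forms return the same rows, but Postgres plans the join literally instead of eliminating it.

```
-- The join on the primary key is logically a no-op, yet the planner
-- still scans and joins the table twice.
SELECT a.*
FROM big_table a
JOIN big_table b ON b.id = a.id;
```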

Re: Seeking reason behind performance gain in 12 with HashAggregate

2020-01-13 Thread Shira Bezalel
On Mon, Jan 13, 2020 at 2:15 PM Alvaro Herrera wrote: > On 2020-Jan-13, Shira Bezalel wrote: > > > Hi All, > > > > I'm testing an upgrade from Postgres 9.6.16 to 12.1 and seeing a > > significant performance gain in one specific query. This is really great, > >

Re: Seeking reason behind performance gain in 12 with HashAggregate

2020-01-13 Thread Alvaro Herrera
On 2020-Jan-13, Shira Bezalel wrote: > Hi All, > > I'm testing an upgrade from Postgres 9.6.16 to 12.1 and seeing a > significant performance gain in one specific query. This is really great, > but I'm just looking to understand why. pg12 reads half the number of buffers. I b

Re: Seeking reason behind performance gain in 12 with HashAggregate

2020-01-13 Thread Shira Bezalel
Thanks Tomas. I ran a vacuum full on the 9.6 table -- still no difference in the query plan. The shared buffers hit went up slightly to 36069. Shira On Mon, Jan 13, 2020 at 1:12 PM Tomas Vondra wrote: > On Mon, Jan 13, 2020 at 12:44:14PM -0800, Shira Bezalel wrote: > >Hi Michael, > > > >I
