Re: Temporarily very slow planning time after a big delete

2019-05-20 Thread David Rowley
On Tue, 21 May 2019 at 14:04, Walter Smith wrote: > I'm so sorry -- I meant to give the version, of course. It's 9.6.13. https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3ca930fc3 has been applied since then. It would be good if you could confirm the problem is resolved after a
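A quick way to confirm would be to compare the "Planning time" line that EXPLAIN ANALYZE prints before and after moving to a release containing that commit. A minimal sketch, assuming psql access (the actual slow query is not shown in the archive):

    -- confirm the running server version first
    SELECT version();
    -- every EXPLAIN ANALYZE output includes a "Planning time" line, so the
    -- planning cost of the affected query can be measured directly
    EXPLAIN ANALYZE SELECT 1;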

Re: Temporarily very slow planning time after a big delete

2019-05-20 Thread Walter Smith
Tom Lane writes: >I'm assuming your problem >query involved a join on the indexed column --- whether or not the >final plan did a mergejoin, the planner would consider this There's no join -- the query is SELECT "notifications".* FROM "notifications" WHERE "notifications"."person_id" = ? AND

Re: Temporarily very slow planning time after a big delete

2019-05-20 Thread Walter Smith
I'm so sorry -- I meant to give the version, of course. It's 9.6.13. Thanks, Walter On Mon, May 20, 2019 at 6:05 PM Tom Lane wrote: > Walter Smith writes: > > Today we deleted about 15 million rows in one transaction from this > table. > > Immediately afterwards, a particular SELECT started

Re: Temporarily very slow planning time after a big delete

2019-05-20 Thread Tom Lane
Walter Smith writes: > Today we deleted about 15 million rows in one transaction from this table. > Immediately afterwards, a particular SELECT started running very slowly -- > 500 to 3000 ms rather than the usual <1ms. > We did an EXPLAIN ANALYZE on this select and it was still doing an index >
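The effect Tom describes can be reproduced in miniature. A hedged sketch with a hypothetical table (names and row counts are illustrative, not from the thread): a range predicate whose bound lands in the histogram's last bucket makes the planner probe the index for the actual extreme value, and right after a bulk delete that probe has to wade through the dead entries:

    -- hypothetical reproduction of slow planning after a bulk delete
    CREATE TABLE t (id bigint PRIMARY KEY, payload text);
    INSERT INTO t SELECT i, 'x' FROM generate_series(1, 1000000) i;
    ANALYZE t;                          -- histogram now spans ids 1..1000000
    DELETE FROM t WHERE id > 500000;    -- dead index entries remain until vacuum
    -- the planner probes the index for the true maximum of id here, scanning
    -- ~500k dead entries; note the inflated "Planning time" line
    EXPLAIN ANALYZE SELECT * FROM t WHERE id > 999999;
    VACUUM t;                           -- removes the dead index entries
    EXPLAIN ANALYZE SELECT * FROM t WHERE id > 999999;  -- planning fast again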

Re: Temporarily very slow planning time after a big delete

2019-05-20 Thread David Rowley
On Tue, 21 May 2019 at 12:44, Walter Smith wrote: > > We had a mysterious (to us) slowdown today that I'm hoping someone can > explain just based on PG's principles of operation. It got better by itself > so it seems like it was "normal" behavior -- I just don't know what behavior > it was

Temporarily very slow planning time after a big delete

2019-05-20 Thread Walter Smith
We had a mysterious (to us) slowdown today that I'm hoping someone can explain just based on PG's principles of operation. It got better by itself so it seems like it was "normal" behavior -- I just don't know what behavior it was exhibiting. We have a table of user notifications containing about
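Since the slowdown cured itself once vacuum caught up, the usual mitigation after a bulk delete is to vacuum promptly rather than wait for autovacuum. A hedged sketch (the table name is from the post; the threshold value is illustrative):

    -- remove the dead tuples (and dead index entries) right after the delete
    VACUUM notifications;
    -- or make autovacuum react to this table much sooner than the default
    ALTER TABLE notifications SET (autovacuum_vacuum_scale_factor = 0.01);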

Re: Generic Plans for Prepared Statement are 158155 times slower than Custom Plans

2019-05-20 Thread Tomas Vondra
On Mon, May 20, 2019 at 09:37:34PM +, Deepak Somaiya wrote: wow, this is interesting! @Tom, Bruce, David - Experts: any idea why changing the datatype would cause so much degradation - even when the plan remains the same and the data is the same? Deepak On Friday, May 17, 2019, 2:36:05 AM

Re: Generic Plans for Prepared Statement are 158155 times slower than Custom Plans

2019-05-20 Thread Deepak Somaiya
wow, this is interesting! @Tom, Bruce, David - Experts: any idea why changing the datatype would cause so much degradation - even when the plan remains the same and the data is the same? Deepak On Friday, May 17, 2019, 2:36:05 AM PDT, Naik, Sameer wrote: Deepak, I changed the
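The switchover being discussed can be watched directly. On releases before PostgreSQL 12 (which added plan_cache_mode), the server builds custom plans for the first five executions of a prepared statement and then may silently switch to a generic plan if its estimated cost compares well. A minimal sketch against a catalog table, with illustrative parameter values:

    -- observe the custom -> generic plan switch for a prepared statement
    PREPARE q(int) AS SELECT count(*) FROM pg_class WHERE relpages = $1;
    EXPLAIN EXECUTE q(1);  -- executions 1-5: custom plan, $1 folded to a constant
    EXPLAIN EXECUTE q(1);
    EXPLAIN EXECUTE q(1);
    EXPLAIN EXECUTE q(1);
    EXPLAIN EXECUTE q(1);
    EXPLAIN EXECUTE q(1);  -- sixth execution: a generic plan may appear, with
                           -- "$1" left in the filter instead of the constant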

Re: Trying to handle db corruption 9.6

2019-05-20 Thread Tomas Vondra
On Mon, May 20, 2019 at 08:20:33PM +0300, Mariel Cherkassky wrote: Hey Greg, Basically my backup was made after the first pg_resetxlog, so I was wrong. Bummer. However, the customer had a secondary machine that wasn't synced for a month. I have all the WALs since the moment the

Re: Trying to handle db corruption 9.6

2019-05-20 Thread Tomas Vondra
On Mon, May 20, 2019 at 04:20:45PM +, Bimal wrote: I had run into the same issue about a year back; luckily I had a standby to quickly promote. But I wish there was better documentation on how to handle WAL log fill-up and resetting them. pg_resetxlog is not a tool to deal with "WAL
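Before reaching for anything destructive, the usual reasons WAL piles up can be checked from SQL. A hedged diagnostic sketch (9.6 catalog views):

    -- is WAL archiving failing, forcing old segments to be retained?
    SELECT archived_count, failed_count, last_failed_wal
      FROM pg_stat_archiver;
    -- is a stale replication slot pinning old WAL?
    SELECT slot_name, active, restart_lsn
      FROM pg_replication_slots;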

Re: Analyze results in more expensive query plan

2019-05-20 Thread Jeremy Altavilla
Thanks for the suggestion. I created extended statistics objects for the two tables in question. Unfortunately the resulting plan was the same (and had the same estimates). It looks like the extended stats discovered a potentially useful correlation on bag: "2, 3 => 1" (owner_id, bag_type_id =>
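For reference, a hedged sketch of the kind of statistics object being discussed; the table definition is hypothetical, and "dependent_col" is a stand-in for the column name the archive truncates ("2, 3 => 1" meaning owner_id and bag_type_id functionally determine column 1). The catalog query assumes the PostgreSQL 10/11 layout:

    -- hypothetical table standing in for the thread's "bag"
    CREATE TABLE bag (dependent_col int, owner_id int, bag_type_id int);
    CREATE STATISTICS bag_stats (dependencies)
        ON dependent_col, owner_id, bag_type_id FROM bag;
    ANALYZE bag;
    -- inspect the discovered functional dependencies
    SELECT stxname, stxdependencies
      FROM pg_statistic_ext
     WHERE stxname = 'bag_stats';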

Re: Trying to handle db corruption 9.6

2019-05-20 Thread Mariel Cherkassky
Hey Greg, Basically my backup was made after the first pg_resetxlog, so I was wrong. However, the customer had a secondary machine that wasn't synced for a month. I have all the WALs since the moment the secondary went out of sync. Once I started it, I hoped that it would start recovering the WALs and

Re: Trying to handle db corruption 9.6

2019-05-20 Thread Bimal
I had run into the same issue about a year back; luckily I had a standby to quickly promote. But I wish there was better documentation on how to handle WAL log fill-up and resetting them. On Monday, May 20, 2019, 9:08:19 AM PDT, Mariel Cherkassky wrote: A backup was made after the

Re: Trying to handle db corruption 9.6

2019-05-20 Thread Mariel Cherkassky
A backup was made after the corruption appeared but before I tried using the pg_resetxlog command. Basically I just want to start the database with the data that is available in the files (I'm OK with losing data that was in the cache and wasn't written to disk). My question is how can I continue

Re: Trying to handle db corruption 9.6

2019-05-20 Thread Mariel Cherkassky
Yes, I understand that. I'm trying to handle it after the backup that I have taken. On Mon, May 20, 2019, 5:49 PM Flo Rance wrote: > Hi, > > First of all, as stated in the wiki, you'll need to do a filesystem level > copy of the database files and put them on another drive before attempting >

Re: Trying to handle db corruption 9.6

2019-05-20 Thread Flo Rance
Hi, First of all, as stated in the wiki, you'll need to do a filesystem-level copy of the database files and put them on another drive before attempting to do anything else! https://wiki.postgresql.org/wiki/Corruption regards, Flo On Mon, May 20, 2019 at 4:40 PM Mariel Cherkassky <

Trying to handle db corruption 9.6

2019-05-20 Thread Mariel Cherkassky
Hey, I'm trying to handle a corruption that one of our customers is facing. His disk space was full, and as a result he decided to run pg_resetxlog a few times (bad idea...). When I connected to the machine, I saw that the db was down. When I started the db (service postgresql start) I saw