Why not vacuum analyze both tables to ensure stats are up to date?
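As a sketch, that refresh could look like the following ("problem" and "problem_instance" are the table names I'm inferring from the schema identifiers mentioned elsewhere in the thread; substitute your own):

```sql
-- Refresh planner statistics and dead-tuple bookkeeping for both tables.
VACUUM (ANALYZE, VERBOSE) problem;
VACUUM (ANALYZE, VERBOSE) problem_instance;
```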
Have you customized default_statistics_target from its default of 100? A value
such as 250 might give you a more complete sample of the table without growing
the statistics tables so much that planning time increases significantly.
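To try that without a cluster-wide change, the target can be raised per column and the table re-analyzed (table and column names below are assumptions for illustration, not taken from the thread):

```sql
-- Raise the sample target only for the column used in the WHERE clause,
-- then re-ANALYZE so the larger sample takes effect.
ALTER TABLE problem_instance ALTER COLUMN problem_id SET STATISTICS 250;
ANALYZE problem_instance;

-- Or session-wide, for experimentation; affects ANALYZE runs in this session.
SET default_statistics_target = 250;
```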
Do
> I don't *think* we are using SSDs, but I'll need to confirm that.
Confirmed we are not using SSDs but '10K RPM SAS in RAID-10.'
I've also been hunting for other queries that show this behaviour,
and I've found one. The PG settings/versions will be different in this
example due to the
> If you're using SSD storage, or if the DB is small compared with
> shared_buffers or RAM, then random_page_cost should be closer to
> seq_page_cost.
I don't *think* we are using SSDs, but I'll need to confirm that.
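For reference, the cost settings mentioned above can be inspected and adjusted as below; the value 1.1 is a commonly suggested starting point for SSD or fully cached data, not something from this thread, and with spinning disks the default is usually closer to right:

```sql
SHOW random_page_cost;   -- default 4
SHOW seq_page_cost;      -- default 1

-- On fast random-I/O storage, or when the working set fits in RAM:
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
```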
> How large are the indexes? problem_id_idx1 ?
Using the query from
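One common way to check index sizes (a sketch using the built-in size functions, not necessarily the query referred to above; the table name is my assumption):

```sql
-- Size of the single index in question:
SELECT pg_size_pretty(pg_relation_size('problem_id_idx1'));

-- Combined size of all indexes on the table:
SELECT pg_size_pretty(pg_indexes_size('problem_instance'));
```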
On Wed, Apr 15, 2020 at 08:55:53PM +0100, singh...@gmail.com wrote:
> We have an odd issue where specifying the same where clause twice causes PG
> to pick a much more efficient plan. We would like to know why.
> Query B (this is the 'fast' query):
> UPDATE problem_instance SET processed = false
>
> Starting with PostgreSQL 9.6, foreign keys are also used to help with
> join selectivity estimations. I see you have a suitable foreign key
> from the schema you posted. You might want to add that to the list of
> reasons to upgrade.
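For illustration, a foreign key of the kind described might be declared as follows; the table and column names are assumptions based on the identifiers visible in the thread, not the actual posted schema:

```sql
-- From PostgreSQL 9.6 onward, the planner can use a constraint like this
-- to estimate that each problem_instance row joins to exactly one problem row.
ALTER TABLE problem_instance
    ADD CONSTRAINT problem_instance_problem_id_fkey
    FOREIGN KEY (problem_id) REFERENCES problem (id);
```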
Apologies for the delay in response. I've had "PostgreSQL
On Thu, 16 Apr 2020 at 07:56, singh...@gmail.com wrote:
> We have an odd issue where specifying the same where clause twice causes PG
> to pick a much more efficient plan. We would like to know why.
> The EXPLAIN ANALYZE for both queries can be found here:-
> Query A: