And I would absolutely crank up the autovacuum and analyze settings. Turn up
the cost limits, turn down the cost delays, decrease the scale factors.
Whatever you need to do so that autovacuum runs often. No need to
schedule a manual vacuum at all. Just don't wait until 20% of the table is
dead before it gets vacuumed.
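A minimal sketch of what that could look like (the numbers below are only illustrative starting points, not recommendations for this particular workload, and the "order" table name is taken from the plans quoted further down in this thread):

ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;   -- default is -1, i.e. vacuum_cost_limit (200)
ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '1ms';  -- default 2ms on 12+; lower it if the I/O headroom is there
SELECT pg_reload_conf();

-- per-table scale factors so vacuum/analyze kick in long before 20% of the rows are dead
ALTER TABLE "order" SET (
    autovacuum_vacuum_scale_factor  = 0.02,   -- default 0.20
    autovacuum_analyze_scale_factor = 0.01    -- default 0.10
);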
If you expect high cache hit ratios and/or have SSDs or similarly fast
storage, random_page_cost should be more like 1-2 rather than the default
4. When using jsonb, you'd normally have estimates based solely on the
constants for the associated datatype (1/3 or 2/3 for a nullable boolean,
for instance).
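For what it's worth, a quick way to try that out (1.1 is just an example value in the 1-2 range mentioned above):

SET random_page_cost = 1.1;                 -- per-session, handy for checking whether the plan flips
ALTER SYSTEM SET random_page_cost = 1.1;    -- cluster-wide, once you are happy with the result
SELECT pg_reload_conf();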
Hello,
Michael, Tom: thanks for all the insights and information in your previous
mails.
A quick update of the explain outputs (this time using explain (analyze,
buffers, verbose)):
*The good: *
*LOG Time: | 2022-02-28 09:30:01.400777+01 | order rows: | 9668*
Limit (cost=616.37..653.30 ro
Peter Adlersburg writes:
> Limit (cost=0.00..804.97 rows=10 width=22) (actual
> time=23970.845..25588.432 rows=1 loops=1)
> -> Seq Scan on "order" (cost=0.00..3863.86 rows=48 width=22) (actual
> time=23970.843..25588.429 rows=1 loops=1)
> Filter: (jsonb_to_tsvector('english'::regconfig, ...
You are getting a row estimate of 48 in both cases, so it seems it may be tied to
the free space map, which would mean more heap lookups from the index, to the
point where the planner thinks that doing a sequential scan is less costly.
What is random_page_cost set to? Do you have the default autovacuum/analyze
settings?
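For reference, those questions could be answered with something like the following (the relname is taken from the Seq Scan node in the quoted plan):

SHOW random_page_cost;

SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'order';

SELECT name, setting, source
FROM pg_settings
WHERE name LIKE 'autovacuum%' AND source <> 'default';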
Dear fellow DBAs,
I am seeking some guidance with the following case that our developers
have thrown at me, and I apologize in advance for this lengthy mail ...
$> postgres --version
postgres (PostgreSQL) 13.6
We are dealing with the following issue:
select version, content from orderstore