Timothy Garnett <tgarn...@panjiva.com> wrote:
 
>>>    ->  Index Scan Backward using
>>> index_customs_records_on_month_and_bl_number on customs_records
>>> (cost=0.00..78426750.74 rows=48623 width=908) (actual
>>> time=171344.182..3858893.588 rows=100 loops=1)
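
The query itself wasn't quoted, but the index name and the 100 rows
returned suggest something of this shape (a guess at the statement,
not the actual one):

    SELECT *
      FROM customs_records
     ORDER BY month DESC, bl_number DESC
     LIMIT 100;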
 
We've seen a lot of those lately -- Index Scan Backward performing
far worse than the alternatives.  One part of it is that disk layout
and OS read-ahead are optimized for forward scans; but I don't think
we've properly accounted for the higher cost of moving backward
through our btree indexes, either.  To quote from the README for the
btree AM:
 
| A backwards scan has one additional bit of complexity: after
| following the left-link we must account for the possibility that
| the left sibling page got split before we could read it.  So, we
| have to move right until we find a page whose right-link matches
| the page we came from.  (Actually, it's even harder than that; see
| deletion discussion below.)
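
The effect is easy to demonstrate on a throwaway table -- a minimal
sketch, assuming an otherwise idle machine (the gap is most visible
with a cold cache, since OS read-ahead only helps the forward
direction):

    CREATE TABLE demo AS
      SELECT g AS id, repeat('x', 500) AS padding
        FROM generate_series(1, 1000000) g;
    CREATE INDEX demo_id_idx ON demo (id);
    VACUUM ANALYZE demo;

    SET enable_seqscan = off;  -- force the index paths for comparison
    EXPLAIN ANALYZE SELECT * FROM demo ORDER BY id;       -- Index Scan
    EXPLAIN ANALYZE SELECT * FROM demo ORDER BY id DESC;  -- Index Scan Backward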
 
I'm wondering whether the planner should apply some multiplier or
other adjustment to approximate the known higher cost of backward
scans.
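
In the meantime, a workaround for cases like the one above is to
declare a second index with the opposite sort order, so the
descending retrieval becomes a forward scan (this assumes the query
orders by month DESC, bl_number DESC; the index name here is made
up):

    CREATE INDEX index_customs_records_on_month_and_bl_number_desc
        ON customs_records (month DESC, bl_number DESC);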
 
-Kevin
