However, EXPLAIN always says seq scan for the test data I'm using. I've done 'set enable_seqscan to off' and it still says seq scan. I was curious how an index would help this query:
db1=> set enable_seqscan to off;
SET
Time: 5.732 ms
db1=> explain select * from t where f like 'xx%';
                             QUERY PLAN
-------------------------------------------------------------------
 Seq Scan on t  (cost=100000000.00..100002698.90 rows=89 width=14)
   Filter: (f ~~ 'xx%'::text)
(2 rows)
db1=> explain select * from t where lower(f) like 'xx%';
                              QUERY PLAN
--------------------------------------------------------------------
 Seq Scan on t  (cost=100000000.00..100002893.68 rows=390 width=14)
   Filter: (lower(f) ~~ 'xx%'::text)
(2 rows)
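As I understand it, enable_seqscan = off doesn't actually forbid seq scans; it just adds a huge cost penalty (which is where the cost=100000000.00 above comes from), so the planner still picks one when it has no usable index path at all. I've also read that a plain btree index can only drive a LIKE prefix match when the database locale is C, so the locale seems worth checking:

    -- Show the database's collation locale. A default-opclass btree
    -- index can only be used for LIKE prefix matches when this is
    -- C (or POSIX); under any other locale the planner has no index
    -- path for LIKE and falls back to a seq scan.
    show lc_collate;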
The table is:
db1=> \d t
      Table "public.t"
 Column | Type | Modifiers
--------+------+-----------
 f      | text |
Indexes:
    "i1" unique, btree (lower(f))
    "i2" unique, btree (f)
It contains roughly 250k rows of random 10-character strings (mixed upper- and lowercase letters and digits). Here's how the LIKE performs:
db1=> select * from t where f like 'xx%';
     f
------------
 xxEqfLZMkH
 xxBRRnLetJ
 ...
 xxFPYJEiYf
(98 rows)
Time: 452.613 ms
Would using an index potentially help the performance of this query, and if so, how do I get Postgres to use the index?
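If the locale is the problem, the fix I've seen suggested is an extra index built with the text_pattern_ops operator class, which supports LIKE prefix matching regardless of locale. Something like this (untested, and the index name is just one I made up):

    -- Untested sketch: a btree index using the text_pattern_ops
    -- operator class, which (per the docs) supports LIKE 'xx%'
    -- prefix matching in any locale. "i2_pattern" is a placeholder name.
    create index i2_pattern on t (f text_pattern_ops);

With that in place, the planner should be able to turn the anchored LIKE into an index range scan (effectively f >= 'xx' and f < 'xy'), without needing enable_seqscan turned off at all. The case-insensitive query has the same problem: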
db1=> select * from t where lower(f) like 'mmm%';
     f
------------
 MmmyEVmfSY
 MMmzolhHtq
 ...
 mMMWEQzlKm
(16 rows)
Time: 634.470 ms
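Presumably the same trick would apply to the expression index:

    -- Untested: the same opclass on the lower(f) expression, so the
    -- case-insensitive prefix search can use an index scan as well.
    -- Again, "i1_pattern" is just a placeholder name.
    create index i1_pattern on t (lower(f) text_pattern_ops);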
-- dave