Hello, I have a table with some 2.2 million rows on a Pentium 4, 1.8 GHz with 512 MB RAM. Some queries I launch take quite a long time, and I'm wondering whether this is normal, or whether I can get better performance somehow.
As an example, I have a field of type char(2) with 23 distinct possible values. When I launch a "select * from table where field = 'xx'", it takes a very long time (some 230194.10 msec, i.e. almost 4 minutes, according to "explain analyze"). I've tried indexing this column and vacuuming ("vacuum analyze"), but that doesn't change anything: "explain select" always shows a sequential scan. The correlation value in pg_stats for this column is 0.0583268, so a seq scan is probably cheaper than an index scan.

Maybe this is just the way it is and there is no way to improve performance, but if someone has tips on what I might try, they are very welcome!

One question I asked myself is whether char(2) is the best choice for this column. It (like most other columns in the table) contains codes that designate different characteristics (for example, a column 'sex' would contain '1'=male, '2'=female).

Moritz
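Not an authoritative answer, but a sketch of things worth trying, assuming a table named mytable and a column named field (both hypothetical names, since the real ones aren't given above). The key question is how many of the 2.2M rows actually match 'xx': with only 23 distinct values, a frequent value can match a large fraction of the table, and then a sequential scan genuinely is the cheapest plan.

```sql
-- Hypothetical names: mytable / field stand in for the real table and column.

-- 1. See how skewed the value distribution is; if 'xx' covers, say, 10% of
--    2.2M rows, an index scan would touch too many heap pages to win.
SELECT field, count(*) FROM mytable GROUP BY field ORDER BY count(*) DESC;

-- 2. If 'xx' is a rare value, a partial index covering only those rows is
--    small and cheap to maintain:
CREATE INDEX mytable_field_xx_idx ON mytable (field) WHERE field = 'xx';

-- 3. Refresh statistics and re-check the plan:
ANALYZE mytable;
EXPLAIN ANALYZE SELECT * FROM mytable WHERE field = 'xx';

-- 4. To test whether an index scan would actually be faster on this box,
--    disable seq scans for the current session only (never globally):
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM mytable WHERE field = 'xx';
SET enable_seqscan = on;
```

If the forced index scan is no faster, the low correlation (0.058) is the likely reason: matching rows are scattered across the whole table, so the index scan does near-random I/O. CLUSTER mytable USING some_index can physically reorder the table and raise the correlation, at the cost of an exclusive lock while it runs. On the data-type question, a smallint (2 bytes, plain integer comparison) is a reasonable alternative to char(2) for numeric codes like '1'/'2', though the type alone is unlikely to explain a 4-minute scan.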