Running PostgreSQL 7.4.2, Solaris.
Client is reporting that the number of entries in an index is
greater than the number of rows in the table (1.9
million vs. 1.5 million). The index was automatically
created for a 'bigserial unique' column.
Database contains several tables with exactly the same
columns (including 'bigserial unique' column). This
is the only table where this index is out of line with
the actual # of rows.
Queries on this table take 40 seconds to retrieve 2000
rows as opposed to 1-2 seconds on the other tables.
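One way we've been checking whether the index itself is bloated is to compare the planner's stats for the table and its index in pg_class (table and index names below are placeholders for our actual ones):

```sql
-- reltuples/relpages are refreshed by VACUUM/ANALYZE;
-- for an index, reltuples is the number of index entries.
SELECT relname, relkind, reltuples, relpages
FROM pg_class
WHERE relname IN ('mytable', 'mytable_id_key');
```

A much larger reltuples (or relpages) for the index than for the table is what we're seeing on this one table only.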
We have been running 'VACUUM ANALYZE' very regularly.
In fact, our vacuum schedule has probably been
overkill. We have been running on a per-table basis
after every update (many per day, only inserts
occurring) and after every purge (one per day,
deleting a day's worth of data).
It is theoretically possible that at some time a
process was run that deleted all rows in the table
followed by a VACUUM FULL. In this case we would have
dropped/recreated our own indexes on the table but not
the index automatically created for the bigserial
column. If that happened, could that cause these
symptoms?
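For reference, what we do to our own indexes is roughly the following (names are placeholders); the implicit index backing the bigserial UNIQUE constraint can't be dropped independently, but as I understand it a REINDEX would rebuild it from scratch:

```sql
-- Our own indexes get dropped and recreated manually:
DROP INDEX mytable_foo_idx;
CREATE INDEX mytable_foo_idx ON mytable (foo);

-- The constraint-backed index would instead need:
REINDEX TABLE mytable;  -- rebuilds all indexes on the table
```

Is a REINDEX the right way to recover the space here?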
What about if an out-of-the-ordinary number of rows
were deleted (say 75% of rows in the table, as opposed
to normal 5%) followed by a 'VACUUM ANALYZE'? Could
things get out of whack because of that situation?