Re: [HACKERS] SELECT * FROM LIMIT 1; is really slow

Wed, 26 May 2004 16:07:20 -0700

Tom Lane wrote:
INFO: "csn_edges": found 0 removable, 16289929 nonremovable row versions in 2783986 pages


> That works out to just under 6 rows per 8K page, which wouldn't be too
> bad if the rows are 1K wide on average, but are they?  (You might want
> to run contrib/pgstattuple to get some exact information about average
> tuple size.)

The rows are "wide" - there's a PostGIS geometry present.


INFO: analyzing "public.csn_edges"
INFO: "csn_edges": 2783986 pages, 3000 rows sampled, 6724 estimated total rows


> This looks like a smoking gun to me.  The huge underestimate of number
> of rows from ANALYZE is a known failure mode of the existing sampling
> method when the early pages of the table are thinly populated.  (Manfred
> just fixed that for 7.5, btw.)
>
> I think you want to VACUUM FULL or CLUSTER the table, and then take a
> look at your FSM settings and routine vacuuming frequency to see if
> you need to adjust them to keep this from happening again.


I'm now clustering - thanks for the help!
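(What I'm running is roughly the following - the index name here is made up, and VACUUM FULL would be the alternative if reordering isn't wanted:)

    -- rewrite the table in index order, dropping all the dead space
    -- (csn_edges_pkey is a hypothetical index name)
    CLUSTER csn_edges_pkey ON csn_edges;
    -- or, without reordering:
    -- VACUUM FULL csn_edges;
    ANALYZE csn_edges;   -- refresh the planner stats afterwards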

The history of this table is quite short - I just created it last week.

The original table had a bigint column that I converted to int (using the "alter table csn_edges rename to csn_edges_backup; CREATE TABLE csn_edges AS SELECT a,b,c::int,d,e FROM csn_edges_backup; drop table csn_edges_backup;" trick). I don't think there were any changes to the current csn_edges table after it was created.
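(Spelled out, the rewrite was roughly this - the column names are just the placeholders above, and CREATE TABLE AS doesn't carry over indexes, constraints or defaults, so those need recreating by hand:)

    ALTER TABLE csn_edges RENAME TO csn_edges_backup;
    CREATE TABLE csn_edges AS
        SELECT a, b, c::int, d, e FROM csn_edges_backup;
    DROP TABLE csn_edges_backup;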


I have another copy of this table in another database - vacuum analyse verbose says it's "only" 1,500,000 pages (vs 2,800,000). Shouldn't vacuum know your table is wasting 10GB of space and fix it for you? Or at least HINT? Or a "TIDY" command?



Should I be upping my FSM to 2,000,000 pages?
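i.e. something along these lines in postgresql.conf (just my guess at the values - and I believe these need a postmaster restart to take effect):

    max_fsm_pages = 2000000      # pages with free space that vacuum can remember
    max_fsm_relations = 1000     # tables/indexes tracked in the free space map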

dave






