On 02/04/2011 01:26 PM, felix wrote:

> because I asked it to: -W. On the production server I need to enter a
> password, and I'm testing on dev first.

Right. I'm just surprised it threw up the prompt so many times.

> I just tried it with sudo, but still no report.

No no... you have to run the vacuum command with -U set to a database superuser, like the postgres user.
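
For example, connect to the database as the postgres user (something like psql -U postgres -W yourdb, with your own database name) and run a database-wide verbose vacuum. A minimal sketch:

-- run as a superuser so no tables get skipped; on pre-8.4 servers the
-- free space map summary is only printed after a database-wide vacuum
VACUUM VERBOSE;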

> but this means I have to manually run cluster from time to time, right?
> Not that there will be much or any reordering. Or it should be fine
> going forward with vacuum and enlarging the free space map.

It should be fine going forward. You only need to re-cluster if you want to force the table to remain in the order you chose, since it doesn't maintain the order for updates and new inserts. Since you're only doing it as a cleanup, that's not a concern for you.
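
For reference, repeating it later is just a one-liner. A quick sketch with made-up table and index names:

-- first time: pick the ordering index (8.3+ syntax)
CLUSTER my_table USING my_table_pkey;
-- later on: re-cluster using the index it remembered
CLUSTER my_table;
-- or re-cluster every previously clustered table you own
CLUSTER;

On older versions the first form is spelled CLUSTER my_table_pkey ON my_table instead.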

> do we know that? Many of the tables are fairly static. Only this one
> is seriously borked, and yet other related tables seem to be fine.

Probably not in your case. I just mean that any non-static table is going to have this problem. If you know what those are, great. I don't usually have that luxury, so I err on the side of assuming the whole DB is borked. :)

Also, here's a query you may find useful in the future. It reports the top 20 tables by size, along with their row counts and so on. It's a good way to find possibly bloated tables, or tables you could archive:

SELECT n.nspname AS schema_name, c.relname AS table_name,
       c.reltuples AS row_count,
       c.relpages*8/1024 AS mb_used,
       pg_total_relation_size(c.oid)/1024/1024 AS total_mb_used
  FROM pg_class c
  JOIN pg_namespace n ON (n.oid=c.relnamespace)
 WHERE c.relkind = 'r'
 ORDER BY total_mb_used DESC
 LIMIT 20;

The total_mb_used column is the table + all of the indexes and toast table space. The mb_used is just for the table itself. This will also help you see index bloat, or if a table has too much toasted data.
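
If total_mb_used dwarfs mb_used for one of those tables, you can break the difference down by index the same way. A sketch with a placeholder table name:

SELECT c.relname AS index_name,
       pg_relation_size(c.oid)/1024/1024 AS mb_used
  FROM pg_class c
  JOIN pg_index i ON (i.indexrelid = c.oid)
 WHERE i.indrelid = 'your_table'::regclass
 ORDER BY mb_used DESC;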

> well, who knew the defaults were unsane? :)

Not really "unsane," but for any large database, they're not ideal. This also goes for the default_statistics_target setting. If you haven't already, you may want to bump this up to 100 from the default of 10. Not enough stats can make the planner ignore indexes and other bad things, and it sounds like your DB is big enough to benefit from that.

Later versions have made 100 the default, so you'd just be catching up. :)
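
If you'd like to try the new target without touching postgresql.conf, something like this works (placeholder names, of course):

-- takes effect for new connections to that database
ALTER DATABASE yourdb SET default_statistics_target = 100;
-- or just for an especially skewed column
ALTER TABLE your_table ALTER COLUMN your_column SET STATISTICS 100;
-- re-analyze so the planner actually sees the richer stats
ANALYZE your_table;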

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
