I recently pg_dumpall'd my DB from a (used for testing) v20z install of
postgresql 8.0.1, and restored it to my production (but not yet in
service) v40z running the same version. The test DB has had multiple
millions of rows created/dropped during testing. The results of VACUUM
VERBOSE are giving me pause:

* v40z (hardly used after restore):

orfs=# vacuum full analyze verbose;
<snip>
INFO:  free space map: 114 relations, 84 pages stored; 1824 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 300000 pages = 1864 kB shared memory.
VACUUM


* v20z (after having undergone extensive tweaking, deletes, and inserts):

orfs=# vacuum full analyze verbose;
<snip>
INFO:  free space map: 53 relations, 13502 pages stored; 9776 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 300000 pages = 1864 kB shared memory.
VACUUM

1) Should I be concerned about the "total pages needed" figure? ISTM that
needing that many more FSM pages can't help but degrade relative
performance on my testing machine.

2) How is it that my FSM has different numbers of relations?

3) Are either of these effects normal for an oft-used (or not) DB?

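For context on where the DETAIL line comes from: the "Allocated FSM size"
reflects the max_fsm_relations and max_fsm_pages settings, which (if I
understand the 8.0 docs correctly) can be inspected from psql like so:

```sql
-- Show the configured free space map limits that produce the
-- "Allocated FSM size: 1000 relations + 300000 pages" DETAIL line.
SHOW max_fsm_relations;  -- 1000 is the 8.0 default
SHOW max_fsm_pages;      -- 300000 here; the 8.0 default is 20000
```

If "total pages needed" ever exceeds max_fsm_pages, the FSM can no longer
track all reusable space and tables start to bloat; raising the setting in
postgresql.conf requires a server restart.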
FWIW:
                         v40z      v20z
maintenance_work_mem   262144     16384 
shared_buffers          30000      1000

Thanks,
-- 
Karim Nassar
Department of Computer Science
Box 15600, College of Engineering and Natural Sciences
Northern Arizona University,  Flagstaff, Arizona 86011
Office: (928) 523-5868 -=- Mobile: (928) 699-9221

