Sean Leach wrote
> On Feb 25, 2008, at 1:19 PM, Stephen Denne wrote:
> >
> >> So should I do a vacuum full and then hope this doesn't
> >> happen again?
> >> Or should I run a VACUUM FULL after each aggregation run?
> >
> > If your usage pattern results in generating all of that unused space
> > in one transaction, and no further inserts or updates to that table
> > till next time you run the same process, then my guess is that you
> > probably should run a vacuum full on that table after each
> > aggregation run. In that case you wouldn't have to increase
> > max_fsm_pages solely to keep track of large amount of unused space
> > in that table, since you're cleaning it up as soon as you're
> > generating it.
> >
> > You earlier had 5.5 million row versions, 2 million of them dead but
> > not yet removable, and you said (even earlier) that the table had
> > 3.3 million rows in it.
> > You now say you've got 6.2 million row versions (with the same 2M
> > dead). So it looks like you're creating new row versions at quite a
> > pace, in which case increasing max_fsm_pages, and not worrying about
> > doing a vacuum full _every_ time is probably a good idea.
> 
> So 281727 should be the minimum I bump it to correct?

Please know that I'm very new at advising PostgreSQL users on how to tune 
their systems...

My understanding of your VACUUM VERBOSE output was that it was pointing out 
that max_fsm_pages is currently smaller than 281727, so there is no way the 
free space map can hold entries for all of the reusable space. However, I don't 
think it is hinting at, or recommending, a value that you should be using.

If you do nothing, the number of pages with reusable space will probably 
continue to grow, which also suggests it has been growing up to now.

So, for example, if your max_fsm_pages is currently only 20000, then perhaps 
only 20000 of the 281727 pages with reusable space are tracked in the free 
space map. The remaining roughly 260000 pages _may_ have accumulated over, say, 
20 aggregation runs, each of which created about 13000 more pages with reusable 
space than the map could track. If that were the case, then a max_fsm_pages of 
about 33000 might be large enough.
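
To put numbers on that guess (a sketch only; the run count of 20 and the 
current setting of 20000 are assumptions for illustration, not anything 
measured on your system):

```python
# Back-of-envelope FSM sizing, illustrating the reasoning above.
# All inputs are assumed/illustrative, not measured values.

current_max_fsm_pages = 20000       # assumed current setting
pages_needing_slots = 281727        # figure from your VACUUM VERBOSE output
runs = 20                           # assumed number of past aggregation runs

# Pages with reusable space that the map could never hold:
untracked = pages_needing_slots - current_max_fsm_pages   # 261727

# If that backlog built up evenly over 20 runs, each run spilled about:
per_run = untracked // runs                               # 13086 pages

# A map big enough for the current contents plus one run's spill-over
# might then be sufficient:
suggested = current_max_fsm_pages + per_run
print(suggested)                                          # 33086
```

The point is only that 281727 is an accumulated total, not a per-run 
requirement, so the right setting depends on how fast each run generates 
reusable space, which is why monitoring it is worthwhile.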

Do you see what I'm getting at?
I think that you should do a vacuum full of that table once, then monitor the 
number of pages in it with reusable space for a while (over a few iterations of 
your regular processes). That should give you information about how much larger 
your max_fsm_pages should be than it currently is.
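
Once you've settled on a figure, it goes in postgresql.conf; the value below is 
just a placeholder, and note that (in the 8.x releases) changing max_fsm_pages 
takes effect only after a server restart:

```
# postgresql.conf -- placeholder value; pick one based on your monitoring
max_fsm_pages = 300000
```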

Regards,
Stephen Denne.



