vacuumdb -a -v -z -U postgres -W &> vacuum.log
Password:
Password:
Password:
Password:
Password:
Password:
Password:
Password:
Password:
Password:
Password:
cruxnu:nsbuildout crucial$

Do you think it's possible that it simply doesn't have anything to complain
about?
Or is the password prompting affecting it?
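
Actually, thinking about it: -a reconnects once per database and -W forces a
prompt for every connection, which would explain all those Password: lines.
A ~/.pgpass entry should silence them; a rough sketch (the password here is
obviously a placeholder):

    # ~/.pgpass (must be chmod 0600 or libpq ignores it)
    # format: hostname:port:database:username:password
    localhost:5432:*:postgres:mysecret

    vacuumdb -a -v -z -U postgres &> vacuum.log    # drop -W; libpq reads ~/.pgpass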

In any case, I'm not sure I want to run this on production, even at night.

What is the downside of estimating max_fsm_pages too high?

3000000 should be safe;
it's certainly not 150k.
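
For what it's worth, my understanding is that each max_fsm_pages slot costs
about six bytes of shared memory, so overshooting is cheap. A sketch of what
I'd put in postgresql.conf (values are my guesses, not tested):

    # postgresql.conf (pre-8.4 only; the free space map moved to disk in 8.4)
    max_fsm_pages = 3000000      # ~6 bytes per slot => roughly 18 MB shared memory
    max_fsm_relations = 1000     # one slot per table/index with tracked free space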

I have one very large table (10M rows) that is being analyzed before I
warehouse it.
That could've been the monster that ate the free space map.
I think today I've learned that even unused tables affect Postgres
performance.
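
Before blaming that table I should probably verify. A quick check, assuming
the stats collector is on (n_dead_tup is in pg_stat_user_tables from 8.3):

    SELECT relname,
           n_dead_tup,
           pg_size_pretty(pg_relation_size(relid)) AS size
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC
     LIMIT 10;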


And do you agree that I should run CLUSTER on this table?
I have no problem stopping all tasks that hit it at night and just reloading
it.
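
Concretely, I mean something like this in the nightly window; table and index
names are made up, and CLUSTER takes an exclusive lock on the table (which is
why I'd stop the tasks first):

    CLUSTER my_big_table USING my_big_table_pkey;   -- first run picks the index
    ANALYZE my_big_table;                           -- CLUSTER doesn't update stats

    -- subsequent nights can just repeat the remembered ordering:
    CLUSTER my_big_table;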



On Fri, Feb 4, 2011 at 6:47 PM, Shaun Thomas <stho...@peak6.com> wrote:

> On 02/04/2011 11:44 AM, felix wrote:
>
>> the very end:
>>
>> There were 0 unused item pointers.
>> 0 pages are entirely empty.
>> CPU 0.00s/0.00u sec elapsed 0.00 sec.
>> INFO:  analyzing "public.seo_partnerlinkcategory"
>> INFO: "seo_partnerlinkcategory": scanned 0 of 0 pages, containing 0 live
>> rows and 0 dead rows; 0 rows in sample, 0 estimated total rows
>>
>
> That looks to me like it didn't finish. Did you fork it off with '&' or run
> it and wait until it gave control back to you?
>
> It really should be telling you how many pages it wanted, and how many are
> in use.
> If not, something odd is going on.
>
>
> --
> Shaun Thomas
> OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
> 312-676-8870
> stho...@peak6.com
