> "SD" == Shridhar Daithankar <[EMAIL PROTECTED]> writes:
SD> If you have 150MB type of data as you said last time, you could
SD> take a pg_dump of database, drop the database and recreate it. By
SD> all chances it will take less time than compacting a database from
SD> 2GB to 150MB.
That's i
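The dump-and-reload approach Shridhar describes could be sketched like this (the database name "mydb" is a placeholder; this needs a maintenance window, since the database is gone between the drop and the reload):

```
pg_dump mydb > mydb.sql      # dump schema + data to a script
dropdb mydb                  # drop the bloated database
createdb mydb                # recreate it empty
psql mydb < mydb.sql         # reload; tables come back fully compacted
```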
On Monday 13 October 2003 19:22, Seum-Lim Gan wrote:
> I am not sure I can do the full vacuum.
> If my system is doing updates in real time and needs to be
> up 24 hours a day, 7 days a week, non-stop, once I do
> vacuum full, even on that table, that table will
> get locked and any query or update
I am not sure I can do the full vacuum.
If my system is doing updates in real time and needs to be
up 24 hours a day, 7 days a week, non-stop, once I do
vacuum full, even on that table, that table will
get locked and any queries or updates that come in
will time out.
Any suggestion on what to do besid
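As a general note (this is not from the thread, but holds for PostgreSQL 7.2 and later): a plain, lazy VACUUM does not take an exclusive lock the way VACUUM FULL does, so it can run concurrently with reads and writes. It will not shrink the file, but run frequently it marks dead rows for reuse and keeps the table from bloating in the first place. The table name below is a placeholder:

```
-- Lazy VACUUM: no exclusive lock, safe on a 24x7 table.
-- Reclaims dead row space for reuse; does not return it to the OS.
VACUUM ANALYZE my_hot_table;
```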
Seum-Lim Gan wrote:
I have a table that keeps being updated and noticed
that after a few days, the disk usage has grown
from just over 150 MB to about 2 GB!
Hmm... you have quite a lot of wasted space there...
I followed the recommendations from various searches
of the archives, changed the m
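The FSM settings the archives point at live in postgresql.conf (server restart required to change them in this era). The values below are illustrative placeholders, not recommendations; they have to be sized to the actual installation:

```
# postgresql.conf -- illustrative values only
max_fsm_relations = 1000     # at least the number of tables and indexes
max_fsm_pages = 100000       # total free-space pages VACUUM can remember
```

If the free space map is too small to track all the dead space, plain VACUUM cannot record it for reuse and the table keeps growing, which matches the 150 MB to 2 GB symptom described here.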
On Mon, 13 Oct 2003, Seum-Lim Gan wrote:
> Hi,
>
> I did a search in the discussion lists and found several
> pointers about setting the max_fsm_relations and pages.
>
> I have a table that keeps being updated and noticed
> that after a few days, the disk usage has grown
> from just over 15