On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
There aren't too many simultaneous transactions on the database; most of the
time a transaction shouldn't exceed one minute (worst case). Except, as I need
to run a vacuum analyze on the whole database every day, it now takes 8 hours
to do the vacuum ...
Guillaume Cottenceau wrote:
Heikki, is there a theoretical need for frequent VACUUM when
max_fsm_pages is large enough to hold references to the dead rows?
Not really, if you don't mind that your table with 10 rows takes
hundreds of megabytes on disk. If max_fsm_pages is large enough, the
table size ...
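For readers who no longer remember the pre-8.4 free space map: max_fsm_pages is
a postgresql.conf setting, and the snippet below is only an illustration of the
knobs under discussion (example values, not a recommendation from the thread):

    # postgresql.conf (8.2-era; max_fsm_pages/max_fsm_relations were removed in 8.4)
    # The free space map must be able to remember every page on which VACUUM
    # found reusable space; otherwise that space is never reused and the files
    # keep growing even though the rows are dead.
    max_fsm_pages = 200000        # free-space page slots tracked, cluster-wide
    max_fsm_relations = 1000      # number of relations tracked

    -- A database-wide VACUUM VERBOSE ends with a "free space map contains ..."
    -- summary that shows whether these settings are large enough.
    VACUUM VERBOSE;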
Heikki Linnakangas writes:
> Pomarede Nicolas wrote:
> > But for the data (dead rows), even running a vacuum analyze every
> > day is not enough, and it doesn't truncate the empty pages at the end,
> > so the data size remains on the order of 200-300 MB when only a few
> > live rows are there.
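To put numbers on that kind of bloat, the on-disk size of the heap and its
indexes can be checked directly (the table name here is hypothetical; the
functions exist from 8.1 on):

    -- Heap size vs. total size including indexes and TOAST.
    SELECT pg_size_pretty(pg_relation_size('event_spool'))       AS heap,
           pg_size_pretty(pg_total_relation_size('event_spool')) AS total;

    -- Note: a plain VACUUM can only give back completely empty pages at the
    -- physical end of the file; free space in the middle is reused for new
    -- rows but never returned to the operating system.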
Pomarede Nicolas wrote:
There aren't too many simultaneous transactions on the database; most of
the time a transaction shouldn't exceed one minute (worst case). Except, as I
need to run a vacuum analyze on the whole database every day, it now takes 8
hours to do the vacuum (I changed the vacuum values to be a ...
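The thread does not show which vacuum settings were changed. Purely as an
illustration of the usual approach, the busy spool table can be vacuumed on
its own, far more often than a nightly database-wide pass, using the
cost-based delay available since 8.0 (table name and values are hypothetical):

    -- Gentle, frequent vacuum of just the hot table.
    SET vacuum_cost_delay = 10;    -- ms to sleep each time the cost limit is hit
    SET vacuum_cost_limit = 200;   -- accumulated cost that triggers the sleep
    VACUUM ANALYZE event_spool;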
On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
But for the data (dead rows), even running a vacuum analyze every day is
not enough, and it doesn't truncate the empty pages at the end, so the data
size remains on the order of 200-300 MB when only a few live rows are there. ...
"Pomarede Nicolas" <[EMAIL PROTECTED]> writes:
> Yes, I already do this on another spool table; I run a vacuum after
> processing it, but I wondered if there was another way to keep the disk
> size low for this table.
"after processing it" might be too soon if there are still transactions around ...
Pomarede Nicolas wrote:
On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
But for the data (dead rows), even running a vacuum analyze every day
is not enough, and it doesn't truncate the empty pages at the end, so
the data size remains on the order of 200-300 MB when only a few live
rows are there. ...
On Tue, 8 May 2007, Pomarede Nicolas wrote:
> As you can see, with hundreds of thousands of events a day, this table will
> need to be vacuumed regularly to avoid taking too much space (data and index).
>
> Note that processing rows is quite fast in fact, so at any time a count(*) on
> this table ...
On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
But for the data (dead rows), even running a vacuum analyze every day is
not enough, and it doesn't truncate the empty pages at the end, so the data
size remains on the order of 200-300 MB when only a few live rows are there. ...
"Pomarede Nicolas" <[EMAIL PROTECTED]> writes:
> But for the data (dead rows), even running a vacuum analyze every day is not
> enough, and it doesn't truncate the empty pages at the end, so the data size
> remains on the order of 200-300 MB when only a few live rows are there.
Try running v...
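The reply is cut off here, so the exact advice is not recoverable from this
listing. For completeness, the usual ways to actually return already-bloated
space to the operating system (as opposed to merely reusing it) are VACUUM
FULL or CLUSTER; the statements below are a sketch with hypothetical names:

    -- Rewrites the table compactly but takes an exclusive lock for the
    -- duration; in pre-9.0 releases it could also bloat the indexes, so a
    -- REINDEX afterwards was commonly advised.
    VACUUM FULL event_spool;
    REINDEX TABLE event_spool;

    -- CLUSTER rewrites the table in index order and rebuilds the indexes.
    CLUSTER event_spool_pkey ON event_spool;   -- 8.2-era syntax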
On Tue, 8 May 2007, Guillaume Cottenceau wrote:
Pomarede Nicolas writes:
Hello to all,
I have a table that is used as a spool for various events. Some
processes write data into it, and another process reads the resulting
rows, does some work, and deletes the rows that were just processed.
As you can see ...
On Tue, 8 May 2007, [EMAIL PROTECTED] wrote:
On Tue, 8 May 2007, Pomarede Nicolas wrote:
As you can see, with hundreds of thousands of events a day, this table will
need to be vacuumed regularly to avoid taking too much space (data and index).
Note that processing rows is quite fast in fact, so at any time a count(*) on
this table ...
Pomarede Nicolas wrote:
But for the data (dead rows), even running a vacuum analyze every day is
not enough, and it doesn't truncate the empty pages at the end, so the
data size remains on the order of 200-300 MB when only a few live
rows are there.
For a table like that you should run VA...
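The recommendation is truncated; a plausible reading (not the author's
confirmed wording) is to vacuum that one table much more often than once a
day, for example by enabling autovacuum, which was off by default before 8.3.
The settings below are illustrative only:

    # postgresql.conf -- example values, not numbers endorsed by the thread
    autovacuum = on
    autovacuum_naptime = 60                  # seconds between autovacuum checks
    autovacuum_vacuum_threshold = 200        # minimum dead rows before a vacuum
    autovacuum_vacuum_scale_factor = 0.05    # plus ~5% of the table's rows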
Pomarede Nicolas writes:
> Hello to all,
>
> I have a table that is used as a spool for various events. Some
> processes write data into it, and another process reads the resulting
> rows, does some work, and deletes the rows that were just processed.
>
> As you can see, with hundreds of thousands of events a day ...
Hello to all,
I have a table that is used as a spool for various events. Some processes
write data into it, and another process reads the resulting rows, does some
work, and deletes the rows that were just processed.
As you can see, with hundreds of thousands of events a day, this table will
need to be vacuumed regularly to avoid taking too much space (data and index). ...
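The actual schema never appears in the thread; as a minimal sketch of the
spool pattern being described (all names hypothetical), the producer/consumer
cycle that generates the dead rows looks roughly like this:

    -- Hypothetical spool table.
    CREATE TABLE event_spool (
        id          serial PRIMARY KEY,
        created_at  timestamptz NOT NULL DEFAULT now(),
        payload     text NOT NULL
    );

    -- Producers append events.
    INSERT INTO event_spool (payload) VALUES ('some event');

    -- The consumer reads a batch, processes it, and deletes what it handled.
    -- Every deleted row leaves a dead tuple behind, which is why the table
    -- bloats unless it is vacuumed frequently.
    BEGIN;
    SELECT id, payload FROM event_spool ORDER BY id LIMIT 100;
    -- ... application does the work ...
    DELETE FROM event_spool WHERE id = ANY ('{1,2,3}');  -- ids just processed
    COMMIT;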