Philip Warner wrote on Sun, 2002-11-03 at 15:41:
> At 03:25 PM 3/11/2002 +0500, Hannu Krosing wrote:
> >a separate backend in a loop that
> >kept doing VACUUM TABLE with only 5 seconds sleep between
>
> Good grief! I thought 5 minutes was bad enough. Can't wait for b/g vacuum.
> Thanks for the input; I'll wait for a day or so to get some figures as you suggested.
At 03:25 PM 3/11/2002 +0500, Hannu Krosing wrote:
> a separate backend in a loop that
> kept doing VACUUM TABLE with only 5 seconds sleep between
Good grief! I thought 5 minutes was bad enough. Can't wait for b/g vacuum.
Thanks for the input; I'll wait for a day or so to get some figures as you
suggested.
Philip Warner wrote on Sun, 2002-11-03 at 06:30:
> At 09:36 AM 2/11/2002 -0500, Tom Lane wrote:
>
> >Why not do frequent non-full vacuums on only that table, perhaps every
> >five minutes or so? That's certainly the direction that development is
> >headed in (we just haven't automated the vacuuming yet).
Philip Warner <[EMAIL PROTECTED]> writes:
> At 09:36 AM 2/11/2002 -0500, Tom Lane wrote:
>> BTW, the system does not actually have any stats about dead tuples.
>> What it knows about are live tuples and total disk pages occupied by
>> the table.
> So what made it choose the index scan?
Well, the
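Tom's point above — that the system keeps only a live-tuple estimate and a page count per table, not a dead-tuple count — can be checked directly in the system catalogs. A minimal sketch, with the table name taken from the EXPLAIN example later in this thread:

```sql
-- reltuples (live-tuple estimate) and relpages (pages on disk) are the
-- per-table figures VACUUM/ANALYZE maintain in pg_class; no dead-tuple
-- count is stored alongside them.
SELECT relname, reltuples, relpages
FROM pg_class
WHERE relname = 'often_updated';
```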
At 09:36 AM 2/11/2002 -0500, Tom Lane wrote:
> Why not do frequent non-full vacuums on only that table, perhaps every
> five minutes or so? That's certainly the direction that development is
> headed in (we just haven't automated the vacuuming yet).
Done this now, and I'll wait for a new high load t
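With background vacuum not yet available, "every five minutes or so" has to be scheduled by hand. A minimal sketch, assuming the table and database names seen in the EXPLAIN output later in this thread:

```sql
-- A plain (non-full) VACUUM only reclaims dead row versions and does not
-- take the exclusive lock VACUUM FULL needs, so it is cheap enough to run
-- frequently against one hot table.  From cron, for example:
--   */5 * * * *  psql mail -c 'VACUUM ANALYZE often_updated'
VACUUM ANALYZE often_updated;
```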
Philip Warner <[EMAIL PROTECTED]> writes:
> This comes about because we have a table with 800 rows, one or more of
> which are updated every second of most days. The result in normal usage is
> that the table contains about 1 tuples one hour after vacuuming. Also,
> the database tries to
Philip Warner <[EMAIL PROTECTED]> writes:
> At 07:39 PM 2/11/2002 +1100, Philip Warner wrote:
> The latter time is actually quite good; when the machine is more
> heavily loaded it goes up to 1ms.
>
> We currently vacuum/analyze daily, and analyze hourly.
Why not vacuum hourly (regular non-full vacuum)?
At 07:39 PM 2/11/2002 +1100, Philip Warner wrote:
To give some numbers:
And some more numbers, directly after a vacuum and analyze:
mail=# explain analyze select * from often_updated where id between
'-1' and '10';
Index Scan using barnet_users_id on often_updated (cost=0.00..3095.66
The optimizer seems to know about dead rows in tables (i.e. it will use an
index it would not ordinarily use if vacuum-full had been run, apparently
because it knows the table has many dead rows and only a few valid ones).
I was wondering if there would be any value in letting the optimizer replace
Michael Meskes <[EMAIL PROTECTED]> writes:
> I was just approached with an optimizer question I cannot answer. Does
> our optimizer know how expensive different comparisons are?
It does not, because there's no place it could look to find the info.
> If there are different comparisons in a where clause
Hi,
I was just approached with an optimizer question I cannot answer. Does
our optimizer know how expensive different comparisons are? That is, can
it do something like:
If there are different comparisons in a where clause, check the ints
first, then the strings, then everything with regexp, or like
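Since, per Tom's reply, the planner has no per-operator cost figures to consult, any such ordering has to be imposed by hand. One assumed workaround (column names are hypothetical, for illustration only) is to split the cheap predicate into a subquery:

```sql
-- Evaluate the cheap integer comparison first by isolating it in a
-- subquery; OFFSET 0 keeps the planner from flattening the subquery
-- away, so the regexp only runs on the rows that survive the int test.
-- (tab, int_col, and text_col are hypothetical names.)
SELECT *
FROM (SELECT * FROM tab WHERE int_col = 42 OFFSET 0) AS cheap
WHERE text_col ~ 'expensive.*regexp';
```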
Hannu Krosing wrote:
>
> Bruce Momjian wrote:
> >
> > > Hannu Krosing <[EMAIL PROTECTED]> writes:
> > > > Maybe rather
> > >
> > > > * Use indexes for min() and max() or convert to "SELECT col FROM tab
> > > > ORDER BY col DESC USING max_index_op LIMIT 1" if there is an index
> > > > on tab that uses btree(col max_index_op)
> Hannu Krosing <[EMAIL PROTECTED]> writes:
> > Maybe rather
>
> > * Use indexes for min() and max() or convert to "SELECT col FROM tab
> > ORDER BY col DESC USING max_index_op LIMIT 1" if there is an index
> > on tab that uses btree(col max_index_op)
>
> > it seems that in most other cases
> Bruce Momjian wrote:
> >
> > > "Reinoud van Leeuwen" <[EMAIL PROTECTED]> writes:
> > > > I have a table that contains almost 8 million rows. The primary key is a
> > > > sequence, so the index should have a good distribution. Why does the
> > > > optimizer refuse to use the index for getting the maximum value?
Bruce Momjian wrote:
>
> > "Reinoud van Leeuwen" <[EMAIL PROTECTED]> writes:
> > > I have a table that contains almost 8 million rows. The primary key is a
> > > sequence, so the index should have a good distribution. Why does the
> > > optimizer refuse to use the index for getting the maximum value?
Hannu Krosing <[EMAIL PROTECTED]> writes:
> Maybe rather
> * Use indexes for min() and max() or convert to "SELECT col FROM tab
> ORDER BY col DESC USING max_index_op LIMIT 1" if there is an index
> on tab that uses btree(col max_index_op)
> it seems that in most other cases the rewrite would
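Hannu's proposed rewrite can already be applied by hand; a sketch for the common case of a btree index in default ascending order (the `USING max_index_op` form in his proposal generalizes this to other operator classes):

```sql
-- max(col) is an ordinary aggregate, so "SELECT max(col) FROM tab" scans
-- the whole table.  With a btree index on col, this form reads a single
-- index entry from the high end instead:
SELECT col FROM tab
WHERE col IS NOT NULL   -- NULLs sort after non-NULLs, i.e. first in DESC order
ORDER BY col DESC
LIMIT 1;
```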
Bruce Momjian wrote:
>
> > Bruce Momjian wrote:
> > >
> > > > "Reinoud van Leeuwen" <[EMAIL PROTECTED]> writes:
> > > > > I have a table that contains almost 8 million rows. The primary key is a
> > > > > sequence, so the index should have a good distribution. Why does the
> > > > > optimizer refuse to use the index for getting the maximum value?
> "Reinoud van Leeuwen" <[EMAIL PROTECTED]> writes:
> > I have a table that contains almost 8 million rows. The primary key is a
> > sequence, so the index should have a good distribution. Why does the
> > optimizer refuse to use the index for getting the maximum value?
>
> The optimizer has no
"Reinoud van Leeuwen" <[EMAIL PROTECTED]> writes:
> I have a table that contains almost 8 million rows. The primary key is a
> sequence, so the index should have a good distribution. Why does the
> optimizer refuse to use the index for getting the maximum value?
The optimizer has no idea that max()
Hi,
I have a table that contains almost 8 million rows. The primary key is a
sequence, so the index should have a good distribution. Why does the
optimizer refuse to use the index for getting the maximum value?
(even after a vacuum analyze of the table)
radius=# explain select max(radiuspk) from