Try a), b), and c) in order on the "offending" tables as they address
the problem at increasing cost...
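(The a), b), and c) steps themselves are cut off above; purely as an illustration of what "increasing cost" usually means for a bloated table, the escalation tends to look something like the following. Table and index names are placeholders, and the CLUSTER syntax shown is the 8.1 form.)

    -- a) plain vacuum + analyze: marks dead rows reusable, no exclusive lock
    VACUUM ANALYZE offending_table;

    -- b) cluster: rewrites the table in index order; takes an exclusive lock,
    --    but often finishes faster than VACUUM FULL on badly bloated tables
    CLUSTER offending_table_pkey ON offending_table;

    -- c) vacuum full: compacts the table in place, exclusive lock for the duration
    VACUUM FULL offending_table;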
thanks a lot for the detailed information! the entire concept of vacuum isn't
yet that clear to me, so your explanations and hints are very much
appreciated. i'll definitely try these steps t
>>> in our db system (for a website), i notice performance boosts after a vacuum
>>> full. but then, a VACUUM FULL takes 50min+ during which the db is not really
>>> accessible to web-users. is there another way to perform maintenance tasks
>>> AND leave the db fully operable and acce
in our db system (for a website), i notice performance boosts after a vacuum
full. but then, a VACUUM FULL takes 50min+ during which the db is not really
accessible to web-users. is there another way to perform maintenance tasks
AND leave the db fully operable and accessible?
You're not doi
<[EMAIL PROTECTED]> writes:
> in our db system (for a website), i notice performance boosts after a vacuum
> full. but then, a VACUUM FULL takes 50min+ during which the db is not really
> accessible to web-users. is there another way to perform maintenance tasks
> AND leave the db fully operab
That does sound like a lack-of-vacuuming problem. If the performance
goes back where it was after VACUUM FULL, then you can be pretty sure
of it. Note that autovacuum is not designed to fix this for you: it
only ever issues regular vacuum not vacuum full.
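(A sketch of the distinction being made here, with a placeholder table name: plain VACUUM, the only thing autovacuum will ever run, just marks dead rows as reusable; once a table has already bloated, you have to compact it by hand and then vacuum often enough that it does not bloat again.)

    -- what autovacuum issues for you: no exclusive lock, space reused but not returned to the OS
    VACUUM ANALYZE bloated_table;

    -- what it will never issue for you: compacts the table, exclusive lock for the duration
    VACUUM FULL bloated_table;
    REINDEX TABLE bloated_table;  -- VACUUM FULL tends to leave indexes bloated, so reindex afterwards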
in our db system (for a website), i no
Antoine <[EMAIL PROTECTED]> writes:
> So... seeing as I didn't really do any investigation as to setting
> default sizes for storage and the like - I am wondering whether our
> performance problems (a programme running 1.5x slower than two weeks
> ago) might not be coming from the db (or rather,
On Mon, Jan 16, 2006 at 11:07:52PM +0100, Antoine wrote:
> performance problems (a programme running 1.5x slower than two weeks
> ago) might not be coming from the db (or rather, my maintaining of it).
> I have turned on stats, so as to allow autovacuuming, but have no idea
> whether that could
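(For context, a sketch from memory of the 8.1 configuration: autovacuum there depends on the row-level statistics collector, so roughly the following has to be enabled in postgresql.conf before it does anything; the exact values are just examples.)

    # postgresql.conf, PostgreSQL 8.1
    stats_start_collector = on   # statistics collector process must run
    stats_row_level       = on   # per-row stats; autovacuum needs these
    autovacuum            = on   # the integrated autovacuum daemon
    autovacuum_naptime    = 60   # seconds between checks of each database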
Hi,
We have a horribly designed postgres 8.1.0 database (not my fault!). I
am pretty new to database design and management and have really no idea
how to diagnose performance problems. The db has only 25-30 tables, and
half of them are only there because our codebase needs them (long story,
ag
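(Not from the thread, but a minimal starting point for the "no idea how to diagnose" part, with made-up table and column names: look at the plan of one slow statement, and at the per-table statistics to see what is being scanned the most.)

    -- where does the time actually go in one slow query?
    EXPLAIN ANALYZE SELECT * FROM some_table WHERE some_column = 42;

    -- with the stats collector on, which tables get hammered by sequential scans?
    SELECT relname, seq_scan, seq_tup_read, idx_scan
    FROM pg_stat_user_tables
    ORDER BY seq_tup_read DESC;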
"Marcos" <[EMAIL PROTECTED]> wrote
>
> I always thought that using * in SELECT hurt performance, making the
> search slower.
>
> But I read in a Postgres book that it increases the speed of the
> search.
>
> So now: which is faster?
>
If you mean use "*" vs. "explicitly nam
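(The reply is cut off above, so the following is only my own summary of the usual trade-off, with a hypothetical table: listing the columns is never slower than *, and it avoids shipping wide columns the application never reads; any speed advantage of * is about typing, not execution.)

    -- fetches every column, including ones the application never looks at
    SELECT * FROM customers WHERE id = 42;

    -- fetches only what is needed: less data on the wire, and the statement
    -- still means the same thing if columns are added to the table later
    SELECT id, name, email FROM customers WHERE id = 42;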
Alessandro Baretta <[EMAIL PROTECTED]> writes:
I am aware that what I am dreaming of is already available through
cursors, but in a web application, cursors are bad boys, and should be
avoided. What I would like to be able to do is to plan a query and run
the plan to retrieve a limited number of
On Mon, 2006-01-16 at 11:13 +0100, Alessandro Baretta wrote:
> I am aware that what I am dreaming of is already available through cursors, but
> in a web application, cursors are bad boys, and should be avoided. What I would
> like to be able to do is to plan a query and run the plan to ret
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> I wonder if we could have a way to "suspend" a transaction and restart
> it later in another backend. I think we could do something like this
> using the 2PC machinery.
> Not that I'm up for coding it; just an idea that crossed my mind.
It's not imposs
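(A sketch of what the existing two-phase-commit machinery in 8.1 already provides, with a made-up transaction id: a prepared transaction survives the backend that started it and can be finished later from any other connection. What it cannot do is be resumed and have more work executed inside it, which is the part Alvaro is speculating about; it also requires max_prepared_transactions > 0.)

    BEGIN;
    -- ... run statements as usual ...
    PREPARE TRANSACTION 'websession-42';   -- state is persisted, this session is now free

    -- later, from a completely different backend:
    COMMIT PREPARED 'websession-42';       -- or: ROLLBACK PREPARED 'websession-42';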
Tom Lane wrote:
> Alessandro Baretta <[EMAIL PROTECTED]> writes:
> > I am aware that what I am dreaming of is already available through
> > cursors, but in a web application, cursors are bad boys, and should be
> > avoided. What I would like to be able to do is to plan a query and run
> > the plan
Alessandro Baretta <[EMAIL PROTECTED]> writes:
> I am aware that what I am dreaming of is already available through
> cursors, but in a web application, cursors are bad boys, and should be
> avoided. What I would like to be able to do is to plan a query and run
> the plan to retrieve a limited numb
Hi,
I always thought that using * in SELECT hurt performance, making the
search slower.
But I read in a Postgres book that it increases the speed of the
search.
So now: which is faster?
Thanks
Thanks!
Of course I know that I can build materialized views with triggers, but
so far I've avoided using triggers altogether ... I would really
appreciate something like "create view foo (select * from b) materialize
on query".
But I'll look into your blog entry, thanks again!
Mike
On Mon
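(A minimal sketch of the trigger-maintained summary table Mike wants to avoid writing by hand, roughly in the spirit of the article linked in the next message; every name here is made up. This is the eager variant: the "view" is refreshed on every write to the base table rather than on query.)

    -- the "materialized view": an ordinary table holding precomputed results
    CREATE TABLE order_totals (
        customer_id integer PRIMARY KEY,
        total       numeric NOT NULL
    );

    CREATE OR REPLACE FUNCTION refresh_order_totals() RETURNS trigger AS $$
    BEGIN
        -- recompute just the affected customer's row
        DELETE FROM order_totals WHERE customer_id = NEW.customer_id;
        INSERT INTO order_totals (customer_id, total)
            SELECT customer_id, sum(amount)
            FROM orders
            WHERE customer_id = NEW.customer_id
            GROUP BY customer_id;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_refresh_totals
        AFTER INSERT OR UPDATE ON orders
        FOR EACH ROW EXECUTE PROCEDURE refresh_order_totals();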
hi mike
In particular I'm interested in a view which materializes whenever
queried, and is invalidated as soon as underlying data is changed.
from the german pgsql list earlier last week:
http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html
this seems to be pretty much what
On Mon, 16 Jan 2006 15:36:53 +0100
Michael Riess <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I've been reading an interesting article which compared different
> database systems, focusing on materialized views. I was wondering how
> the postgresql developers feel about this feature ... is it planned
>
Hi,
I've been reading an interesting article which compared different
database systems, focusing on materialized views. I was wondering how
the postgresql developers feel about this feature ... is it planned to
implement materialized views any time soon? They would greatly improve
both perfor
I am aware that what I am dreaming of is already available through cursors, but
in a web application, cursors are bad boys, and should be avoided. What I would
like to be able to do is to plan a query and run the plan to retrieve a limited
number of rows as well as the executor's state. This way
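(Not the planner/executor-state feature being wished for, but the usual workaround when a cursor cannot be held open across web requests, with a hypothetical table: re-run the query each time and restart it by key where the previous page stopped.)

    -- first page
    SELECT id, title FROM articles ORDER BY id LIMIT 20;

    -- next page: continue from the last id the client saw ("keyset" paging),
    -- which stays cheap deep into the result, unlike a large OFFSET
    SELECT id, title FROM articles WHERE id > 12345 ORDER BY id LIMIT 20;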