On Fri, 2004-08-06 at 23:18 +0000, Martin Foster wrote:
> Mike Benoit wrote:
> > On Wed, 2004-08-04 at 17:25 +0200, Gaetano Mendola wrote:
> >>>The queries themselves are simple, normally drawing information from one
> >>>table with few conditions or in the most complex cases using joins on
> >>>two table or sub queries. These behave very well and always have, the
> >>>problem is that these queries take place in rather large amounts due to
> >>>the dumb nature of the scripts themselves.
> >>Show us the EXPLAIN ANALYZE output for those queries and how many rows the
> >>tables contain; the table schema would also be useful.
> > If the queries themselves are optimized as much as they can be, and as
> > you say, it's just the sheer amount of similar queries hitting the
> > database, you could try using prepared queries for the ones that are most
> > often executed, to eliminate some of the overhead.
> > I've had relatively good success with this in the past, and it doesn't
> > take very much code modification.
> One of the biggest problems is most probably related to the indexes.
> Logging the information needed to see which queries are used and which
> are not carries a performance penalty of its own, so I cannot really
> make use of that for now.
> However, I am curious how one would go about preparing a query. Is this
> similar to a DBI prepare statement with placeholders, simply
> changing the values passed on execute? Or is this something database
> level, such as a view et cetera?
Yes, always optimize your queries and GUC settings first and foremost;
that's where you are likely to gain the most performance. After that, if
you still want to push things even further, I would try prepared queries.
I'm not familiar with DBI::Prepare, but I don't think it's what
you're looking for.
This is what you want:
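A minimal sketch of PostgreSQL's server-side PREPARE/EXECUTE, which plans a query once and reuses that plan on every execution (the table and column names here are hypothetical):

```sql
-- Plan the query once per session; $1 is a parameter placeholder.
PREPARE get_user (integer) AS
    SELECT name, email FROM users WHERE id = $1;

-- Execute it repeatedly with different values, skipping re-parsing
-- and re-planning each time.
EXECUTE get_user(42);
EXECUTE get_user(99);

-- Prepared statements last until the session ends; drop one explicitly
-- with DEALLOCATE when it is no longer needed.
DEALLOCATE get_user;
```

For a script that fires many near-identical simple queries, the parse/plan savings add up, since only the EXECUTE round-trips remain.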
Mike Benoit <[EMAIL PROTECTED]>