I'm not sure I understand your requirements fully. If the same request is
repeatedly made with the same parameters, you could implement a proxy web
server with a cron-driven script to purge stale pages. If substantially the
same data is being summarized, maintaining your own summary tables works;
if accessed enough, [...]
Some quick notes:
- Using a side effect of a function to update the database feels bad to me.
- How long does the SELECT into varQueryRecord function take / what does
its EXPLAIN look like?
- There are a lot of non-indexed columns on that delta master table, such
as key [...]
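Not part of the original mail, but as a sketch of the two suggestions above (the table and column names here are illustrative, not taken from the poster's schema), timing the statement and checking for a sequential scan would look like:

```sql
-- Show the actual plan and per-node timings the planner chose.
explain analyze
select * from delta_master where keyf = 42;

-- If the plan shows a sequential scan on a selective, non-indexed
-- column, an index is the usual first fix:
create index delta_master_keyf_idx on delta_master (keyf);
analyze delta_master;  -- refresh the planner's statistics
```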
Pg: 7.4.5
RH 7.3
8 GB RAM
200 GB drive space
RAID 0+1
Tables vacuumed on a nightly basis

The following process takes 8 hours to run on 90k records and I'm not sure
where to begin to look for the bottleneck. This isn't the only update on
this database that seems to take a long time to complete.
People:
Transparent "query caching" is the "industry standard" for how these things
are handled. However, Postgres' lack of this feature has made me consider
other approaches, and I'm starting to wonder if the "standard" query caching
-- where a materialized query result, or some reduction th[...]

The context of the discussion was a hack to speed queries against static
tables, so MVCC is not relevant. As soon as any work unit against a
referenced table commits, the cache is invalid, and in fact the table
shouldn't be a candidate for this caching for a while. In fact, this cache
would reduce [...]
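As an aside (not from the original poster): 7.4 has no built-in materialized views, but the "materialized query result" idea can be hand-rolled. A minimal sketch, assuming hypothetical `sales`/`sales_summary` tables and that plpgsql is installed in the database, with statement-level triggers (new in 7.4) doing the invalidation by rebuild:

```sql
-- Materialize the expensive aggregate once.
create table sales_summary as
  select region, sum(amount) as total
  from sales group by region;

-- Rebuild the cache whenever the base table changes.
create or replace function refresh_sales_summary() returns trigger as '
begin
  delete from sales_summary;
  insert into sales_summary
    select region, sum(amount) from sales group by region;
  return null;
end;
' language plpgsql;

create trigger sales_summary_refresh
  after insert or update or delete on sales
  for each statement execute procedure refresh_sales_summary();
```

A full rebuild per statement is the bluntest possible invalidation; it only pays off when reads vastly outnumber writes, which matches the static-table context above.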
Josh Berkus <[EMAIL PROTECTED]> writes:
>> Most of the problem here comes from the fact that "current_date - 7"
>> isn't reducible to a constant and so the planner is making bad guesses
>> about how much of each table will be scanned.
> I thought this was fixed in 7.4. No?
No. It's not fixed [...]
Tom,
> Most of the problem here comes from the fact that "current_date - 7"
> isn't reducible to a constant and so the planner is making bad guesses
> about how much of each table will be scanned.
I thought this was fixed in 7.4. No?
--
Josh Berkus
Aglio Database Solutions
San Francisco
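(Editorial aside, not from the thread: the usual workaround for the estimation problem above is to have the client interpolate a literal date, which the planner can fold to a constant and estimate selectivity for. Table `log` and column `ts` here are hypothetical.)

```sql
-- Non-constant expression: the planner can't estimate how much
-- of the table falls in the range, so it guesses.
select count(*) from log where ts > current_date - 7;

-- Literal substituted by the application: now a constant the
-- planner can compare against its statistics histogram.
select count(*) from log where ts > '2004-09-24'::date;
```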
---
> OK, that'd work too... the point is if you're re-connecting
> all the time it doesn't really matter what else you do for
> performance.
Yeah, although there is the chap who was asking questions on the list
recently who had some very long-running code on his app servers, so was best
off closing [...]
On Fri, Oct 01, 2004 at 06:43:42AM +0100, Matt Clark wrote:
>
> >If you're not using a connection pool of some kind then you might as
> >well forget query plan caching, because your connect overhead will swamp
> >the planning cost. This does not mean you have to use something like
> >pgpool (which [...]
Dustin Sallings wrote:
The following view creates the illusion of the old ``single-table''
model:
create view samples as
select * from samples_1999
union select * from samples_2000
union select * from samples_2001
union select * from samples_2002
union select * from samples_2003
Dustin Sallings <[EMAIL PROTECTED]> writes:
> The following view creates the illusion of the old ``single-table''
> model:
> create view samples as
> select * from samples_1999
> union select * from samples_2000
> union select * from samples_2001
> union select * from samples_ [...]
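One detail worth flagging in the view as written (an editorial note, using the same table names as above): plain `union` implies duplicate elimination, which forces a sort-and-unique pass over every partition's rows on each query. `union all` simply concatenates, which is what a partitioning scheme wants:

```sql
-- union all avoids the sort/unique step that plain union requires,
-- since rows from distinct year tables need no deduplication.
create view samples as
  select * from samples_1999
  union all select * from samples_2000
  union all select * from samples_2001
  union all select * from samples_2002
  union all select * from samples_2003;
```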
Dustin Sallings wrote:
[...]
OK, now that that's over with, I have this one particular query that
I attempt to run for a report from my phone that no longer works because
it tries to do a table scan on *some* of the tables. Why it chooses
this table scan, I can't imagine. The query is as [...]
To save some time, let me start by saying
PostgreSQL 7.4.3 on powerpc-apple-darwin7.4.0, compiled by GCC gcc
(GCC) 3.3 20030304 (Apple Computer, Inc. build 1640)
OK, now on to details...
I'm trying to implement oracle style ``partitions'' in postgres. I've
run into my first snag on [...]