[ discussion of server-side result caching ]
And let's not forget the major fork PG will throw into things: MVCC.
The results of query A may hold true for txn 1, but not for txn 2, and so on.
That would have to be taken into account as well and would greatly
complicate things.
It is always possible
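To make the MVCC objection concrete, here is a toy sketch (not PG internals; all names invented) of why a single shared result cache breaks when two transactions hold different snapshots:

```python
# Illustrative sketch: under MVCC, the "right" result for the same
# query depends on the reader's snapshot, so one shared cache entry
# cannot serve both transactions. Names here are invented.
class MVCCStore:
    def __init__(self):
        self.versions = []  # append-only list of (xid, value)

    def write(self, xid, value):
        self.versions.append((xid, value))

    def read(self, snapshot_xid):
        # A transaction sees only versions committed at or before
        # its snapshot; later writes are invisible to it.
        visible = [v for xid, v in self.versions if xid <= snapshot_xid]
        return visible[-1] if visible else None

store = MVCCStore()
store.write(xid=100, value="old")
store.write(xid=200, value="new")

# txn 1 took its snapshot at xid 150, txn 2 at xid 250:
assert store.read(150) == "old"   # a result cached for txn 1...
assert store.read(250) == "new"   # ...would be wrong for txn 2
```

A cache would therefore have to key entries by snapshot visibility (or invalidate on any committed write), which is where the complication comes in.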
On Mon, 27 Sep 2004 18:20:48 +0100, Matt Clark <[EMAIL PROTECTED]> wrote:
> This is very true. Client side caching is an enormous win for apps, but it
> requires quite a lot of logic, triggers to update last-modified fields on
> relevant tables, etc etc. Moving some of this logic to the DB would
> Gaetano,
>
> > don't you think the best statistic target for a boolean
> > column is something like 2? Or, in general, is it useless to
> > have a statistics target > the data type's cardinality?
>
> It depends, really, on the proportionality of the boolean values; if they're
> about equal, I certain
On Thu, Sep 23, 2004 at 08:29:25AM -0700, Mr Pink wrote:
> Not knowing anything about the internals of pg, I don't know how this
> relates, but in theory, query plan caching is not just about saving time
> re-planning queries, it's about scalability.
> Optimizing queries requires shared locks
Gaetano,
> don't you think the best statistic target for a boolean
> column is something like 2? Or, in general, is it useless to
> have a statistics target > the data type's cardinality?
It depends, really, on the proportionality of the boolean values; if they're
about equal, I certainly wouldn't raise
Gregory Stark <[EMAIL PROTECTED]> writes:
> No, actually the stats table keeps the n most common values and their
> frequency (usually in percentage). So really a target of 2 ought to be enough
> for boolean values. In fact that's all I see in pg_statistic; I'm assuming
> there's a full histogram s
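The point about the most-common-values list can be sketched in a few lines. This is a hypothetical illustration of the idea (not pg_statistic's actual layout): with a statistics target of n, the n most common values and their frequencies are kept, so n = 2 already describes a boolean column completely.

```python
# Toy model of a most-common-values (MCV) list: keep the n most
# common values and their frequency as a fraction of the sample.
from collections import Counter

def most_common_values(sample, target):
    counts = Counter(sample)
    total = len(sample)
    return [(value, count / total)
            for value, count in counts.most_common(target)]

sample = [True] * 900 + [False] * 100
mcv = most_common_values(sample, target=2)
# Two entries fully describe a boolean column's distribution:
assert mcv == [(True, 0.9), (False, 0.1)]
```

For a column with only two possible values, any target above 2 buys nothing, which matches the intuition in the question above.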
Basically you set a default in seconds for the HTML results to be
cached, and then have triggers set that force the cache to regenerate
(whenever CRUD happens to the content, for example).
Can't speak for Perl/Python/Ruby/.Net/Java, but Cache_Lite sure made a
believer out of me!
Nice to have it
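The pattern described above (TTL-based HTML caching plus trigger-driven regeneration) can be sketched roughly as follows. This is not Cache_Lite's actual API; all names are illustrative.

```python
# Minimal sketch of TTL-cached HTML with explicit invalidation.
# The invalidate() hook stands in for a database trigger firing
# when CRUD happens to the underlying content.
import time

class HtmlCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}  # key -> (html, stored_at)

    def get(self, key):
        entry = self.entries.get(key)
        if entry is None:
            return None
        html, stored_at = entry
        if time.time() - stored_at > self.ttl:
            del self.entries[key]   # expired; caller regenerates
            return None
        return html

    def put(self, key, html):
        self.entries[key] = (html, time.time())

    def invalidate(self, key):
        # Called by the CRUD hook when the content changes.
        self.entries.pop(key, None)

cache = HtmlCache(ttl_seconds=60)
cache.put("/page/1", "<h1>hello</h1>")
assert cache.get("/page/1") == "<h1>hello</h1>"
cache.invalidate("/page/1")   # content changed -> force regeneration
assert cache.get("/page/1") is None
```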
It might be easiest to shove the caching logic into pgpool instead.
...
When pg_pool is told to cache a query, it can get a table list and
monitor those tables for changes. When it sees a change, it simply dumps the cache.
It's certainly the case that the typical web app (which, along with
warehouses, seems to
> More to the point though, I think this is a feature that really really
> should be in the DB, because then it's trivial for people to use.
How does putting it into PGPool make it any less trivial for people to
use?
Josh Berkus wrote:
| Gaetano,
|
|
|> don't you think the best statistic target for a boolean
|> column is something like 2? Or, in general, is it useless to
|> have a statistics target > the data type's cardinality?
|
|
| It depends, really, on the proportionality
On Mon, Sep 27, 2004 at 09:19:12PM +0100, Matt Clark wrote:
> >Basically you set a default in seconds for the HTML results to be
> >cached, and then have triggers set that force the cache to regenerate
> >(whenever CRUD happens to the content, for example).
> >
> >Can't speak for Perl/Python/Ruby/
More to the point though, I think this is a feature that really really
should be in the DB, because then it's trivial for people to use.
How does putting it into PGPool make it any less trivial for people to
use?
The answers are at
http://www2b.biglobe.ne.jp/~caco/pgpoo
Any competently written application where caching results would be a
suitable performance boost can already implement application or
middleware caching fairly easily, and increase performance much more
than putting result caching into the database would.
I guess the performance increase is that
Tom Lane wrote:
Gregory Stark <[EMAIL PROTECTED]> writes:
No, actually the stats table keeps the n most common values and their
frequency (usually in percentage). So really a target of 2 ought to be enough
for boolean values. In fact that's all I see in pg_statistic; I'm assuming
there's a full his
On Tue, 2004-09-28 at 08:42, Gaetano Mendola wrote:
> Now I'm reading an article, written by the same author that inspired the magic "300"
> in analyze.c, about "Self-tuning Histograms". If this is implemented, I understood
> we could get rid of "vacuum analyze" for keeping the statistics up to date.
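The self-tuning-histograms idea referenced above (adjusting histogram buckets from query feedback instead of rebuilding stats with ANALYZE) can be illustrated with a toy refinement step. This is a hedged sketch of the general technique, not PG code and not the paper's exact algorithm.

```python
# Feedback-driven histogram refinement: after a query touching range
# [lo, hi) observes actual_rows, nudge the touched buckets' estimated
# counts toward the observed value, damped to avoid overshooting.
def refine(buckets, lo, hi, actual_rows, damping=0.5):
    # buckets: list of [low, high, estimated_rows]
    touched = [b for b in buckets if b[0] < hi and b[1] > lo]
    if not touched:
        return
    estimated = sum(b[2] for b in touched)
    error = actual_rows - estimated
    for b in touched:
        # distribute the error proportionally across touched buckets
        share = b[2] / estimated if estimated else 1 / len(touched)
        b[2] += damping * error * share

buckets = [[0, 50, 100.0], [50, 100, 100.0]]
refine(buckets, lo=0, hi=50, actual_rows=300)  # estimate was far low
assert buckets[0][2] == 200.0   # moved halfway toward observed count
assert buckets[1][2] == 100.0   # untouched bucket unchanged
```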
Neil Conway wrote:
On Tue, 2004-09-28 at 08:42, Gaetano Mendola wrote:
Now I'm reading an article, written by the same author that inspired the magic "300"
in analyze.c, about "Self-tuning Histograms". If this is implemented, I understood
we could get rid of "vacuum analyze" for keeping the statistics up to date
Jim,
I can only tell you (roughly) how it works with Oracle, and it's a very well
documented and laboured point over there - it's the cornerstone of Oracle's
scalability architecture, so if you don't believe me, or my explanation is
just plain lacking, then it wouldn't be a bad idea to check it out
"Iain" <[EMAIL PROTECTED]> writes:
> I can only tell you (roughly) how it works with Oracle,
Which unfortunately has little to do with how it works with Postgres.
This "latches" stuff is irrelevant to us.
In practice, any repetitive planning in PG is going to be consulting
catalog rows that it dra
Hi Tom,

> This "latches" stuff is irrelevant to us.

Well, that's good to know anyway, thanks for setting me straight. Maybe
Oracle could take a leaf out of PG's book instead of the other way around. I
recall that you mentioned the caching of the schema before, so even though I