MF> I would like some system that helps me reduce some of these costs, using
MF> the approaches you list, or at least some caching somewhere. I would
MF> imagine a relational database for instance can employ caching of result
MF> sets, so that if no writes occurred, a second LIMIT query asking for a
MF> different range will return results a lot faster.
That's not a bad idea even for the catalog/index approach, since its
updates are event-based. So I can know whether anything changed in
the DB, or even whether the changed object is in the query results.
The next question is how long to cache the results :-)
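The event-based invalidation idea above can be sketched roughly like this
(all names here are illustrative, not actual ZODB/catalog API): cached
result sets stay valid until a write event touches an object they contain,
so the "how long to cache" question becomes "until a relevant change":

```python
class QueryCache:
    """Hypothetical query-result cache, invalidated by write events."""

    def __init__(self):
        self._cache = {}  # query key -> list of object ids in the result

    def get(self, key):
        # Return the cached result set, or None if absent/invalidated.
        return self._cache.get(key)

    def put(self, key, oids):
        self._cache[key] = list(oids)

    def notify_changed(self, oid):
        # Hook this up to the change/event subscriber: on every write,
        # drop only the cached entries whose result set contains the
        # changed object; unrelated queries stay cached.
        stale = [k for k, oids in self._cache.items() if oid in oids]
        for k in stale:
            del self._cache[k]
```

A second request for a different range of the same query would then hit
the cached id list instead of re-running the query, as long as no write
touched one of its objects.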
IMHO, optimizing the catalog/index/query/batching pipeline is worth a
project of its own; in the worst case, the outcome is simply that it's
not so easy to optimize.
Anyway, Oracle and Microsoft have already fought the optimization
battle for relational databases, and there are the PostgreSQL and
MySQL sources to look at. Google also somehow does this kind of
batching blazingly fast.
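The batching the quoted mail asks about can be sketched as a window over
an already-computed (and possibly cached) result sequence, LIMIT/OFFSET
style; `batch` is a made-up helper name, not a ZODB API:

```python
from itertools import islice

def batch(results, start, size):
    """Return one window of an iterable of query results.

    islice walks the iterable only up to the end of the requested
    window, so a large lazy result set is not fully materialized.
    """
    return list(islice(results, start, start + size))

# e.g. the second "page" of 5 hits from a lazy result stream:
page = batch(iter(range(100)), 5, 5)
```

If the underlying id list is cached as above, asking for a different
range is just another cheap slice over the same cached sequence.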
Quote of the day:
The question is not whether we will die, but how we will live.
- Joan Borysenko