Sylvain Hellegouarch wrote:
> I assume you have a global cache right? Otherwise I do wonder how this
> works when several clients update the database.

Yes in our case we have a global cache. It is possible to make it work
with local caches too, with some cache invalidation mechanism so that a
client can signal all others when an object is updated.
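To make the idea concrete, here is a minimal in-process sketch of that invalidation mechanism. The `Bus` and `Client` names are hypothetical (a real deployment would use a message queue or memcached-style shared store rather than an in-process bus); it only illustrates the signaling pattern:

```python
# Hedged sketch: local caches with cross-client invalidation.
# All names here are illustrative, not from any real library.

class Bus:
    """Toy stand-in for a real messaging channel between clients."""
    def __init__(self):
        self.clients = []

    def publish(self, obj_id, sender):
        # Tell every other client to drop its stale copy.
        for client in self.clients:
            if client is not sender:
                client.invalidate(obj_id)

class Client:
    def __init__(self, bus):
        self.cache = {}   # local object cache: id -> object
        self.bus = bus
        bus.clients.append(self)

    def invalidate(self, obj_id):
        self.cache.pop(obj_id, None)

    def update(self, obj_id, value):
        self.cache[obj_id] = value            # write locally
        self.bus.publish(obj_id, sender=self) # signal the others

bus = Bus()
a, b = Client(bus), Client(bus)
a.cache[1] = "old"
b.cache[1] = "old"
a.update(1, "new")
# b's stale copy has been evicted; it will re-fetch on next access.
```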


> Now I don't quite understand the benefit of your technique.

I was not convinced at first either, but I saw the results and it does
work. I guess one way to understand why it works is to take an example.
I have a reasonably large table in PostgreSQL here, and let's say I
want to build a "Top 100" page. Omitting the "ORDER BY rating LIMIT
100" for readability, here are some timing results from tests I just
ran:

Scenario 1: a simple "select *":
SELECT * --> 312 ms
==> 312 ms spent in DB access for each page view.

Scenario 2: select just the needed columns for the page:
SELECT id, category, title, year, rating, votes --> 156 ms
==> 156 ms spent in DB access for each page view. Also note that this
requires building a custom query (which I want to avoid; that is the
reason I am using an ORM layer in the first place).

Scenario 3: the two-phase retrieval:
Phase 1: SELECT id --> 47 ms
Phase 2: SELECT * --> 312 ms
On the 1st page view only: phase 1 + phase 2 = 359 ms spent in DB
access.
On all subsequent page views: only phase 1 + 0 (cache hit on phase 2)
= 47 ms in DB access.
==> 47 ms spent in DB access for each page view.


As you can see, in my case, scenario 3 is over six times faster than
scenario 1. Of course, YMMV.
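For anyone who wants to play with the pattern, here is a self-contained sketch of the two-phase retrieval using sqlite3 and a plain dict as the global object cache. The table and column names are illustrative (loosely based on the example above), not the actual schema:

```python
import sqlite3

# Hedged sketch of two-phase retrieval: phase 1 fetches only ids (cheap),
# phase 2 fetches full rows, served from a global cache when possible.
# Table/columns are hypothetical stand-ins for the real schema.

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE movie (id INTEGER PRIMARY KEY, title TEXT,"
    " rating REAL, votes INTEGER)")
conn.executemany(
    "INSERT INTO movie VALUES (?, ?, ?, ?)",
    [(i, "Movie %d" % i, i % 10, i * 3) for i in range(1, 501)])

cache = {}  # global object cache: id -> full row

def top_100():
    # Phase 1: narrow query, ids only -- always hits the DB, but cheaply.
    ids = [row[0] for row in conn.execute(
        "SELECT id FROM movie ORDER BY rating DESC, id LIMIT 100")]
    # Phase 2: full rows -- only cache misses touch the DB.
    misses = [i for i in ids if i not in cache]
    if misses:
        qmarks = ",".join("?" * len(misses))
        query = "SELECT * FROM movie WHERE id IN (%s)" % qmarks
        for row in conn.execute(query, misses):
            cache[row[0]] = row
    return [cache[i] for i in ids]

first = top_100()   # first page view: phase 1 + phase 2 hit the DB
second = top_100()  # subsequent views: phase 2 is all cache hits
```

The ORM equivalent is the same shape: a custom id-only query for phase 1, then the ORM's normal by-id object loading (which goes through its identity map / cache) for phase 2.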


Cheers,

-- 
Yves-Eric


