-------- Original Message --------
From: Enno <[EMAIL PROTECTED]>
To: Perrin Harkins <[EMAIL PROTECTED]>
Cc: Jeff <[EMAIL PROTECTED]>, modperl@perl.apache.org
Subject: Re: Database transaction across multiple web requests
Date: Fri Mar 31 2006 15:38:47
On Fri, 31 Mar 2006, Perrin Harkins wrote:
Jeff wrote:
Your application simply uses approach (b) and MySQL does the rest
automatically. So if you
SELECT * FROM mytable WHERE something='complex' LIMIT 0,30;
and then on another page / connection:
SELECT * FROM mytable WHERE something='complex' LIMIT 30,30;
and then...
SELECT * FROM mytable WHERE something='complex' LIMIT 60,30;
The main cost is the first query; provided that the data is not
updated on the server, query 2 and query 3 are served directly from the
cache.
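For reference, the two-argument form above is LIMIT offset, row_count, so page n
with a 30-row page size corresponds to LIMIT n*30,30 written out as literal
numbers; page 3, for example, would be:
SELECT * FROM mytable WHERE something='complex' LIMIT 90,30;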
Have you tried this? I was under the impression that MySQL would just
stop when it finds enough rows to satisfy the LIMIT, so it wouldn't cache the
whole result set.
If you have an ordered query, doesn't it have to place ALL the
qualifying items in order before it can decide how many to return from
whichever starting point you asked for?
- Perrin
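To illustrate the point: with an ORDER BY added to the queries above (the sort
column here is made up for illustration), the server generally has to collect
and sort every row matching the WHERE clause before it can apply the offset:
SELECT * FROM mytable WHERE something='complex'
ORDER BY some_column LIMIT 60,30;  -- some_column is a placeholder
EXPLAIN on a statement like this typically reports "Using filesort" unless an
index can satisfy both the WHERE condition and the sort order.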
MySQL's query cache only works for exact query matches, including the values you
use for LIMIT.
Enno
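In other words, assuming the query cache is enabled (query_cache_type = ON),
these two statements end up as separate cache entries, because the cache keys on
the exact statement text, LIMIT values included:
SELECT * FROM mytable WHERE something='complex' LIMIT 0,30;
SELECT * FROM mytable WHERE something='complex' LIMIT 30,30;
Only re-running a byte-for-byte identical statement counts as a cache hit.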
Oops - I checked the docs and it looks like you are right! I was fooled by
Qcache_hits -> 64798787
and as most of our access is SELECT ... LIMIT queries, I assumed these
were cleverly cached. MySQL is even faster than I thought!
I have logged a feature request with them.
Sorry for the error,
Jeff
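For anyone who wants to check this on their own server (assuming the query
cache is enabled), one rough way is to watch the counter while re-issuing a
statement verbatim:
SHOW STATUS LIKE 'Qcache_hits';
SELECT * FROM mytable WHERE something='complex' LIMIT 30,30;
SELECT * FROM mytable WHERE something='complex' LIMIT 30,30;
SHOW STATUS LIKE 'Qcache_hits';
The counter should increase for the repeated, identical SELECT, but not when
only the LIMIT values change.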