At 07:19 PM 3/9/2005, you wrote:
Unfortunately this doesn't work well if you want to
tell your users how many pages were found in the
query.

Correct, but the speed difference will more than make up for it. If you have a busy website, it is very important not to overtax the database server with frills like the number of rows found. If you want the record counts, you have to realize it may slow things down by a factor of 5 or 10 compared to using LIMIT alone. If you think it is worth it, then by all means include it. Or you could put a button on the form so that if a user really wants an accurate count, he can press it.


 Sure, you could use SQL_CALC_FOUND_ROWS with
FOUND_ROWS(), but this will defeat the purpose of the
LIMIT clause -- speed -- because MySQL will have to
figure out all the results of the query.

If you want a row count along with a LIMIT clause, you would have to execute a second query that just counts the rows. This should still be much faster, because the server is not returning the rows to the client; it is only counting them.
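As a rough sketch (the table and column names below are just placeholders, not from the original post), the paging query and the optional count query could look like this:

    -- Fetch only the rows for one page: offset 10, 10 rows per page.
    -- An ORDER BY keeps the page boundaries stable between requests.
    SELECT col1, col2
    FROM table1
    ORDER BY col1
    LIMIT 10, 10;

    -- Separate count query, run only when an exact total is needed.
    -- It returns a single number instead of shipping rows to the client.
    SELECT COUNT(*)
    FROM table1;

If the paging query has a WHERE clause, the count query needs the same WHERE clause so the total matches what the user can actually page through.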


You also have to consider the hacker or disgruntled employee who will deliberately try to crash the server by repeating large queries. A LIMIT clause prevents that from happening.

Worse, MySQL
will not be able to cache the query because you're
changing the query string with each paging request.

If you repeat the query "select col1, col2 from table1 limit 10,10", it will be found in the query cache. As long as someone has executed the exact same SQL statement before, it will be served from the cache (unless it was executed so long ago that it has been pushed out of the cache).
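As a rough way to check this (assuming a server new enough to have the query cache, 4.0.1 or later, with query_cache_type and query_cache_size set, and using the same placeholder table as above), you can re-run the identical statement and watch the Qcache_hits counter:

    -- Run the exact same paging statement twice, byte for byte.
    SELECT col1, col2 FROM table1 LIMIT 10, 10;
    SELECT col1, col2 FROM table1 LIMIT 10, 10;

    -- Qcache_hits should increase if the second run came from the cache.
    SHOW STATUS LIKE 'Qcache_hits';

Keep in mind each distinct offset (limit 0,10, limit 10,10, and so on) is cached as its own entry, so the cache only helps when someone asks for the same page again.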


Mike




--- mos <[EMAIL PROTECTED]> wrote:
> Carlos,
>          Apart from using the proper indexes, you
> should only retrieve the
> number of rows that are going to be displayed on the
> page at one time. If
> you are displaying 10 rows on the page, why retrieve
> 1000 rows? The user
> would have to page through them 100 times, and you have
> to ask yourself: is that likely to happen for every user?
> If not, you are wasting CPU cycles.
>
>          It is better to use "Select ... LIMIT 10,10",
> where the first number is the offset and the second is
> the row count. Keep increasing the offset (starting at 0)
> each time the user presses "Next page". MySQL will then
> retrieve only 10 rows at a time, which is roughly 100x
> less work than retrieving all 1000 rows at once.
>
> Mike
>
> At 04:49 PM 3/9/2005, Carlos Savoretti wrote:
> >Hi all:
> >
> >I'm programming a GUI which often retrieves big tables.
> >
> >So, I retrieve chunks of 1000 rows and paginate them
> >to browse the entire table. It works fine, but it's
> >rather slow.
> >
> >I would like to know if I could set some option through
> >mysql_option() to optimize the client side
> >(mysql-client-3.23.58), and what is the recommended
> >value to clamp the `page' size for a GUI app. (For 1000
> >rows it takes about 12 seconds.)
> >
> >Thanks a lot...
> >
> >--
> >Carlos Savoretti <[EMAIL PROTECTED]>
> >
> >

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]


