If your tables are mostly read-only, you could
pre-generate page numbers on a periodic basis and
select only specific ranges WHERE row_number BETWEEN
page_start AND page_finish.
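A minimal sketch of that idea, using Python's sqlite3 so it runs self-contained (the thread is about MySQL, and the table/column names here are made up for illustration): a periodic batch job materializes a row_number for every row, and each page fetch is then a simple BETWEEN over the pre-computed numbers.

```python
import sqlite3

# Hypothetical read-mostly table standing in for the real one.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
cur.executemany("INSERT INTO articles (title) VALUES (?)",
                [("article %d" % i,) for i in range(1, 26)])

# Periodic batch job: pre-generate a row_number for every row in the
# desired sort order.  Re-run whenever the table changes enough.
cur.execute("""
    CREATE TABLE article_pages AS
    SELECT id,
           (SELECT COUNT(*) FROM articles a2 WHERE a2.id <= a1.id)
               AS row_number
    FROM articles a1
""")

PAGE_SIZE = 10

def fetch_page(page):
    # Page fetch: a cheap range scan over the pre-computed numbers.
    page_start = (page - 1) * PAGE_SIZE + 1
    page_finish = page * PAGE_SIZE
    cur.execute("""
        SELECT a.id, a.title
        FROM article_pages p JOIN articles a ON a.id = p.id
        WHERE p.row_number BETWEEN ? AND ?
        ORDER BY p.row_number
    """, (page_start, page_finish))
    return cur.fetchall()
```

With an index on row_number the BETWEEN turns into a range scan, so page cost no longer grows with page depth the way OFFSET does.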

Or you could just send the top 1000 IDs of the table
to the client, have the client work out which IDs
belong to which page, and then send those IDs in a
second query to retrieve the full data for one page
at a time. You can also take advantage of the MySQL
query cache to make successive page requests snappy.
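The two-query approach above can be sketched like this (again using sqlite3 as a stand-in for MySQL, with made-up table names): one cheap query ships only the IDs, the client slices them into pages, and a second query fetches full rows for just one page via an IN list.

```python
import sqlite3

# Hypothetical table with wide rows we don't want to ship all at once.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, body TEXT)")
cur.executemany("INSERT INTO articles (body) VALUES (?)",
                [("body %d" % i,) for i in range(1, 101)])

# Query 1: IDs only -- a small result set, fast to transfer.
cur.execute("SELECT id FROM articles ORDER BY id LIMIT 1000")
all_ids = [row[0] for row in cur.fetchall()]

PAGE_SIZE = 25

def page_ids(page):
    """Client side: decide which IDs belong to the requested page."""
    start = (page - 1) * PAGE_SIZE
    return all_ids[start:start + PAGE_SIZE]

def fetch_page(page):
    """Query 2: full rows for a single page's IDs."""
    ids = page_ids(page)
    if not ids:
        return []
    placeholders = ",".join("?" * len(ids))
    cur.execute("SELECT id, body FROM articles WHERE id IN (%s) "
                "ORDER BY id" % placeholders, ids)
    return cur.fetchall()
```

Because each page request re-issues the same parameterized SELECT, identical requests are good candidates for the server-side cache.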

Homam



--- Carlos Savoretti <[EMAIL PROTECTED]> wrote:
> Hi all:
> 
> I am programming a GUI which retrieves big tables
> often.
> 
> So I retrieve chunks of 1000 rows and paginate them
> to browse the entire table. It works fine, but it's
> rather slow.
> 
> I would like to know if I could set some option
> through mysql_options() to optimize the client side
> (mysql-client-3.23.58), and what the recommended
> `page' size is to clamp to for a GUI app. (For 1000
> rows it takes about 12 seconds.)
> 
> Thanks a lot...
> 
> -- 
> Carlos Savoretti <[EMAIL PROTECTED]>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
