Not necessarily a bad idea even if the content changes. Think about it:
if you pull the entire recordset and page through it, and the content
changes while you're browsing, you're still looking at old content. We
do this with large recordsets (as many as 100,000 rows): we cache the
query for 15 minutes, plenty of time to browse the recordset. If a user
makes a modification, we release the query cache, and the next person to
hit the Next-N scenario creates a new query cache based on the new data.
The speed gain for the end user was tremendous compared with going back
to the database each time to pull the next however many records.
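The pattern described above (cache the full recordset for 15 minutes,
flush it whenever anyone writes) can be sketched roughly like this; the
class, the `run_query` callback, and the `page` helper are hypothetical
stand-ins, not anything from the original application:

```python
import time

class QueryCache:
    """Cache a full recordset for a TTL; writers flush it explicitly."""

    def __init__(self, run_query, ttl_seconds=15 * 60):
        self.run_query = run_query      # callable that hits the database
        self.ttl = ttl_seconds
        self._rows = None
        self._fetched_at = 0.0

    def get_rows(self):
        # Rebuild the cache if it is empty or older than the TTL.
        if self._rows is None or time.time() - self._fetched_at > self.ttl:
            self._rows = self.run_query()
            self._fetched_at = time.time()
        return self._rows

    def invalidate(self):
        # Called after any modification, so the next reader re-queries.
        self._rows = None

def page(rows, page_number, per_page=10):
    """Return one Next-N page out of the cached recordset."""
    start = (page_number - 1) * per_page
    return rows[start:start + per_page]
```

In CFML terms this is close to what `cachedwithin` on a query gives you,
plus an explicit flush on write.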

We tried your scenario number 2, and things were considerably slower for
us than just running the whole query outright and caching it. Granted,
you need a lot more RAM, but RAM is cheap. I'd rather buy more RAM and
have the application run quicker for the end user.
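For comparison, the top-n approach Ben describes in the quoted message
below can be sketched like this; `fetch_top_n` is a hypothetical
stand-in for a `SELECT TOP n ... ORDER BY ...` query, not real code from
either application:

```python
def fetch_top_n(all_rows, n):
    # Stand-in for "SELECT TOP n ... ORDER BY ...": the database returns
    # only the first n rows instead of the full recordset.
    return all_rows[:n]

def page_via_top_n(all_rows, page_number, per_page=10):
    """Fetch rows up to the end of the requested page, show the last 10."""
    n = page_number * per_page          # page 1 -> n=10, page 2 -> n=20
    rows = fetch_top_n(all_rows, n)
    return rows[(page_number - 1) * per_page:]
```

The win there comes from most users never going past the first page or
two, so the database rarely materializes the full result.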

Joshua Miller
Head Programmer / IT Manager
Garrison Enterprises Inc.
www.garrisonenterprises.net
[EMAIL PROTECTED]
(704) 569-0801 ext. 254
 
*************************************************************************
Any views expressed in this message are those of the individual sender,
except where the sender states them to be the views of 
Garrison Enterprises Inc.
 
This e-mail is intended only for the individual or entity to which it is
addressed and contains information that is private and confidential. If
you are not the intended recipient you are hereby notified that any
dissemination, distribution or copying is strictly prohibited. If you 
have received this e-mail in error please delete it immediately and
advise us by return e-mail to [EMAIL PROTECTED]
*************************************************************************


-----Original Message-----
From: Geoffrey V. Brown [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, September 10, 2003 9:47 AM
To: CF-Talk
Subject: RE: Paging Principles


Hi,
Good idea.
A cached query wouldn't work for us, unfortunately, as the data changes
almost by the minute.

Geoff B

> -----Original Message-----
> From: Ben Doom [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, September 10, 2003 9:43 AM
> To: CF-Talk
> Subject: RE: Paging Principles
>
>
> When I had something like this, I used the full query but only
> retrieved the top n, where n is the highest index to display. So, if
> you are displaying 10 per page...
>
> page 1, n=10
> page 2, n=20
>
> and so on. Maybe not a perfect solution, but it dropped average
> processing time a lot, since people would generally either narrow the
> search or only look at the first page or two.
>
> If you think people might be flipping through a large number of
> pages, look at query caching.
>
>
> --  Ben Doom
>     Programmer & General Lackey
>     Moonbow Software, Inc
>
> : -----Original Message-----
> : From: Geoffrey V. Brown [mailto:[EMAIL PROTECTED]
> : Sent: Wednesday, September 10, 2003 9:18 AM
> : To: CF-Talk
> : Subject: Paging Principles
> :
> :
> : Hi,
> : I have an app that queries a table and displays a subset of the
> : query results in a paged format, showing only 10 results per page.
> : Some queries will return upwards of 1,000 rows.
> :
> : This query joins quite a few tables - say 8 or so; however, the
> : query itself only requires one table to obtain the basis of the
> : dataset (the rest of the tables in the join are used for display
> : info or 'cosmetic' type information).
> :
> : Generally speaking, which of the following would be more efficient:
> :
> : Option 1:
> : - Query database, joining 8 tables, get entire dataset (1000 rows)
> : - CFOUTPUT / loop limited to start row (page ordinal) and maxrows (10)
> :
> : Option 2:
> : - Query database only based on the results of one table (1000 rows)
> : - CFOUTPUT / loop limited to start row (page ordinal) and maxrows (10)
> : - Within output loop, perform second query joining the 8 tables,
> : based on a single primary key returned from the initial dataset
> : (returns a single row)
> :
> : In essence, the question revolves around whether it is more
> : expensive to pull an entire 1,000-record joined dataset in one shot,
> : or to make the initial query lean and then do ten consecutive
> : queries performing the join. The former likely incurs more database
> : load, while the latter would incur more interaction with ODBC.
> :
> : Whew.  Interested in your answers!
> :
> : Thanks,
> : Geoffrey Brown
> : ________________________
> : Online Operations Manager
> : Deerfield.com
> : 989.732.8856 ext. 227 (Gaylord)
> : 231.935.4640 ext. 123 (Traverse City)
> : [EMAIL PROTECTED]
> :
> : http://www.deerfield.com
> : Personal Service with a :-)
> :
> : VisNetic (viz-net-ik) 2002: vision of the Internet
> : http://www.visnetic.com
:
:
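The two options in Geoffrey's original question can be compared with a
rough sketch; the `query_*` callables are hypothetical stand-ins for the
real database calls, not anything from his application:

```python
def option_1(query_joined, page_number, per_page=10):
    # One big query joining all 8 tables, then slice out the page.
    rows = query_joined()                  # ~1,000 fully joined rows
    start = (page_number - 1) * per_page
    return rows[start:start + per_page]

def option_2(query_keys, query_detail, page_number, per_page=10):
    # Lean query on the base table, then one joined lookup per displayed
    # row: ten extra round trips per page (the classic N+1 pattern).
    keys = query_keys()                    # ~1,000 primary keys
    start = (page_number - 1) * per_page
    return [query_detail(pk) for pk in keys[start:start + per_page]]
```

Option 2 trades one heavy query for many light ones; which wins depends
on join cost versus per-query round-trip overhead, which is what the
replies above are debating.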

