> Actually, I think you missed part of the point. Regardless of whether
> the article is data in a database or a text file sitting on a disk,
> the amount of disk I/O to get those blocks into memory is the same
> order of magnitude (db page size and fragmentation vs os block size
> and fragmentation probably plays a role, but still should be same
> order of magnitude). 

I agree. But then you have the overhead of processing through CF, plus the
additional I/O it has to do if you go the db-query/application-cache
route, etc.

> Plus your query is probably only going to the app
> server (ColdFusion's query cache) or to the database's memory (cached
> database pages) and not the disk directly if it's a frequently used
> item -- frequent use is exactly why relying on the inherent caching in
> the lower-level components is useful!

With static publishing you don't have to worry about the files being coupled
to the db if the db were to die, and you don't have to worry about the
extra overhead that CF incurs. Actually, I believe CF uses static
publishing itself. Why send a file through the engine, have it hit the
db, and then cache the result on your server? That's silly IMO; render it once
and update as needed, instead of effectively re-rendering every time it's
accessed, whether you are using application caching or not.

With static publishing you're cutting out a couple of needless steps, and you
get the same performance if not better, with less to worry about.
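The publish-once pattern above can be sketched in a few lines. This is a
minimal illustration in Python (the function and directory names are
hypothetical; in a real CF setup the rendering step would be your template
and the output directory your web root):

```python
import pathlib

# Hypothetical output directory served directly by the web server.
STATIC_ROOT = pathlib.Path("htdocs/articles")

def render_article(title: str, body: str) -> str:
    """Stand-in for the templating work the app server would do."""
    return f"<html><head><title>{title}</title></head><body>{body}</body></html>"

def publish(article_id: int, title: str, body: str) -> pathlib.Path:
    """Render once, at publish/update time, and write a static file.

    After this, requests go straight to the file; no engine, no db,
    no per-request cache check."""
    STATIC_ROOT.mkdir(parents=True, exist_ok=True)
    out = STATIC_ROOT / f"{article_id}.html"
    out.write_text(render_article(title, body), encoding="utf-8")
    return out
```

The work happens once per update instead of once per access, which is the
whole point of the argument above.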

> The database option requires a little more *processor* overhead (for
> the database and the app server) but the slow step is the disk i/o.
> And that difference may be moot if you're using some sort of
> compression on the web server (e.g. mod_deflate, mod_gzip, or the IIS
> equivalent filters), which also requires overhead. If either your web
> server cache or your database cache (or you app server cache for that
> matter) can hold the entire collection, the point is pretty moot about
> where it lives -- though I'd personally go for the disk files.

I agree.
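For reference, the compression the quoted text mentions is only a couple of
directives in Apache. A hypothetical httpd.conf fragment (the MIME types
compressed here are illustrative):

```apacheconf
# httpd.conf fragment (Apache 2.x with mod_deflate) -- illustrative
LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/plain text/css
```

The CPU cost of compressing each response is the overhead trade-off being
discussed above.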

> Legal documents -- maybe 4MB each? 2.5 GB of memory puts a big dent in
> that and isn't very expensive, relatively speaking. So to be safe, a
> server with 4 GB of RAM could run the collection (or at least the
> hottest part of the collection) in main memory -- regardless of
> whether it's web server, app server, or database server.

Again I agree, but I disagree that it _should_ be done this way. It can
be done, but there is more to worry about if something goes wrong. If, for
example, the machine holding all of this data in memory dies, you have
to go through the overhead of rebuilding the data, which sucks on many
levels.

> My real vote would probably be RAM disk with Apache to serve it up...
> or MySQL 4.1 with the query cache set to the size of the database :)
> Plus I'd make sure to use the meta headers on the resulting pages to
> prevent duplicate downloads. Of course there is a hit if you have to
> completely rebuild the collection (catastrophic disk failure on web
> server) but that's probably on the order of a database restore.

I agree, but depending on your setup that can be a real PITA.
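For what it's worth, the "query cache set to the size of the database" idea
from the quote is just a couple of server variables in MySQL 4.x. A
hypothetical my.cnf fragment (the size shown is illustrative, not a
recommendation):

```ini
# my.cnf fragment (MySQL 4.x) -- sizes are illustrative
[mysqld]
query_cache_type = 1        # cache all cacheable SELECT result sets
query_cache_size = 256M     # sized to hold the hot working set
```

Note the query cache stores result sets, not pages, and entries are
invalidated whenever an underlying table changes, so it favors read-mostly
data like this article collection.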

> Of course I like playing with 10-100GB MySQL data warehouses, so I'm
> biased :)

Have fun. ;-)

Say, a 10MB index plus static files seems to me to be better than 100GB of
data. ;-)

-- 
John Paul Ashenfelter
CTO/Transitionpoint
(blog) http://www.ashenfelter.com
(email) [EMAIL PROTECTED]


