> That's not really large data -- you're talking about dealing with
> 10-300k per request (it should never go beyond that, because you'd
> be chunking stuff off the db for ease of download to the end user).
> 
> I've been under the impression (and I'd imagine that others on this
> list are as well) that you're talking about loading 10-100MB data
> structures for some sort of parsing or analysis -- which a lot of
> people here do. But you're talking about comparatively tiny amounts
> of data.


Agreed - and given the price of memory, it's a whole lot cheaper to use
some extra memory than to build complicated micro-optimizations that
send the data out byte by byte.
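
As a quick illustration, here's a minimal mod_perl 2 handler sketch
(the package name and the data it sends are made up) that builds the
whole response in memory and sends it with a single print, instead of
dribbling it out in tiny writes:

    package My::Handler;    # hypothetical name, for illustration

    use strict;
    use warnings;
    use Apache2::RequestRec ();    # for $r->content_type
    use Apache2::RequestIO ();     # for $r->print
    use Apache2::Const -compile => qw(OK);

    sub handler {
        my $r = shift;

        # Assemble the entire body in a scalar first; for the 10-300k
        # responses discussed above, the extra memory is negligible.
        my $body = join '', map { "row $_\n" } 1 .. 10_000;

        $r->content_type('text/plain');
        $r->print($body);    # one buffered write, not thousands

        return Apache2::Const::OK;
    }

    1;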

Much easier to just program naturally, with considerations like the
following (rough sketches of both follow below):
 - use Apache2::SizeLimit to kill off a process if it gets too big
   (though it doesn't work under Windows)
 - force your child process to exit after serving the request if you
   have to do something big (e.g. process a large image, generate a
   PDF)
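
For the first point, a minimal setup sketch, assuming a reasonably
recent Apache2::SizeLimit (the 150 MB threshold is only a
placeholder - tune it to your own boxes):

    # In startup.pl (loaded once at server start):
    use Apache2::SizeLimit;

    # Have the child exit after the current request once its total
    # process size passes ~150 MB (the value is in KB).
    Apache2::SizeLimit->set_max_process_size(150_000);

Then hook the check in from httpd.conf with:

    PerlCleanupHandler Apache2::SizeLimit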

Obviously, for the second case, I'm assuming that you would only do
these things on a small percentage of your total requests; otherwise,
killing off your child processes would become a major bottleneck.
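
For the second point, here's a sketch of the forced-exit approach
using mod_perl 2's child_terminate (the package name and
generate_big_pdf are hypothetical stand-ins for your own code):

    package My::PDFHandler;    # hypothetical name

    use strict;
    use warnings;
    use Apache2::RequestRec ();
    use Apache2::RequestIO ();
    use Apache2::RequestUtil ();    # provides $r->child_terminate
    use Apache2::Const -compile => qw(OK);

    sub handler {
        my $r = shift;

        # Hypothetical helper that balloons the process, e.g. PDF
        # generation pulling big buffers into memory.
        my $pdf = generate_big_pdf($r);

        $r->content_type('application/pdf');
        $r->print($pdf);

        # Ask Apache to end this child once the request completes,
        # handing the memory back to the OS instead of hanging on
        # to it (Unix only; no effect on Win32 or threaded MPMs).
        $r->child_terminate;

        return Apache2::Const::OK;
    }

    1;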

Clint