On Sat, 2003-11-08 at 08:18, Gonzalo A. Arana wrote:
> > My instinctive reaction is to run the compression &
> > decompression in a separate thread. If the queue to the
> > compression/decompression engine is large, decrease the
> > compression level used.
>
> Good instinct :-)
>
> Squid-3 does not have an API for running jobs asynchronously, right?
Not as such. However, a little generalisation of the aufs or diskd
queueing mechanisms will likely do what you need. There are other
constraints you'll need to address, though. I suspect you'll want (from
my arch repository [EMAIL PROTECTED]):

my squid--diskio--3.0 branch (separates out all the storage layout
stuff from the actual disk I/O, which will ease generalisation of the
threaded engine);

my squid--mempools--3.0 branch (removes global variable manipulation
from mempool allocs and frees, which allows allocations from separate
pools to be thread safe, but not those from the same pool. You'll want
to have a sync layer over whatever allocator your
compressor/decompressor uses).

Rob
--
GPG key available at: <http://members.aardvark.net.au/lifeless/keys.txt>.
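
For illustration, here is a rough C++ sketch of the separate-thread /
queue-depth idea quoted above. It is not Squid code; CompressJob and
CompressQueue are made-up names, and it uses zlib's one-shot compress2()
plus standard threading primitives rather than the aufs/diskd queues
Rob refers to:

#include <condition_variable>
#include <deque>
#include <mutex>
#include <utility>
#include <vector>
#include <zlib.h>

struct CompressJob {
    std::vector<unsigned char> input;
    std::vector<unsigned char> output;   // filled in by the worker
};

class CompressQueue {
public:
    // Called from the main event loop: queue a buffer for compression.
    void submit(CompressJob job) {
        std::lock_guard<std::mutex> lock(mtx);
        jobs.push_back(std::move(job));
        cv.notify_one();
    }

    // Worker thread loop: pick a compression level from the current
    // backlog, then compress outside the main loop.
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lock(mtx);
            cv.wait(lock, [this] { return !jobs.empty(); });
            const std::size_t backlog = jobs.size();
            CompressJob job = std::move(jobs.front());
            jobs.pop_front();
            lock.unlock();

            // Deep queue -> cheap compression; short queue -> best ratio.
            const int level = backlog > 32 ? Z_BEST_SPEED
                            : backlog > 8  ? 5
                                           : Z_BEST_COMPRESSION;

            uLongf destLen = compressBound(job.input.size());
            job.output.resize(destLen);
            if (compress2(job.output.data(), &destLen,
                          job.input.data(), job.input.size(), level) == Z_OK)
                job.output.resize(destLen);
            // ... hand the finished job back to the main loop here,
            // e.g. via a completion queue, as aufs/diskd do for disk I/O.
        }
    }

private:
    std::mutex mtx;
    std::condition_variable cv;
    std::deque<CompressJob> jobs;
};

A worker started with std::thread(&CompressQueue::run, &queue) would
then drain the queue while the main loop keeps servicing requests.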
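
And a minimal sketch of the "sync layer over the allocator" point,
assuming zlib is the compressor: zlib's z_stream lets you supply your
own zalloc/zfree hooks, so a mutex around a shared (non-thread-safe)
pool is enough. PoolAllocator below is a hypothetical stand-in, not a
MemPools API:

#include <cstdlib>
#include <mutex>
#include <zlib.h>

struct PoolAllocator {
    // Stand-in for a real shared pool; malloc/free just keep the
    // sketch self-contained.
    void *alloc(std::size_t bytes) { return std::malloc(bytes); }
    void free(void *ptr) { std::free(ptr); }
};

static PoolAllocator sharedPool;
static std::mutex poolMutex;        // the "sync layer"

// zlib allocation hooks: every alloc/free issued by the compressor
// thread takes the lock, so sharing one pool across threads stays safe.
static voidpf lockedAlloc(voidpf, uInt items, uInt size) {
    std::lock_guard<std::mutex> lock(poolMutex);
    return sharedPool.alloc(static_cast<std::size_t>(items) * size);
}

static void lockedFree(voidpf, voidpf address) {
    std::lock_guard<std::mutex> lock(poolMutex);
    sharedPool.free(address);
}

bool initDeflate(z_stream &strm, int level) {
    strm.zalloc = lockedAlloc;
    strm.zfree  = lockedFree;
    strm.opaque = Z_NULL;
    return deflateInit(&strm, level) == Z_OK;
}

The same locking wrapper idea applies to whatever allocator the
compressor actually ends up using once the mempools branch lands.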
