On 07/05/11 08:54, Dave Dykstra wrote:
> Ah, but as explained here
>      http://www.squid-cache.org/mail-archive/squid-users/200903/0509.html
> this does risk using up a lot of memory because squid keeps all of the
> read-ahead data in memory.  I don't see a reason why it couldn't instead
> write it all out to the disk cache as normal and then read it back from
> there as needed.  Is there some way to do that currently?  If not,

Squid should be writing to the cache in parallel with the data arrival; the only part required in memory is the part queued for sending to the client, which gets bigger and bigger... up to the read_ahead_gap limit.

IIRC it is supposed to be taken out of the cache_mem space available, but I've not seen anything to confirm that.
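
Something like the following in squid.conf makes both limits explicit (the sizes here are only illustrations, not recommendations):

     # maximum amount Squid will read ahead of the slowest client
     read_ahead_gap 16 KB

     # memory pool for in-transit and hot objects (not the total process size)
     cache_mem 256 MB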

> perhaps I'll just submit a ticket as a feature request.  I *think* that
> under normal circumstances in my application squid won't run out of
> memory, but I'll see after running it in production for a while.
>
> - Dave

> On Wed, May 04, 2011 at 02:52:12PM -0500, Dave Dykstra wrote:
>> I found the answer: set "read_ahead_gap" to a buffer larger than the
>> largest data chunk I transfer.
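
For the archives, that workaround is a single squid.conf line along these lines (64 MB is only an example; it needs to exceed the largest object you transfer, and it raises the amount of memory Squid may hold per connection):

     read_ahead_gap 64 MB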


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1
