On 23/09/2008, at 1:42 PM, Martin Langhoff wrote:

> There's no "hard limit" for squid, and squid (any version) handles
> memory allocation failures very, very poorly (read: crashes).

> Is it relatively sane to run it with a tight rlimit and restart it
> often? Or just monitor it and restart it?

Running it under a tight rlimit is about the worst thing you can do; it will go down hard if it hits that limit.

It's not that Squid's memory use will necessarily increase over time (at least, not once cache_mem is full); rather, it's that in-transit objects and internal accounting use memory on top of cache_mem. As such, intense traffic (e.g., lots of simultaneous connections) will cause more memory use. Likewise, if you use disk caching, there's a certain amount of overhead (I believe about 10 MB of memory per GB of disk cache).
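
As a rough illustration (the path and size below are placeholders, not a recommendation):

    # Each GB of cache_dir costs roughly 10 MB of RAM for the in-memory
    # index, on top of cache_mem and per-connection buffers; a 1000 MB
    # cache like this one would add roughly 10 MB.
    cache_dir ufs /var/spool/squid 1000 16 256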

FWIW, one of my standard squid packages uses 48 MB of cache_mem, and I advise people that it shouldn't use more than 96 MB of memory (and that rarely). However, that's predicated on a small number of local users and no disk caching; if you have more users and connections are long-lived (which I'd imagine they will be in an OLPC deployment), there may be more overhead.
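
In squid.conf terms, that's just the following (48 MB is what I happen to ship, not a magic number):

    # In-memory object cache. Total process size can run to roughly
    # double this figure under load, so budget accordingly (48 MB -> ~96 MB).
    cache_mem 48 MB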

> - The XS will (in some locations) be hooked to *very* unreliable
>   power... uncontrolled shutdowns are the norm. Is this ever a problem with Squid?

> - After a bad shutdown, graceful recovery is the most important
>   aspect. If a few cached items are lost, we can cope...

Squid handles being taken down uncleanly reasonably well; at worst, swap.state (the on-disk cache index) may get corrupted, which just means Squid has to rebuild it on the next startup by re-scanning the cache directories, so that startup will be slower.


Overall, what do you want to use Squid for here: caching, access control...? If you want caching, realise that you're not going to see much benefit from such a resource-limited box, and indeed it may be more of a bottleneck than it's worth.
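
If access control is the main goal, the config can stay tiny; a minimal sketch (the ACL name and network range are placeholders for your setup):

    # Allow clients on the school LAN, deny everything else.
    acl school_lan src 192.168.1.0/24
    http_access allow school_lan
    http_access deny all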

Cheers,




--
Mark Nottingham       [EMAIL PROTECTED]

