Chris Woodfield wrote:
One performance-specific 2.7 question: I recall hearing mention of an issue in 2.6 where larger objects held in mem_cache required exponentially more CPU cycles to serve (e.g. n cycles for a 4KB object, 2n for an 8KB object, 4n for a 12KB object, etc.). Does anyone know if this issue is still present in the 2.7 code?


Henrik knows more, but I believe it's the same in all Squid-2 code. IIRC it was an architecture change in 3.0 that fixed it.
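
If the extra cost really is tied to the size of the objects held in the memory cache (that part is my assumption), one workaround sketch is to keep maximum_object_size_in_memory low so that larger objects are served from the disk cache instead. For example, in squid.conf:

  # sketch only; values are examples, tune to your own box
  # keep only small objects in the memory cache
  maximum_object_size_in_memory 8 KB
  cache_mem 256 MB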

Amos

Thanks,

-C

On Apr 1, 2009, at 12:42 AM, Amos Jeffries wrote:


Hello people, I'm having some bottlenecks on our Squid deployment and I was wondering if anyone had any recommendations, because I'm nearly out of ideas. Would anyone change my architecture, or does anyone have experience with a Squid deployment of this size? Basically we are pushing 600mb at 240,000k connections. When we reach speeds around that number, we start seeing slow performance and getting a lot of page timeouts. We are running 20 Squid 2.6 boxes doing DSR behind a single Foundry GTE load balancer. I recently had 18 Squid boxes and thought we had a Squid bottleneck, but adding two more made no change. I was leaning in the direction of splitting it in half and using multiple load balancers. Does anyone have any experience pushing this much traffic?

I'd go to 2.7 if it were Squid being slow. It has a number of small performance boosters missing in 2.6.

But it does sound like a load balancer bottleneck if you saw zero change from adding two more Squid boxes. The Yahoo and Wikimedia guys are the poster installs for this type of Squid deployment, and they use CARP meshing.
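
For reference, here is a minimal squid.conf sketch of what a CARP frontend looks like (the backend hostnames and ports are made up; the layout would need adapting to your own setup):

  # frontend Squid hashing requests across an array of CARP parents
  cache_peer backend1.example.com parent 3128 0 carp
  cache_peer backend2.example.com parent 3128 0 carp
  cache_peer backend3.example.com parent 3128 0 carp
  # force requests through the parents rather than going direct
  never_direct allow all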

PS: We are very interested in getting some real-world benchmarks from
high-load systems like yours. Can you grab the data needed for
http://wiki.squid-cache.org/KnowledgeBase/Benchmarks ?

Amos

--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6
