2008/3/11 Neil Harkins [EMAIL PROTECTED]:
F5 has some documents on how to implement consistent hashes in BIG-IP
iRules (TCL), but I wound up writing a custom one for use in front of
our squids that only does one checksum per request, as opposed to one
per squid in the pool, to avoid wasting CPU cycles on the LB. It uses
a precomputed table
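The one-checksum-per-request approach Neil describes matches a hash-ring lookup: building the table hashes every pool member up front, but serving a request costs a single checksum of the URL plus a binary search. A minimal sketch in Python; the server names, virtual-node count, and MD5 checksum are illustrative assumptions (the thread doesn't show the iRule's actual hash function):

```python
import bisect
import hashlib

def _h(key: str) -> int:
    # one checksum call; MD5 is an assumption, the original iRule may use CRC
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

class HashRing:
    """Consistent hash with a precomputed table: hashing every squid
    (times vnodes) happens once at build time, so each request needs
    only one checksum plus a binary search, not one hash per pool member."""

    def __init__(self, servers, vnodes=100):
        self._table = sorted(
            (_h(f"{s}#{i}"), s) for s in servers for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._table]

    def server_for(self, url: str) -> str:
        h = _h(url)  # the only per-request checksum
        idx = bisect.bisect(self._keys, h) % len(self._keys)
        return self._table[idx][1]
```

Usage: `HashRing(["squid1", "squid2", "squid3"]).server_for(url)` returns the same squid for the same URL every time, and adding a fourth squid moves only a minority of URLs.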
I have the same problem, though I don't remove my squid servers very often.
I've partially resolved it thanks to the hashing algorithm my load
balancer implements: it calculates the hash taking into account all the
squids in the pool, whether they're up or down. Then
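The message is cut off after "Then", so the failover step below is an assumed completion: hash over the full configured pool so the mapping stays stable regardless of which members are up, and walk forward to the next live squid only when the chosen one is down. A minimal sketch (CRC32 and the `is_up` callback are assumptions):

```python
import zlib

def pick_squid(url, pool, is_up):
    """Hash over the whole configured pool, up or down, so URLs keep
    their home squid across outages; fall back to the next live member
    only for URLs whose home squid is actually down (assumed behavior)."""
    start = zlib.crc32(url.encode()) % len(pool)
    for offset in range(len(pool)):
        squid = pool[(start + offset) % len(pool)]
        if is_up(squid):
            return squid
    raise RuntimeError("no squid available")
```

The point of hashing over the full pool is that when a down squid comes back, only the URLs that temporarily failed over move again; everything else never changed home.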
I dealt with the same problem using a load balancer in front of the
cache farm, using a URL-HASH algorithm to send the same url to the
same cache every time. It works great, and also increases the hit
ratio a lot.
Regards, Pablo
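Pablo doesn't say which hash his balancer uses; in its simplest form URL-HASH is just a checksum of the URL modulo the pool size, which is enough to show why the hit ratio improves: each URL lives on exactly one squid, so the farm stops storing duplicates. A minimal illustration (CRC32 and the cache names are assumptions):

```python
import zlib

def url_hash(url, caches):
    # plain URL-HASH: the same URL always lands on the same cache, so
    # each object is stored on exactly one squid instead of on all of them
    return caches[zlib.crc32(url.encode()) % len(caches)]
```

The downside, raised next in the thread, is what happens to this mapping when `len(caches)` changes.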
On Fri, Mar 07, 2008, Siu Kin LAM wrote:
Actually, it is my case. The URL hash is helpful to reduce the
duplicated objects. However, once a squid server is added or removed,
the load balancer needs to re-calculate the hash of each URL, which
causes a lot of TCP_MISS in the squid servers at the initial stage.
Do
2008/3/6 Siu Kin LAM [EMAIL PROTECTED]:
Dear all
At this moment, I have several squid servers for HTTP caching. Many
duplicated objects have been found on different servers. I would like
to minimize data storage by installing a large centralized storage
system and having the squid servers mount it as a data disk.
Has anyone tried this