-encoding to "gzip, deflate", while some set it to "gzip,deflate" (no space), and this results in two different cached objects for the same URL.
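When a cache varies on this header, the two spellings hash to different cache keys even though the response bodies are identical. A minimal sketch of canonicalizing the header value before using it as part of a cache key (the normalize_accept_encoding helper is hypothetical, not a Squid feature):

```python
def normalize_accept_encoding(value: str) -> str:
    """Canonicalize an Accept-Encoding header: split on commas,
    strip whitespace, drop empty tokens, and rejoin consistently."""
    tokens = [t.strip() for t in value.split(",") if t.strip()]
    return ",".join(tokens)

# Both client spellings now map to the same cache-key component.
print(normalize_accept_encoding("gzip, deflate"))   # gzip,deflate
print(normalize_accept_encoding("gzip,deflate"))    # gzip,deflate
```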
--
Robert Borkowski
with four servers at 250Mbps, or
one server at 1Gbps. Smaller servers
are typically much cheaper than Big Iron, and you gain the ability to add extra
power in smaller, less painful chunks.
--
Robert Borkowski
for finding bandwidth hogs.
If you're handy with scripting languages, you could use tools such as netcat or tcpdump to monitor traffic passing through the gateway and produce whatever reports are required.
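As a sketch, a few lines of Python can also total bytes per client straight from Squid's access.log, assuming the default native log layout (client address in the third field, byte count in the fifth); the sample log lines below are made up:

```python
from collections import Counter

def top_talkers(log_lines, n=3):
    """Sum bytes served per client IP from Squid native access.log lines."""
    totals = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) < 5:
            continue
        client, size = fields[2], fields[4]  # assumes native log field order
        if size.isdigit():
            totals[client] += int(size)
    return totals.most_common(n)

sample = [
    "1130916425.123 250 10.0.0.5 TCP_MISS/200 524288 GET http://example.com/big - DIRECT/origin -",
    "1130916426.456 12 10.0.0.9 TCP_HIT/200 1024 GET http://example.com/small - NONE/- -",
    "1130916427.789 300 10.0.0.5 TCP_MISS/200 1048576 GET http://example.com/huge - DIRECT/origin -",
]
print(top_talkers(sample))
```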
--
Robert Borkowski
mean squid restarts every 66 minutes, or 6 minutes past every hour (1:06, 2:06, 3:06)?
If it's the second one, then try turning off the cron daemon about 15 minutes before the new hour:
/etc/init.d/cron stop
--
Robert Borkowski
crontabs installed?
Take a look in /var/spool/cron/crontabs
--
Robert Borkowski
replug themselves into the unrestricted network.
The first option is best, but for some reason you're letting users change their IP addresses, so there are some restrictions there we don't know about ;-)
--
Robert Borkowski
How much memory is in your server?
What does the command 'dmesg' return?
What does 'ps aux|grep squid' return just before a crash?
--
Robert Borkowski
of memory', or 'OOM killer', or 'zero order
allocation' errors in the dmesg output.
If they're not there then the second (ulimit) possibility is most likely.
--
Robert Borkowski
Robert Borkowski wrote:
Gix, Lilian (CI/OSR) * wrote:
The server has 1G of RAM (only 100M for squid)
2005/11/02 10:07:05| Max Mem size: 102400 KB
^^
I asked about memory because of this line...
Two possibilities:
1) The kernel is killing off squid
requested URLs before logging. This protects your users' privacy.
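If the goal is to drop query strings from requested URLs before they are logged, Squid exposes this directly in squid.conf via the strip_query_terms directive (shown as a sketch; check your version's documentation for the default):

```
# squid.conf: log URLs without their query strings
strip_query_terms on
```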
--
Robert Borkowski
client. This is due to squid having a limit on the read-ahead difference between the client and origin server.
The read-ahead amount is tunable in squid3.
I found this out the hard way while fighting a very frustrating and
intermittent failure at our site :-)
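The relevant knob in squid3 is the read_ahead_gap directive in squid.conf, which bounds how far ahead of the client Squid will read from the origin server (the value below is illustrative, not a recommendation):

```
# squid.conf: allow Squid to buffer up to 64 KB ahead of the client
read_ahead_gap 64 KB
```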
--
Robert Borkowski
content on your servers (like p___).
I've encountered something similar in the past and found out some users
were giving p___ sites access to webspace on my servers in exchange for
p___ accounts. 10 users (out of 200,000) accounted for 75% of my bandwidth.
--
Robert Borkowski
are caused by the huge number of users we have. Is our situation normal to you, considering there is no abuse?
How many squid servers are there, and how are they load-balanced?
Can you post your squid config and the output of squidclient mgr:info?
--
Robert Borkowski
between peers based on URL. Basically a hash-based load-balancing algorithm.
If you have a load balancer with packet inspection capabilities you can
also direct traffic that way. On F5 BigIPs the facility is called
iRules. I'm pretty sure NetScaler can do that too.
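As a sketch of the idea (not Squid's actual peer-selection implementation), hashing the URL to pick a peer keeps each URL pinned to one cache, so every object is stored and hit in only one place; the peer hostnames are hypothetical:

```python
import hashlib

PEERS = ["cache1.example.net", "cache2.example.net", "cache3.example.net"]

def pick_peer(url: str, peers=PEERS) -> str:
    """Deterministically map a URL to one peer via an MD5 hash,
    so repeated requests for the same URL hit the same cache."""
    digest = hashlib.md5(url.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(peers)
    return peers[index]

print(pick_peer("http://example.com/big/object"))
```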
--
Robert Borkowski
a serious issue to fix there.
--
Robert Borkowski
for that same object?
--
Robert Borkowski
Henrik Nordstrom wrote:
On Mon, 10 Jan 2005, Robert Borkowski wrote:
A wget in a loop retrieving the main page of our site will
occasionally take just under 15 minutes to complete the retrieval.
Normally it takes 0.02 seconds.
A related note: The default timeout waiting for data from the server ..., but nothing looks applicable to the problem.
I am having no luck reproducing this on a test system.
--
Robert Borkowski