On 20/09/2012 5:55, Amos Jeffries wrote:
> Welcome to the real world. Software all has capacity limits. Someone is
> performing a *DoS* on your proxy using an internal link with higher
> capacity than your service software. What do you do about that?
> * close the hole (fix the app, disable it
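One proxy-side way to "close the hole" is to cap what any single broken client can consume. A minimal squid.conf sketch, assuming the misbehaving host is 192.168.1.50 (a hypothetical address) and that delay pools are compiled in:

```
# Hypothetical sketch: limit the damage one broken client can do.
acl broken_client src 192.168.1.50

# Refuse more than 20 concurrent connections from that host
acl manyconn maxconn 20
http_access deny broken_client manyconn

# Or throttle it with a delay pool (class 1: one aggregate bucket)
delay_pools 1
delay_class 1 1
delay_access 1 allow broken_client
delay_access 1 deny all
delay_parameters 1 128000/128000   # ~1 Mbit/s sustained
```

The address and the exact limits are placeholders; the point is that maxconn and delay_pools let the proxy survive a misbehaving app even before the app itself is fixed.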
On 19/09/2012 7:04 p.m., Fran Márquez wrote:
Hi friends,
I have a strange saturation problem caused by a broken client and I don't
know how to fix it (I can force the user to disable the app that causes the
problem, but I think there should be a way to prevent a bad
client from overloading the server and affecting the proxy service by itself).
I have
Hi!
I would like some advice from people more experienced with
squid than me. :)
We are trying to set up a fully transparent squid proxy (with TPROXY) for
about 6000 clients, according to the instructions on the wiki.
At the moment the system is configured and working well at half of the
planned loa
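For context, the usual TPROXY plumbing from the wiki pattern looks roughly like this; the 3129 intercept port and the single-interface layout are assumptions to adapt, not a drop-in recipe:

```
# squid.conf side:
#   http_port 3129 tproxy

# Policy routing so marked packets are delivered locally to Squid:
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

# Divert established-socket traffic and TPROXY new port-80 flows:
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
    -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
```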
On Mon, 23 Aug 2010 13:11:49 -0300, Robert Pipca wrote:
> Hi,
>
> 2010/8/19 Amos Jeffries :
>> Your COSS dirs are already sized at nearly 64GB each (65520 MB). With
>> objects up to 1MB stored there. That holds most Windows updates, which are
>> usually only a few hundred KB each.
>> I'm not sure what your slice size is, but 15 of them are stored in RAM at
>> any
Hi!
That's a dansguardian issue: I had something similar, but especially
with SSL sites.
I just got tired of dansguardian (I made it "work", but from time to
time the problem would come back), and started to use plain squid ACLs
for small lists, and squidguard.
I hope this helps,
Ildefonso Camar
I put a new squid/dansguardian in place duplicating what I had for a couple of
other networks. The proxy is configured for everyone going through one of two
groups with the ability in the 2nd group to elevate their privileges to bypass
the filter by clicking on a link in the denied page. The
Hi,
2010/8/19 Amos Jeffries :
>> I'd like to know if I can adjust the max-size option of coss, with
>> something like "--with-coss-membuf-size" ? Or is really hard-coded?
>
> It can be altered but not to anything big...
What's something not big? Around 10MB?
Does --with-coss-membuf-size=10485760
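If the source does allow it, the rebuild would look something like the sketch below. Whether Squid 2.x accepts a 10MB membuf without patching is exactly the open question in this thread, so treat the value as an assumption to verify against your tree:

```
./configure --enable-storeio=aufs,coss \
            --with-coss-membuf-size=10485760
make && make install
```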
Robert Pipca wrote:
Hi,
2010/8/18 Jose Ildefonso Camargo Tolosa :
> Yeah, I missed that last night (I was sleepy, I guess), thank God you
> people are around! Still, he would need faster disk access, unless
> he is talking about 110Mbps (~12MB/s) instead of 110MB/s (~1Gbps).
>
> So, Robert, is that 110Mbps or 1Gbp
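The factor-of-8 ambiguity is worth pinning down, because it changes the hardware answer completely. A quick sanity check (plain unit arithmetic, not from the thread):

```python
def mbps_to_mb_per_s(mbps: float) -> float:
    """Megabits per second to megabytes per second."""
    return mbps / 8

def mb_per_s_to_gbps(mb_per_s: float) -> float:
    """Megabytes per second to gigabits per second."""
    return mb_per_s * 8 / 1000

print(mbps_to_mb_per_s(110))   # 110 Mbps -> 13.75 MB/s
print(mb_per_s_to_gbps(110))   # 110 MB/s -> 0.88 Gbps, i.e. roughly 1Gbps
```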
Hi!
On Tue, Aug 17, 2010 at 11:06 PM, Amos Jeffries wrote:
> On Tue, 17 Aug 2010 22:43:33 -0430, Jose Ildefonso Camargo Tolosa
> wrote:
>> Hi!
>>
>> In my own personal opinion: your hard drive alone is not enough to
>> handle that much traffic (110MBytes/s, ~1Gbps). See, most SATA hard
>> drive
Robert Pipca wrote:
Hi,
2010/8/18 Amos Jeffries :
>>> cache_dir aufs /cache 756842 60 100
>
> What's missing appears to be min-size=1048576 on the AUFS to push all the
> small objects into the better COSS directories. (NOTE: the value is COSS
> max-size+1)
Duh, now I changed it to:
cache_dir aufs /cache 756842 60
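Put together, the intended partitioning from Amos's advice would look like this (values taken from this thread; only the min-size option is the new part). COSS takes everything up to 1MB minus one byte, and AUFS takes only objects of 1MB and above, so the two stores never compete for the same objects:

```
cache_dir coss /cache/coss1 65520 max-size=1048575 \
    max-stripe-waste=32768 block-size=4096 membufs=15
cache_dir aufs /cache 756842 60 100 min-size=1048576
```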
Hi!
Sorry, had to post some corrections. duh
On Tue, Aug 17, 2010 at 10:43 PM, Jose Ildefonso Camargo Tolosa
wrote:
> Hi!
>
> In my own personal opinion: your hard drive alone is not enough to
> handle that much traffic (110MBytes/s, ~1Gbps). See, most SATA hard
> drives (7200rpm) gives
Hi!
In my own personal opinion: your hard drive alone is not enough to
handle that much traffic (110MBytes/s, ~1Gbps). See, most SATA hard
drives (7200rpm) give around 50~70MB/s *sequential* read speed; your
cache reads are *not* sequential, so it will be slower. In my
opinion, you need someth
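A back-of-envelope estimate of the spindle count this implies. The per-disk numbers below are assumptions, not measurements: ~60 MB/s sequential for a 7200rpm SATA disk, and a ~6x slowdown for cache-style random reads:

```python
import math

def disks_needed(target_mb_per_s, seq_mb_per_s=60.0, random_penalty=6.0):
    """Rough spindle count needed to sustain a target throughput.

    The defaults are rule-of-thumb assumptions: ~60 MB/s sequential per
    7200rpm SATA disk, divided by ~6 for random cache reads, giving
    about 10 MB/s of effective throughput per disk.
    """
    effective = seq_mb_per_s / random_penalty
    return math.ceil(target_mb_per_s / effective)

print(disks_needed(110))    # if "110M" means 110 MB/s: ~11 disks
print(disks_needed(13.75))  # if it means 110 Mbps (~13.75 MB/s): 2 disks
```

Either way the conclusion matches the advice above: a single drive cannot keep up if the figure really is 110 MBytes/s.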
Hi.
I'm using squid on a high speed network (with 110M of http traffic).
I'm using 2.7.STABLE7 with these cache_dir:
cache_dir aufs /cache 756842 60 100
cache_dir coss /cache/coss1 65520 max-size=1048575
max-stripe-waste=32768 block-size=4096 membufs=15
cache_dir coss /cache/coss2 65520 max-size
Justin Lintz wrote:
On Wed, Feb 10, 2010 at 4:23 PM, Amos Jeffries wrote:
>> http_access allow localhost
>> http_access allow all
>
> Why?
Sorry, I should mention this is running in a reverse proxy setup
>
> So what is the request/second load on Squid?
> Is RAID involved?
The underlying disks are running in a RAID
On Wed, 10 Feb 2010 11:36:40 -0500, Justin Lintz wrote:
> Squid ver: squid-2.6.STABLE21-3
> The server is a xen virtual with 6GB of ram available to it.
>
> relevant lines in Squid.conf:
>
> hierarchy_stoplist cgi-bin ?
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
>
Justin Lintz wrote:
don't top post,
we have several heavy-load squids, and we realized that sometimes inet surfing is
slow; we've discovered that it is because of IO (as you see in your top command,
more than 1% of IO waiting), so we purge our cache to keep it from hitting the
cache_swap_high percentage very often
On Wednesday 10 February 2010 12:49:47, you wrote:
> don't top post,
>
> we have several heavy-load squids, and we realized that sometimes inet
> surfing is slow; we've discovered that it is because of IO (as you see in
> your top command, more than 1% of IO waiting), so we purge our cache to keep
> it from hitting the cache_swap_high percentage very often
>
Th
On Wednesday 10 February 2010 11:41:29, Justin Lintz wrote:
> We're seeing the symptoms across 4 servers on different hardware.
> What would be the reason for adjusting the cache_swap_high to 96?
> Thanks
>
> - Justin Lintz
>
>
>
> On Wed, Feb 10, 2010 at 11:45 AM, Luis Daniel Lucio Quiroz
>
We're seeing the symptoms across 4 servers on different hardware.
What would be the reason for adjusting the cache_swap_high to 96?
Thanks
- Justin Lintz
On Wed, Feb 10, 2010 at 11:45 AM, Luis Daniel Lucio Quiroz
wrote:
> On Wednesday 10 February 2010 10:36:40, Justin Lintz wrote:
>> Squid ve
On Wednesday 10 February 2010 10:36:40, Justin Lintz wrote:
> Squid ver: squid-2.6.STABLE21-3
> The server is a xen virtual with 6GB of ram available to it.
>
> relevant lines in Squid.conf:
>
> hierarchy_stoplist cgi-bin ?
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apach
Squid ver: squid-2.6.STABLE21-3
The server is a xen virtual with 6GB of ram available to it.
relevant lines in squid.conf:
hierarchy_stoplist cgi-bin ?
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_mem 4096 MB
maximum_object_size 8192 KB
maximum_object_size_in_memory
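One thing worth checking on a 6GB VM: cache_mem 4096 MB plus the in-memory index for the disk cache can exceed physical RAM and force swapping. A common rule of thumb is ~10 MB of index per GB of cache_dir. A sketch of the arithmetic; the 300GB cache_dir is a made-up figure for illustration, since the real value is truncated above:

```python
def squid_ram_estimate_mb(cache_dir_mb, cache_mem_mb):
    """Very rough RAM need: ~10 MB of index per GB of disk cache
    (a rule of thumb, not an exact figure) plus cache_mem itself.
    Other overheads (in-transit objects, OS cache) are ignored."""
    index_mb = (cache_dir_mb / 1024) * 10
    return index_mb + cache_mem_mb

# Hypothetical 300GB cache_dir with the cache_mem 4096 from this config:
print(squid_ram_estimate_mb(300 * 1024, 4096))  # 7096.0 MB, over the 6GB box
```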
Thanks all
it was a uname -a problem,
Pablo García wrote:
Luis, please define "heavy load": how many req/s? Is this a forward,
transparent, or reverse proxy? Is this a memory-only cache? What are
the vmstat outputs when it stops responding? Did you run ulimit -n
16384 before starting squid?
Are there any error messages in the cache.log?
Regards, Pab
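On the descriptor front specifically, the things to verify are sketched below. The 16384 figure matches the rebuild mentioned in this thread; the exact directive name varies by Squid version, so treat this as a checklist rather than a recipe:

```
# Raise the limit in the shell (or init script) that launches squid:
ulimit -n 16384
# Confirm the compiled-in ceiling matches what you built:
squid -v | grep maxfd
# Squid 3.x can also set it in squid.conf:
#   max_filedescriptors 16384
```

If the ulimit is raised in your own shell but the init script starts squid from a fresh environment, the new limit never reaches the daemon, which would match the "randomly stops responding under load" symptom.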
Hi Squids,
We are putting squid into a heavy-load environment. My squid is getting tired:
after a while of load testing, 3128/tcp randomly stops responding to
requests. All other ports on that server respond ok.
I've recompiled my squid with 16k file handles, but this does not seem to
h
Squid Cache: Version 2.5.STABLE12
[EMAIL PROTECTED] logs]# uptime
12:58:48 up 205 days, 15:18, 1 user, load average: 1.97, 2.10, 1.79
  PID USER  PRI NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMM
16142 squid  16  0 1144M 880M  1420 R    18.5 43.4 32090m   1 squid
There is nothing