On 28/11/2015 at 22:46, André Janna wrote:
I took another network trace, this time at both the Squid and Windows
client ends.
cache.log:
2015/11/27 11:30:55.610 kid1| SECURITY ALERT: Host header forgery
detected on local=177.43.198.106:443 remote=192.168.64.4:61802 FD 5465
flags=33 (local IP does
Quoting Amos Jeffries:
So, the first place to look is not Squid, I think. But why did at least 6 of
those ACK packets not make it back to the client? That needs
resolving first, to ensure that the TCP level is operating correctly.
Only then if the problem remains looking at
On 27/11/2015 7:36 a.m., André Janna wrote:
> On 24/11/2015 at 00:54, Amos Jeffries wrote:
>> FYI: unless you have a specific need for 3.5 you should be fine with
>> the 3.4 squid3 package that is available for Jessie from Debian
>> backports. The alternative is going the other way
On 27.11.15 at 0:36, André Janna wrote:
> On 24/11/2015 at 00:54, Amos Jeffries wrote:
>> FYI: unless you have a specific need for 3.5 you should be fine with
>> the 3.4 squid3 package that is available for Jessie from Debian
>> backports. The
On 24/11/2015 at 00:54, Amos Jeffries wrote:
FYI: unless you have a specific need for 3.5 you should be fine with
the 3.4 squid3 package that is available for Jessie from Debian
backports. The alternative is going the other way and upgrading right
to the latest 3.5 snapshot (and/or
Just as a side note, you should know that tcpdump on a busy server needs a
bigger buffer size to prevent dropped packets in the capture.
Eliezer
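As an illustrative sketch of that advice (interface name, filter, and buffer size are assumptions, not from the thread):

```shell
# Capture with a larger kernel buffer (-B takes KiB on Linux tcpdump) so a
# busy server does not drop packets during the trace; needs root.
# -s 0 keeps full packets, -w writes a pcap for later analysis.
tcpdump -i eth0 -B 16384 -s 0 -w squid-trace.pcap tcp port 443
```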
On 24/11/2015 04:54, Amos Jeffries wrote:
If possible identifying one of these connections from its SYN onwards
would be great, but if not then a 20min
On 24/11/2015 7:45 a.m., André Janna wrote:
>
> On 22/11/2015 at 16:25, Eliezer Croitoru wrote:
>> Hey Andre,
>>
>> There are a couple of things to the picture.
>> It's not only Squid that is to "blame".
>> It depends on what your OS TCP stack settings are.
>> To verify a couple of things you can
On 22/11/2015 at 16:25, Eliezer Croitoru wrote:
Hey Andre,
There are a couple of things to the picture.
It's not only Squid that is to "blame".
It depends on what your OS TCP stack settings are.
To verify a couple of things you can try to use the netstat tool.
Run the command "netstat -nto" to
On 23/11/2015 7:25 a.m., Eliezer Croitoru wrote:
> Hey Andre,
>
> There are a couple of things to the picture.
> It's not only Squid that is to "blame".
> It depends on what your OS TCP stack settings are.
> To verify a couple of things you can try to use the netstat tool.
> Run the command "netstat -nto"
Quoting Amos Jeffries:
CONNECT requests with tunnels can be particularly long lived, mobiles
and their applications stay active for weeks on end with few outward
signs of what is happening inside the encrypted tunnel. The only way to
be sure the connection is finished
Hey Andre,
There are a couple of things to the picture.
It's not only Squid that is to "blame".
It depends on what your OS TCP stack settings are.
To verify a couple of things you can try to use the netstat tool.
Run the command "netstat -nto" to see the status of the timers.
You can then see how long
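To make that concrete, a minimal invocation (the ss fallback is my addition; net-tools' netstat is what the message names):

```shell
# -n numeric addresses, -t TCP only, -o show per-connection timers
# (keepalive / timewait countdowns), which reveal how long a given
# connection will still be held open.
netstat -nto 2>/dev/null || ss -nto
```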
Quoting André Janna:
Squid is still using file descriptors 12 and 14 (and a lot of others)
for the same connections as yesterday, although the mobile devices it
was connected to have not been online in our network for at least 15
hours.
Update: Squid released file descriptors after about
I'm running Squid 3.5.10 on Debian Jessie and after some hours of execution
it runs out of file descriptors.
Squid is listening on ports 3125, 3126 and 3127.
Port 3126 is used for intercepting, via iptables redirect, https
connections mostly from mobile devices like smartphones. On this port is
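For reference, the kind of interception rule implied above can be sketched like this (the source network is taken from the client address in the cache.log alert; the rest is an assumption — only port 3126 comes from the message):

```shell
# Redirect HTTPS (port 443) arriving from the client LAN to Squid's
# intercept port 3126 on this box. Run as root on the gateway.
iptables -t nat -A PREROUTING -s 192.168.64.0/24 -p tcp --dport 443 \
  -j REDIRECT --to-ports 3126
```

Note that the "Host header forgery" alert earlier in this thread is a common companion of intercept setups when the client and the proxy receive different DNS answers for the same name.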
On 22/11/2015 4:10 p.m., André Janna wrote:
> I'm running Squid 3.5.10 on Debian Jessie and after some hours of execution
> it runs out of file descriptors.
> Squid is listening on ports 3125, 3126 and 3127.
> Port 3126 is used for intercepting, via iptables redirect, https
> connections mostly
Hello,
Got an odd problem with file descriptors that I'm hoping you guys could help
me out with.
Background
I'm running CentOS 5.5 and squid 3.0 Stable 5.
The system is configured with 4096 file descriptors via the following in
/etc/security/limits.conf:
* - nofile 4096
[squid-users] File Descriptors
Hello,
Got an odd problem with file descriptors that I'm hoping you guys could help
me out with.
Background
I'm running CentOS 5.5 and squid 3.0 Stable 5.
The system is configured with 4096 file descriptors via the following in
/etc/security/limits.conf
it? The upper limit is set at
compile time. I ran into this problem myself.
-Dan
-Original Message-
From: Superted666 [mailto:ruckafe...@gmail.com]
Sent: Mon 7/5/2010 3:33 PM
To: squid-users@squid-cache.org
Cc:
Subject: [squid-users] File Descriptors
Hello,
Got
Mr ED,
Changing the FD value in limits.conf and including ulimit -HSn
4096 in the squid daemon script does not change the default Squid FD limit.
This needs to be done at compile time.
Recompile your Squid and run ulimit -HSn 4096 after the compile and before
make install. That will work.
Regards,
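A sketch of that sequence, assuming a 4096-descriptor target (the configure option name matches the one used elsewhere in this thread; verify it against your Squid version):

```shell
# Raise (or lower, which never needs root) both limits in the build shell.
ulimit -HSn 4096 || echo "could not change the hard limit (may need root)"

# Confirm what a process started from this shell would inherit:
echo "soft fd limit: $(ulimit -Sn)"
echo "hard fd limit: $(ulimit -Hn)"

# Then, still in this same shell (illustrative source-tree commands):
#   ./configure --with-filedescriptors=4096
#   make
#   make install
```

The point is that the limit is sampled from the environment at build and start time, so it must be set in the shell that runs those steps.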
Hello,
I'm running squid-3.0.STABLE18, which I've compiled with the option
--with-filedescriptors=32768
I have added the following lines to /etc/security/limits.conf on my
Red Hat host:
squid hard nofile 32768
squid soft nofile 32768
As squid user,
On Wed, 16 Dec 2009 18:03:53 +0100, Solaris Treize
solaristre...@gmail.com wrote:
Hello,
I'm running squid-3.0.STABLE18, which I've compiled with the option
--with-filedescriptors=32768
I have added the following lines to /etc/security/limits.conf on my
Red Hat host:
squid hard
Hi, I am running Squid 3.0 STABLE6 on Fedora Core 6. When I run Squid from the
command prompt, it sets the file descriptor value to 131072, which
I defined at compile time. But running it from rc.local (just a line with the
path to the squid executable) sets
the file descriptor value to 1024.
Squid
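The usual fix for the rc.local case above is to raise the limit in the same shell that launches Squid at boot; a sketch of an rc.local fragment (the binary path is an assumption):

```shell
# /etc/rc.local fragment: the daemon inherits this shell's fd limit,
# so raise it before starting Squid instead of relying on the 1024 default.
ulimit -HSn 131072
/usr/local/squid/sbin/squid
```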
On Tue, 2008-06-24 at 23:42 +0600, Azhar H. Chowdhury wrote:
Hi, I am running Squid 3.0 STABLE6 on Fedora Core 6. When I run Squid from the
command prompt, it sets the file descriptor value to 131072, which
I defined at compile time. But running it from rc.local (just a line with the
path of
On Fri, 10 Aug 2007 01:17:21 +0530
Preetish [EMAIL PROTECTED] wrote:
Odd... are you sure you are really running the new binary, and that the
ulimit setting is done correctly in the start script?
#Squid startup/shutdown
if [ -z "$1" ]; then
Hi All,
Recompiling the kernel with MAXFILES=8192 worked. I even had
to add the line :openfiles-max=infinity:\
to /etc/login.def in the daemon section. Well, now the file descriptor count
has increased and even the internet speed is good (I'll know it
better by tomorrow). I have kept my
Hi Everybody,
I have recompiled Squid the way I saw in one of the how-tos. This is what I did:
1) I uninstalled Squid
2)
#ulimit -HSn 8192
#then recompiled squid with --with-maxfd=8192
Then in my Squid start script I added ulimit -HSn 8192,
but it still shows the same number of file
On tor, 2007-08-09 at 17:00 +0530, Preetish wrote:
Hi Everybody,
I have recompiled Squid the way I saw in one of the how-tos. This is what I did:
1) I uninstalled Squid
2)
#ulimit -HSn 8192
#then recompiled squid with --with-maxfd=8192
Then in my Squid start script I added ulimit
Preetish wrote:
Hi Everybody,
I have recompiled Squid the way I saw in one of the how-tos. This is what I did:
1) I uninstalled Squid
2)
#ulimit -HSn 8192
#then recompiled squid with --with-maxfd=8192
Then in my Squid start script I added ulimit -HSn 8192,
but it still shows the same
Odd... are you sure you are really running the new binary, and that the
ulimit setting is done correctly in the start script?
#Squid startup/shutdown
if [ -z "$1" ]; then
echo "Syntax is: $0 start|stop"
exit
fi
if [ "$1" != start ] && [ "$1" != stop ]; then
echo -n Wrong
On Fri, 2007-02-02 at 14:25 -0200, Michel Santos wrote:
When Squid sees it's short of file descriptors it stops accepting
new requests, focusing on finishing what it has already accepted.
Isn't this conflicting with what you said before?
No.
Does Squid recover, or does it need to be restarted?
On Thu, 2007-02-01 at 20:01 -0600, Matt wrote:
What does Squid do, or act like, when it's out of file descriptors?
When Squid sees it's short of file descriptors it stops accepting new
requests, focusing on finishing what it has already accepted.
And long before there is a shortage it disables the
On Fri, 2007-02-02 at 10:54 +0800, Adrian Chadd wrote:
If your system or process FD limits are lower than what Squid believes it
to be, then yup. It'll get unhappy.
Only temporarily. It automatically adjusts fd usage to what the system
can sustain when hitting the limit (see fdAdjustReserved)
Henrik Nordstrom said in the last message:
On Fri, 2007-02-02 at 10:54 +0800, Adrian Chadd wrote:
If your system or process FD limits are lower than what Squid believes it
to be, then yup. It'll get unhappy.
Only temporarily. It automatically adjusts fd usage to what the system
can
What does Squid do, or act like, when it's out of file descriptors? If
cachemgr says it still has some left, could it really still be out?
Matt
On Thu, Feb 01, 2007, Matt wrote:
What does Squid do, or act like, when it's out of file descriptors? If
cachemgr says it still has some left, could it really still be out?
If your system or process FD limits are lower than what Squid believes it
to be, then yup. It'll get unhappy.
(It generally
From: kabindra shrestha [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 22, 2006 7:28 PM
To: Gregori Parker
Subject: Re: [squid-users] FILE DESCRIPTORS
You have to run the same command, ulimit -HSn 8192, before starting Squid. It
is working fine on my server.
Sorry to be pounding the list lately, but I'm about to lose it with
these file descriptors...
I've done everything I have read about to increase file descriptors on
my caching box, and now I just rebuilt a fresh clean squid. Before I
ran configure, I did ulimit -HSn 8192, and I noticed that
-Original Message-
From: Mark Elsen [mailto:[EMAIL PROTECTED]
Sent: Thursday, 23 February 2006 09:26
To: Gregori Parker
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] FILE DESCRIPTORS
Sorry to be pounding the list lately, but I'm about to lose it with
these file
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] FILE DESCRIPTORS
Sorry to be pounding the list lately, but I'm about to lose it with
these file descriptors...
I've done everything I have read about to increase file descriptors on
my caching box, and now I just rebuilt a fresh
Sorry to be pounding the list lately, but I'm about to lose it with
these file descriptors...
I've done everything I have read about to increase file descriptors on
my caching box, and now I just rebuilt a fresh clean squid. Before I
ran configure, I did ulimit -HSn 8192, and I noticed that
On Fri, 1 Jul 2005, Sam Reynolds wrote:
What is the maximum number of File descriptors that you can have?
Depends on your OS.
The default is 1024.
Is there a correlation between the number of File Descriptors and RAM?
Yes, but even more so between file descriptors and CPU usage.
At
I am at 8192 File Descriptors, but am still running out, so I am moving
to 16,000 next. I just need to know the ramifications of what I am
doing.
Quite likely you have some other problem, with the file descriptor shortage
only being a side effect. 8K file descriptors is well more than plenty
What is the maximum number of File descriptors that you can have?
Is there a correlation between the number of File Descriptors and RAM?
At what point does increasing the number of File Descriptors become a problem?
I am at 8192 File Descriptors, but am still running out, so I am moving to
There is, and it's in the FAQ. Check
closer: http://www.squid-cache.org/Doc/FAQ/FAQ-11.html#ss11.4
At 10:55 PM 11/2/2003, [EMAIL PROTECTED] wrote:
Mark
FAQ already checked. I just wondered (grasping at straws) if there was some
way of doing it.
Thanks anyway!
Jeff
--
Sorry, I just noticed your question actually said without a
recompile. In that case, the answer is no: you must recompile to raise
the file descriptor limit in squid.
At 10:55 PM 11/2/2003, [EMAIL PROTECTED] wrote:
Mark
FAQ already checked. I just wondered (grasping at straws) if there was
03/11/2003 18:22  Subject: Re: [squid-users] file descriptors - urgent request
[EMAIL PROTECTED] wrote:
If anyone can give me a definitive response in the next two hours, I would
be extremely grateful.
I believe that Squid needs a recompile, with a new system fd limit in
place, in order to increase its fd limit(?) Is there any way of increasing
the fd limit for
cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] file descriptors - urgent request
03/11/2003 17
On Tuesday 19 August 2003 22.22, [EMAIL PROTECTED] wrote:
runtime information page it shows squid seeing 1024 file
descriptors available.
Do I need to recompile squid to fix this?
Most likely yes.
The ulimit set when you compile Squid is built into the Squid binary
and this binary can not
I have finally managed to get WCCPv2 working on my box. Works great.
Now I am having errors show up in my log saying I am running out of file
descriptors.
I have checked my /proc/sys/fs/file-max and /proc/sys/fs/inode-nr files and
they both are set pretty high. When I check the cachemgr runtime
Now I am having errors show up in my log saying I am running out of file
descriptors.
Using ulimit (or its equivalent), set the hard and soft limits both
before compiling and before running Squid.
This has been recently discussed on the list (today, I think), and
also several times in the
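For reference, the limits discussed across these messages can be inspected like this on Linux (the /proc paths are the ones named in the thread):

```shell
# Kernel-wide ceiling on open file handles:
cat /proc/sys/fs/file-max

# Handles currently allocated / free / maximum:
cat /proc/sys/fs/file-nr

# Inode counters also mentioned above:
cat /proc/sys/fs/inode-nr

# Per-process limit that a Squid started from this shell would inherit:
ulimit -n
```

A high /proc/sys/fs/file-max with Squid still complaining usually points at the per-process ulimit or the compiled-in maximum, not the kernel-wide ceiling.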