Re: [squid-users] Excessive TCP memory usage

2016-06-14 Thread Deniz Eren
Little bump :)

I have posted a bug report with steps to reproduce. The problem still
exists, and I am curious whether anyone else is having the same
problem.

http://bugs.squid-cache.org/show_bug.cgi?id=4526

On Wed, May 25, 2016 at 1:18 PM, Deniz Eren <denizl...@denizeren.net> wrote:
> When I listened to the connections between squid and icap using tcpdump, I
> saw that after a while icap closes the connection but squid does not,
> so the connection stays in the CLOSE_WAIT state:
>
> [root@test ~]# tcpdump -i any -n port 34693
> tcpdump: WARNING: Promiscuous mode not supported on the "any" device
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on any, link-type LINUX_SLL (Linux cooked), capture size 96 bytes
> 13:07:31.802238 IP 127.0.0.1.icap > 127.0.0.1.34693: F
> 2207817997:2207817997(0) ack 710772005 win 395 <nop,nop,timestamp
> 104616992 104016968>
> 13:07:31.842186 IP 127.0.0.1.34693 > 127.0.0.1.icap: . ack 1 win 3186
> <nop,nop,timestamp 104617032 104616992>
>
> [root@test ~]# netstat -tulnap|grep 34693
> tcp   215688      0 127.0.0.1:34693         127.0.0.1:1344          CLOSE_WAIT  19740/(squid-1)
>
> These CLOSE_WAIT connections do not time out and stay until the squid
> process is killed.
>
> 2016-05-25 10:37 GMT+03:00 Deniz Eren <denizl...@denizeren.net>:
>> 2016-05-24 21:47 GMT+03:00 Amos Jeffries <squ...@treenet.co.nz>:
>>> On 25/05/2016 5:50 a.m., Deniz Eren wrote:
>>>> Hi,
>>>>
>>>> After upgrading to squid 3.5.16 I realized that squid started using
>>>> much of kernel's TCP memory.
>>>
>>> Upgrade from which version?
>>>
>> Upgrading from squid 3.1.14. I started using c-icap and ssl-bump.
>>
>>>>
>>>> When squid has been running for a long time, TCP memory usage looks like this:
>>>> test@test:~$ cat /proc/net/sockstat
>>>> sockets: used *
>>>> TCP: inuse * orphan * tw * alloc * mem 20
>>>> UDP: inuse * mem *
>>>> UDPLITE: inuse *
>>>> RAW: inuse *
>>>> FRAG: inuse * memory *
>>>>
>>>> When I restart squid the memory usage drops dramatically:
>>>
>>> Of course it does. By restarting you just erased all of the operational
>>> state for an unknown but large number of active network connections.
>>>
>> That's true, but what I meant was that squid's CLOSE_WAIT connections are
>> using too much memory and are not timing out.
>>
>>> Whether many of those should have been still active or not is a
>>> different question, the answer to which depends on how you have your
>>> Squid configured, and what the traffic through it has been doing.
>>>
>>>
>>>> test@test:~$ cat /proc/net/sockstat
>>>> sockets: used *
>>>> TCP: inuse * orphan * tw * alloc * mem 10
>>>> UDP: inuse * mem *
>>>> UDPLITE: inuse *
>>>> RAW: inuse *
>>>> FRAG: inuse * memory *
>>>>
>>>
>>> The numbers you replaced with "*" are rather important for context.
>>>
>>>
>> Today again I saw the problem:
>>
>> test@test:~$ cat /proc/net/sockstat
>> sockets: used 1304
>> TCP: inuse 876 orphan 81 tw 17 alloc 906 mem 29726
>> UDP: inuse 17 mem 8
>> UDPLITE: inuse 0
>> RAW: inuse 1
>> FRAG: inuse 0 memory 0
>>
>>>> I'm using Squid 3.5.16.
>>>>
>>>
>>> Please upgrade to 3.5.19. Some important issues have been resolved. Some
>>> of them may be related to your TCP memory problem.
>>>
>>>
>> I have upgraded now and problem still exists.
>>
>>>> When I look with "netstat" and "ss" I see lots of CLOSE_WAIT
>>>> connections from squid to ICAP or from squid to upstream server.
>>>>
>>>> Do you have any idea about this problem?
>>>
>>> Memory use by the TCP system of your kernel has very little to do with
>>> Squid. Number of sockets in CLOSE_WAIT does have some relation to Squid
>>> or at least to how the traffic going through it is handled.
>>>
>>> If you have disabled persistent connections in squid.conf then lots of
>>> closed sockets and FD are to be expected.
>>>
>>> If you have persistent connections enabled, then fewer closures should
>>> happen. But some will, so expectations depend on how high the traffic
>>> load is.
>>>
>> Persistent connection parameters are enabled in my conf; the problem
>> occurs especially with connections to the c-icap service.

[squid-users] Excessive TCP memory usage

2016-05-24 Thread Deniz Eren
Hi,

After upgrading to squid 3.5.16 I realized that squid started using
much of kernel's TCP memory.

When squid has been running for a long time, TCP memory usage looks like this:
test@test:~$ cat /proc/net/sockstat
sockets: used *
TCP: inuse * orphan * tw * alloc * mem 20
UDP: inuse * mem *
UDPLITE: inuse *
RAW: inuse *
FRAG: inuse * memory *

When I restart squid the memory usage drops dramatically:
test@test:~$ cat /proc/net/sockstat
sockets: used *
TCP: inuse * orphan * tw * alloc * mem 10
UDP: inuse * mem *
UDPLITE: inuse *
RAW: inuse *
FRAG: inuse * memory *

I'm using Squid 3.5.16.

When I look with "netstat" and "ss" I see lots of CLOSE_WAIT
connections from squid to ICAP or from squid to upstream server.
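
To see where they pile up, I group the CLOSE_WAIT sockets by peer roughly
like this (a quick sketch; column positions assume the usual netstat layout):

netstat -tnp 2>/dev/null | awk '$6 == "CLOSE_WAIT" {print $5}' | sort | uniq -c | sort -rn | head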

Do you have any idea about this problem?

Regards,


[squid-users] Fwd: Mark outgoing connection mark same as client side mark

2016-05-11 Thread Deniz Eren
> On 11/05/2016 8:19 p.m., Deniz Eren wrote:
>> Hi,
>>
>> In my system I am using netfilter marks to shape traffic (SNAT, QoS,
>> etc.); however, when I redirect traffic to Squid using TPROXY I lose the
>> mark value (obviously).
>
> Not obvious at all. The MARK value is available to Squid, and if
> configured to look it up Squid should be doing so.
>
By saying "obviously" I meant that if squid doesn't mark the packet, it's
not available in the OUTPUT chain.

>> I saw the configuration directive qos_flows, but it's
>> only applicable to incoming connections (some website -> squid ->
>> client PC); what I need is the opposite: I want to pass the mark of
>> outgoing connections (client PC -> squid -> some website). I want to
>> mark the packet in mangle PREROUTING and then redirect it to TPROXY,
>> and for packets coming out of squid I want to use the same mark in the
>> mangle OUTPUT or POSTROUTING chains. Is there a way to do that?
>>
>
> tcp_outgoing_mark or qos_flows mark.
http://www.squid-cache.org/Doc/config/qos_flows/
"to mark outgoing connections to the client, based on where the reply
was sourced."
From here I understand that the marking process is like this:
Web Site -> |  -> mark -> squid -> mark -> | -> Client PC
And in my tests I saw this behavior; the opposite did not work. Is the
opposite possible:
ClientPC -> |  -> mark -> squid -> mark -> | -> Web Site
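
As a partial workaround I could probably approximate it with static per-subnet
marks; a minimal sketch (the ACL names, subnets and mark values are only
examples, not my real config):

acl lan_a src 192.168.1.0/24
acl lan_b src 192.168.2.0/24
tcp_outgoing_mark 0x10 lan_a
tcp_outgoing_mark 0x20 lan_b

But that only reproduces a mark per source network; it does not copy an
arbitrary mark set in PREROUTING.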

>
> The problem you will find, however, is that HTTP is both stateless and
> multiplexing. One incoming request may generate zero or several outgoing
> requests. The outbound connection may also be shared by several requests
> with different incoming connection MARK values.
Do you mean two sources A and B both going to C can share the same outgoing
connection? Is there a way to change this behavior?

>
> So you need to design your system not to rely on an outbound connection
> existing, and to handle MARK being changed mid-connection.
>

> Amos
>


[squid-users] Mark outgoing connection mark same as client side mark

2016-05-11 Thread Deniz Eren
Hi,

In my system I am using netfilter marks to shape traffic (SNAT, QoS,
etc.); however, when I redirect traffic to Squid using TPROXY I lose the
mark value (obviously). I saw the configuration directive qos_flows, but it's
only applicable to incoming connections (some website -> squid ->
client PC); what I need is the opposite: I want to pass the mark of
outgoing connections (client PC -> squid -> some website). I want to
mark the packet in mangle PREROUTING and then redirect it to TPROXY,
and for packets coming out of squid I want to use the same mark in the
mangle OUTPUT or POSTROUTING chains. Is there a way to do that?

This was discussed in the thread below, but no solution was given:
http://www.squid-cache.org/mail-archive/squid-users/201403/0132.html
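
For context, the usual TPROXY interception setup (as documented on the Squid
wiki) already sets a mark in mangle PREROUTING, roughly like the sketch below
(the port numbers and the 0x1 mark value are illustrative, not necessarily my
exact rules):

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 0x1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129

That mark is only used by the fwmark routing rule on the way into squid,
though; it is not re-applied to the connections squid itself opens.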

Regards,


Re: [squid-users] Excessive TCP memory usage

2016-05-25 Thread Deniz Eren
When I listened to the connections between squid and icap using tcpdump, I
saw that after a while icap closes the connection but squid does not,
so the connection stays in the CLOSE_WAIT state:

[root@test ~]# tcpdump -i any -n port 34693
tcpdump: WARNING: Promiscuous mode not supported on the "any" device
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 96 bytes
13:07:31.802238 IP 127.0.0.1.icap > 127.0.0.1.34693: F
2207817997:2207817997(0) ack 710772005 win 395 <nop,nop,timestamp
104616992 104016968>
13:07:31.842186 IP 127.0.0.1.34693 > 127.0.0.1.icap: . ack 1 win 3186
<nop,nop,timestamp 104617032 104616992>

[root@test ~]# netstat -tulnap|grep 34693
tcp   215688      0 127.0.0.1:34693         127.0.0.1:1344          CLOSE_WAIT  19740/(squid-1)

These CLOSE_WAIT connections do not time out and stay until the squid
process is killed.
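
Each of these also pins unread data in its receive queue (the 215688 above is
the Recv-Q byte count), which is presumably where the kernel TCP memory goes.
A rough tally (a sketch; column positions assume the usual netstat layout):

netstat -tnp 2>/dev/null | awk '$6 == "CLOSE_WAIT" {n++; q += $2} END {print n " sockets, " q " bytes queued"}'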

2016-05-25 10:37 GMT+03:00 Deniz Eren <denizl...@denizeren.net>:
> 2016-05-24 21:47 GMT+03:00 Amos Jeffries <squ...@treenet.co.nz>:
>> On 25/05/2016 5:50 a.m., Deniz Eren wrote:
>>> Hi,
>>>
>>> After upgrading to squid 3.5.16 I realized that squid started using
>>> much of kernel's TCP memory.
>>
>> Upgrade from which version?
>>
> Upgrading from squid 3.1.14. I started using c-icap and ssl-bump.
>
>>>
>>> When squid has been running for a long time, TCP memory usage looks like this:
>>> test@test:~$ cat /proc/net/sockstat
>>> sockets: used *
>>> TCP: inuse * orphan * tw * alloc * mem 20
>>> UDP: inuse * mem *
>>> UDPLITE: inuse *
>>> RAW: inuse *
>>> FRAG: inuse * memory *
>>>
>>> When I restart squid the memory usage drops dramatically:
>>
>> Of course it does. By restarting you just erased all of the operational
>> state for an unknown but large number of active network connections.
>>
> That's true, but what I meant was that squid's CLOSE_WAIT connections are
> using too much memory and are not timing out.
>
>> Whether many of those should have been still active or not is a
>> different question, the answer to which depends on how you have your
>> Squid configured, and what the traffic through it has been doing.
>>
>>
>>> test@test:~$ cat /proc/net/sockstat
>>> sockets: used *
>>> TCP: inuse * orphan * tw * alloc * mem 10
>>> UDP: inuse * mem *
>>> UDPLITE: inuse *
>>> RAW: inuse *
>>> FRAG: inuse * memory *
>>>
>>
>> The numbers you replaced with "*" are rather important for context.
>>
>>
> Today again I saw the problem:
>
> test@test:~$ cat /proc/net/sockstat
> sockets: used 1304
> TCP: inuse 876 orphan 81 tw 17 alloc 906 mem 29726
> UDP: inuse 17 mem 8
> UDPLITE: inuse 0
> RAW: inuse 1
> FRAG: inuse 0 memory 0
>
>>> I'm using Squid 3.5.16.
>>>
>>
>> Please upgrade to 3.5.19. Some important issues have been resolved. Some
>> of them may be related to your TCP memory problem.
>>
>>
> I have upgraded now and problem still exists.
>
>>> When I look with "netstat" and "ss" I see lots of CLOSE_WAIT
>>> connections from squid to ICAP or from squid to upstream server.
>>>
>>> Do you have any idea about this problem?
>>
>> Memory use by the TCP system of your kernel has very little to do with
>> Squid. Number of sockets in CLOSE_WAIT does have some relation to Squid
>> or at least to how the traffic going through it is handled.
>>
>> If you have disabled persistent connections in squid.conf then lots of
>> closed sockets and FD are to be expected.
>>
>> If you have persistent connections enabled, then fewer closures should
>> happen. But some will, so expectations depend on how high the traffic
>> load is.
>>
> Persistent connection parameters are enabled in my conf; the problem
> occurs especially with connections to the c-icap service.
>
> My netstat output is like this:
> netstat -tulnap|grep squid|grep CLOSE
>
> tcp   211742      0 127.0.0.1:55751         127.0.0.1:1344          CLOSE_WAIT  17076/(squid-1)
> tcp   215700      0 127.0.0.1:55679         127.0.0.1:1344          CLOSE_WAIT  17076/(squid-1)
> tcp   215704      0 127.0.0.1:55683         127.0.0.1:1344          CLOSE_WAIT  17076/(squid-1)
> ...(hundreds)
> The above are connections to the c-icap service.
>
> netstat -tulnap|grep squid|grep CLOSE
> Active Internet connections (servers and established)
> Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
> 

Re: [squid-users] Excessive TCP memory usage

2016-05-25 Thread Deniz Eren
2016-05-24 21:47 GMT+03:00 Amos Jeffries <squ...@treenet.co.nz>:
> On 25/05/2016 5:50 a.m., Deniz Eren wrote:
>> Hi,
>>
>> After upgrading to squid 3.5.16 I realized that squid started using
>> much of kernel's TCP memory.
>
> Upgrade from which version?
>
Upgrading from squid 3.1.14. I started using c-icap and ssl-bump.

>>
>> When squid has been running for a long time, TCP memory usage looks like this:
>> test@test:~$ cat /proc/net/sockstat
>> sockets: used *
>> TCP: inuse * orphan * tw * alloc * mem 20
>> UDP: inuse * mem *
>> UDPLITE: inuse *
>> RAW: inuse *
>> FRAG: inuse * memory *
>>
>> When I restart squid the memory usage drops dramatically:
>
> Of course it does. By restarting you just erased all of the operational
> state for an unknown but large number of active network connections.
>
That's true, but what I meant was that squid's CLOSE_WAIT connections are
using too much memory and are not timing out.

> Whether many of those should have been still active or not is a
> different question, the answer to which depends on how you have your
> Squid configured, and what the traffic through it has been doing.
>
>
>> test@test:~$ cat /proc/net/sockstat
>> sockets: used *
>> TCP: inuse * orphan * tw * alloc * mem 10
>> UDP: inuse * mem *
>> UDPLITE: inuse *
>> RAW: inuse *
>> FRAG: inuse * memory *
>>
>
> The numbers you replaced with "*" are rather important for context.
>
>
Today again I saw the problem:

test@test:~$ cat /proc/net/sockstat
sockets: used 1304
TCP: inuse 876 orphan 81 tw 17 alloc 906 mem 29726
UDP: inuse 17 mem 8
UDPLITE: inuse 0
RAW: inuse 1
FRAG: inuse 0 memory 0
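
For scale, the "mem 29726" figure is in pages (typically 4 KiB each), so it is
roughly 120 MB of TCP buffer memory. The kernel thresholds it is compared
against are in pages as well:

grep '^TCP:' /proc/net/sockstat
cat /proc/sys/net/ipv4/tcp_mem   # low / pressure / high, in pages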

>> I'm using Squid 3.5.16.
>>
>
> Please upgrade to 3.5.19. Some important issues have been resolved. Some
> of them may be related to your TCP memory problem.
>
>
I have upgraded now and problem still exists.

>> When I look with "netstat" and "ss" I see lots of CLOSE_WAIT
>> connections from squid to ICAP or from squid to upstream server.
>>
>> Do you have any idea about this problem?
>
> Memory use by the TCP system of your kernel has very little to do with
> Squid. Number of sockets in CLOSE_WAIT does have some relation to Squid
> or at least to how the traffic going through it is handled.
>
> If you have disabled persistent connections in squid.conf then lots of
> closed sockets and FD are to be expected.
>
> If you have persistent connections enabled, then fewer closures should
> happen. But some will, so expectations depend on how high the traffic
> load is.
>
Persistent connection parameters are enabled in my conf; the problem
occurs especially with connections to the c-icap service.
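
For reference, the directives I mean, with what the documentation lists as
their defaults:

client_persistent_connections on
server_persistent_connections on
icap_persistent_connections on
pconn_timeout 1 minute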

My netstat output is like this:
netstat -tulnap|grep squid|grep CLOSE

tcp   211742      0 127.0.0.1:55751         127.0.0.1:1344          CLOSE_WAIT  17076/(squid-1)
tcp   215700      0 127.0.0.1:55679         127.0.0.1:1344          CLOSE_WAIT  17076/(squid-1)
tcp   215704      0 127.0.0.1:55683         127.0.0.1:1344          CLOSE_WAIT  17076/(squid-1)
...(hundreds)
The above are connections to the c-icap service.

netstat -tulnap|grep squid|grep CLOSE
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        1      0 192.168.2.1:8443        192.168.6.180:45182     CLOSE_WAIT  15245/(squid-1)
tcp        1      0 192.168.2.1:8443        192.168.2.177:50020     CLOSE_WAIT  15245/(squid-1)
tcp        1      0 192.168.2.1:8443        192.168.2.172:60028     CLOSE_WAIT  15245/(squid-1)
tcp        1      0 192.168.2.1:8443        192.168.6.180:44049     CLOSE_WAIT  15245/(squid-1)
tcp        1      0 192.168.2.1:8443        192.168.6.180:55054     CLOSE_WAIT  15245/(squid-1)
tcp        1      0 192.168.2.1:8443        192.168.2.137:52177     CLOSE_WAIT  15245/(squid-1)
tcp        1      0 192.168.2.1:8443        192.168.6.180:43542     CLOSE_WAIT  15245/(squid-1)
tcp        1      0 192.168.2.1:8443        192.168.6.155:39489     CLOSE_WAIT  15245/(squid-1)
tcp        1      0 192.168.2.1:8443        192.168.0.147:38939     CLOSE_WAIT  15245/(squid-1)
tcp        1      0 192.168.2.1:8443        192.168.6.180:38754     CLOSE_WAIT  15245/(squid-1)
tcp        1      0 192.168.2.1:8443        192.168.0.164:39602     CLOSE_WAIT  15245/(squid-1)
tcp        1      0 192.168.2.1:8443        192.168.0.147:54114     CLOSE_WAIT  15245/(squid-1)
tcp        1      0 192.168.2.1:8443        192.168.6.180:57857     CLOSE_WAIT  15245/(squid-1)
tcp        1      0 192.168.2.1:8443        192.168.0.156:43482     CLOSE_WAIT  15245/(squid-1)
...(about 50)
The above are connections from clients to the https_port (8443).

Re: [squid-users] Squid SMP workers crash

2016-10-19 Thread Deniz Eren
On 10/18/16, Alex Rousskov <rouss...@measurement-factory.com> wrote:
> On 10/17/2016 10:37 PM, Deniz Eren wrote:
>> On Mon, Oct 17, 2016 at 7:43 PM, Alex Rousskov wrote:
>>> On 10/17/2016 02:38 AM, Deniz Eren wrote:
>>>> 2016/10/17 11:22:37 kid1| assertion failed:
>>>> ../../src/ipc/AtomicWord.h:71: "Enabled()"
>>>
>>> Either your Squid does not support SMP (a build environment problem) or
>>> Squid is trying to use SMP features when SMP is not enabled (a Squid
>>> bug).
>>>
>>> What does the following command show?
>>>
>>>   fgrep -RI HAVE_ATOMIC_OPS config.status include/autoconf.h
>> fgrep -RI HAVE_ATOMIC_OPS config.status include/autoconf.h
>> config.status:D["HAVE_ATOMIC_OPS"]=" 0"
>> include/autoconf.h:#define HAVE_ATOMIC_OPS 0
>
> Your Squid does not support SMP. The ./configure script failed to find
> the necessary APIs for SMP support. I wish Squid would tell you that in
> a less obscure way than an Enabled() assertion; feel free to file a bug
> report about that, but that is a reporting/UI problem; the assertion
> itself is correct.
Yes, you are right. Inspecting more carefully I saw that "Atomic"
support is missing.

>
> I do not know why your build environment lacks atomics support (or why
> Squid cannot detect that support), but I hope that others on the mailing
> list would be able to help you with that investigation.
Fixing the system include paths solved the problem. Thanks for pointing out
what the problem was.

>
>
> Finally, in the interest of full disclosure, I have to note that, IIRC,
> atomics are not actually required for some of the primitive SMP
> features, but Squid attempts to create a few shared memory tables even
> when those tables are not needed, and those tables do require atomics
> (and will hit the Enabled() assertion you have reported).
>
> There have been improvements in this area; eventually no unnecessary
> shared memory tables will be created, but it is probably easier for you
> to get a build with working atomics (usually does not require any
> development) than to get rid of those tables (which probably require
> more development).
>
> Alex.
>
>


Re: [squid-users] Squid SMP workers crash

2016-10-17 Thread Deniz Eren
On Sun, Oct 16, 2016 at 2:57 AM, Eliezer Croitoru <elie...@ngtech.co.il> wrote:
> Hey,
>
> I can try to replicate the same configuration, removing a couple of settings
> just to make it simpler, to verify the issue, since it's similar to the next
> testing lab I have planned.
> Can you give more detail about the OS? CentOS, Ubuntu, Other?
CentOS 5

> If it's a self-compiled version, then the "squid -v" output.
Squid Cache: Version 3.5.20
Service Name: squid
configure options:  '--build=i686-redhat-linux-gnu'
'--host=i686-redhat-linux-gnu' '--target=i386-redhat-linux-gnu'
'--program-prefix=' '--exec-prefix=/opt/squid'
'--datadir=/opt/squid/share' '--libdir=/opt/squid/lib'
'--libexecdir=/opt/squid/libexec' '--localstatedir=/var'
'--sharedstatedir=/opt/squid/com' '--infodir=/usr/share/info'
'--prefix=/opt/squid' '--exec_prefix=/opt/squid'
'--bindir=/opt/squid/bin' '--sbindir=/opt/squid/sbin'
'--sysconfdir=/opt/squid/etc' '--datadir=/opt/squid/share/squid'
'--includedir=/opt/squid/include' '--libdir=/opt/squid/lib/squid'
'--libexecdir=/opt/squid/lib/squid' '--localstatedir=/opt/squid/var'
'--mandir=/opt/squid/share/man' '--infodir=/opt/squid/share/info'
'--enable-epoll' '--disable-dependency-tracking' '--enable-arp-acl'
'--enable-auth' '--enable-auth-negotiate' '--enable-auth-digest'
'--enable-auth-basic' '--enable-auth-ntlm' '--enable-cache-digests'
'--enable-cachemgr-hostname=localhost' '--enable-delay-pools'
'--enable-external-acl-helpers' '--enable-icap-client'
'--with-large-files' '--enable-linux-netfilter' '--enable-referer-log'
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
'--enable-storeio=aufs,diskd,ufs' '--enable-useragent-log'
'--enable-wccpv2' '--with-aio' '--with-default-user=squid'
'--with-filedescriptors=32768' '--with-dl' '--enable-ssl-crtd'
'--with-openssl=/opt/openssl101' '--with-pthreads'
'--enable-http-violations' '--enable-follow-x-forwarded-for'
'--disable-ipv6' 'build_alias=i686-redhat-linux-gnu'
'host_alias=i686-redhat-linux-gnu'
'target_alias=i386-redhat-linux-gnu' 'CFLAGS=-fPIE -Os -g -pipe
-fsigned-char -I /usr/kerberos/include -I/opt/openssl101/include -O2
-g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic
-fasynchronous-unwind-tables' 'LDFLAGS=-pie -L/opt/openssl101/lib'
'CXXFLAGS=-fPIE -I/opt/openssl101/include -O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic
-fasynchronous-unwind-tables'
'PKG_CONFIG_PATH=/opt/squid/lib/pkgconfig:/opt/squid/share/pkgconfig'
--enable-ltdl-convenience


> I have also seen that you are intercepting both http and https traffic, have 
> you tried looking at the logs?
You are right, I'm intercepting both HTTP and HTTPS traffic. Yes, I have
looked at the logs and the only suspicious thing is this line:
2016/10/17 11:22:37 kid1| assertion failed:
../../src/ipc/AtomicWord.h:71: "Enabled()"

>
> If you don't hear from me fast enough, just bump me with an email.
>
> Eliezer
>
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile+WhatsApp: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Deniz Eren
> Sent: Thursday, October 13, 2016 10:53 AM
> To: squid-users@lists.squid-cache.org
> Subject: [squid-users] Squid SMP workers crash
>
> Hi,
>
> I'm using squid's SMP functionality to distribute requests to many
> squid instances and distribute workload to multiple processors.
> However, after running for a while the worker processes
> crash with the error below and the coordinator does not start them again:
> ...
> FATAL: Ipc::Mem::Segment::open failed to
> shm_open(/squid-cf__metadata.shm): (2) No such file or directory
> Squid Cache (Version 3.5.20): Terminated abnormally.
> ...
>
> Does a solution exist for this problem? (Permissions are OK in /dev/shm.)
>
>
> When everything is OK the coordinator listens on http_port/https_port and
> distributes connections to the workers (at least that's the conclusion I
> got from looking at access.log).
> [root@squidbox ~]# netstat -nlp|grep squid
> tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      7887/(squid-coord-1
> tcp        0      0 0.0.0.0:3127            0.0.0.0:*               LISTEN      7887/(squid-coord-1
> tcp        0      0 0.0.0.0:3128            0.0.0.0:*               LISTEN      7887/(squid-coord-1
> tcp        0      0 0.0.0.0:3130            0.0.0.0:*               LISTEN      7887/(squid-coord-1
> tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      7887/(squid-coord-1
> udp        0      0 0.0.0.0:57850           0.0.0.0:*                           7897/(squid-1)
> u

Re: [squid-users] Squid SMP workers crash

2016-10-17 Thread Deniz Eren
On Fri, Oct 14, 2016 at 1:50 AM, Alex Rousskov
<rouss...@measurement-factory.com> wrote:
> On 10/13/2016 01:53 AM, Deniz Eren wrote:
>
>> I'm using squid's SMP functionality to distribute requests to many
>> squid instances and distribute workload to multiple processors.
>> However, after running for a while the worker processes
>> crash with the error below and the coordinator does not start them again:
>> ...
>> FATAL: Ipc::Mem::Segment::open failed to
>> shm_open(/squid-cf__metadata.shm): (2) No such file or directory
>> Squid Cache (Version 3.5.20): Terminated abnormally.
>> ...
>
> Are you saying that this fatal shm_open() error happens after all
> workers have started serving/logging traffic?
Yes, they are serving.

> I would expect to see it
> at startup (first few minutes at the most if you have IPC timeout
> problems).
Both happen. Sometimes it crashes after seconds, but most of the time
it takes 5-10 minutes.


> Does the error always point to squid-cf__metadata.shm?
This error is solved, but the error below still happens.
2016/10/17 11:22:37 kid1| assertion failed:
../../src/ipc/AtomicWord.h:71: "Enabled()"

>
> Are you sure that there are no other fatal errors, segmentation faults,
> or similar deathly problems _before_ this error?
> Are you sure your
> startup script does not accidentally start multiple Squid instances that
> compete with each other?
You were right, there was a problem with the startup script. I'm now
starting with "squid -f /conf/file/path/conffile.conf". However, there
is a new problem, shown below.
2016/10/17 11:22:37 kid1| assertion failed:
../../src/ipc/AtomicWord.h:71: "Enabled()"

Because of this error the workers crash a couple of times and after that
the coordinator gives up creating workers.

> Check system error logs.
>
> FWIW, Segment::open errors without Segment::create errors are often a
> side-effect of other problems that either prevent Squid from creating
> segments or force Squid to remove created segments (both happen in the
> master process).
>
>
>> permissions are OK in /dev/shm
>
> Do you see any Squid segments there (with reasonable timestamps)?
>
>
>> Also, is my way of using the SMP functionality correct, given that I want to
>> distribute all connections between workers and to listen only on specific
>> ports?
>
> Adding "workers N" and avoiding SMP-incompatible features is the right
> way; I do not see any SMP-related problems in your configuration.
>
> Alex.
>


Re: [squid-users] Squid SMP workers crash

2016-10-17 Thread Deniz Eren
On Mon, Oct 17, 2016 at 7:43 PM, Alex Rousskov
<rouss...@measurement-factory.com> wrote:
> On 10/17/2016 02:38 AM, Deniz Eren wrote:
>> 2016/10/17 11:22:37 kid1| assertion failed:
>> ../../src/ipc/AtomicWord.h:71: "Enabled()"
>
> Either your Squid does not support SMP (a build environment problem) or
> Squid is trying to use SMP features when SMP is not enabled (a Squid bug).
>
> What does the following command show?
>
>   fgrep -RI HAVE_ATOMIC_OPS config.status include/autoconf.h
fgrep -RI HAVE_ATOMIC_OPS config.status include/autoconf.h
config.status:D["HAVE_ATOMIC_OPS"]=" 0"
include/autoconf.h:#define HAVE_ATOMIC_OPS 0
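
For what it's worth, a crude standalone probe of whether the toolchain can
produce atomics at all, separate from Squid's own configure test (this is only
a rough sketch, not the actual check ./configure runs):

cat > atomic-probe.cc <<'EOF'
// rough probe: does std::atomic work with this compiler/libstdc++?
#include <atomic>
int main() {
    std::atomic<int> n(0);
    return (++n == 1) ? 0 : 1;
}
EOF
g++ -std=c++11 atomic-probe.cc -o atomic-probe && ./atomic-probe && echo atomics OK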

>
> (adjust filename paths as needed).
>
> Alex.
>


[squid-users] Squid SMP workers crash

2016-10-13 Thread Deniz Eren
Hi,

I'm using squid's SMP functionality to distribute requests to many
squid instances and distribute workload to multiple processors.
However, after running for a while the worker processes
crash with the error below and the coordinator does not start them again:
...
FATAL: Ipc::Mem::Segment::open failed to
shm_open(/squid-cf__metadata.shm): (2) No such file or directory
Squid Cache (Version 3.5.20): Terminated abnormally.
...

Does a solution exist for this problem? (Permissions are OK in /dev/shm.)
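
(By "permissions are OK" I mean checks along these lines; the segment name is
the one from the error message:)

mount | grep /dev/shm              # tmpfs is mounted
ls -l /dev/shm/squid-*.shm         # squid's segments, e.g. squid-cf__metadata.shm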


When everything is OK the coordinator listens on http_port/https_port and
distributes connections to the workers (at least that's the conclusion I
got from looking at access.log).
[root@squidbox ~]# netstat -nlp|grep squid
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      7887/(squid-coord-1
tcp        0      0 0.0.0.0:3127            0.0.0.0:*               LISTEN      7887/(squid-coord-1
tcp        0      0 0.0.0.0:3128            0.0.0.0:*               LISTEN      7887/(squid-coord-1
tcp        0      0 0.0.0.0:3130            0.0.0.0:*               LISTEN      7887/(squid-coord-1
tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      7887/(squid-coord-1
udp        0      0 0.0.0.0:57850           0.0.0.0:*                           7897/(squid-1)
udp        0      0 0.0.0.0:33643           0.0.0.0:*                           7894/(squid-4)
udp        0      0 0.0.0.0:50485           0.0.0.0:*                           7896/(squid-2)
udp        0      0 0.0.0.0:46427           0.0.0.0:*                           7887/(squid-coord-1
udp        0      0 0.0.0.0:58938           0.0.0.0:*                           7895/(squid-3)


Also, is my way of using the SMP functionality correct, given that I want to
distribute all connections between workers and to listen only on specific
ports?
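
For reference, the SMP part of such a setup is basically just the sketch below
(cpu_affinity_map is optional and the numbers are illustrative; my full
squid.conf is attached), with the http_port/https_port lines shared by all
workers rather than defined per worker:

workers 4
# optionally pin each worker to its own core
cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4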

I have attached the squid.conf.

Regards,


squid.conf
Description: Binary data


[squid-users] Squid 4.0.19 SSLBump Crashes

2017-05-10 Thread Deniz Eren
Hi,

I'm testing squid-4.0.19-20170508-r15031. When I enable ssl-bump
in intercept mode, after a couple of SSL requests squid crashes in the
"Parser::BinaryTokenizer::want(unsigned long long, char const*) const
()" function.

OS: CentOS 5
OpenSSL: 1.0.1e-51
g++: 4.8.2-15

I have attached part of the debug log, the core stack trace and squid.conf.
(I have migrated from 3.5, so there might be incorrect parts in my
squid.conf.)

Is something wrong with my compilation or squid.conf? How can I
debug this issue?
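
For the record, the trace below came from gdb on the core file, along these
lines (the paths are examples; the binary needs debug symbols for more useful
frames):

gdb /opt/squid/sbin/squid /path/to/core
(gdb) bt
(gdb) thread apply all bt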

Regards,
(gdb) bt
#0  0xf6f9fc80 in __kernel_vsyscall ()
#1  0xf6992b10 in raise () from /lib/libc.so.6
#2  0xf6994421 in abort () from /lib/libc.so.6
#3  0xf6bb2ab0 in __gnu_cxx::__verbose_terminate_handler() () from 
/usr/lib/libstdc++.so.6
#4  0xf6bb0515 in __gxx_personality_v0 () from /usr/lib/libstdc++.so.6
#5  0xf6bb0552 in __gxx_personality_v0 () from /usr/lib/libstdc++.so.6
#6  0xf6bb068a in __cxa_rethrow () from /usr/lib/libstdc++.so.6
#7  0xf7443830 in Parser::BinaryTokenizer::want(unsigned long long, char 
const*) const ()
#8  0xf744571d in Parser::BinaryTokenizer::area(unsigned long long, char 
const*) ()
#9  0xf7445915 in Parser::BinaryTokenizer::pstring16(char const*) ()
#10 0xf73c8238 in 
Security::TLSPlaintext::TLSPlaintext(Parser::BinaryTokenizer&) ()
#11 0xf73c9fa9 in Security::HandshakeParser::parseModernRecord() ()
#12 0xf73ca70d in Security::HandshakeParser::parseRecord() ()
#13 0xf73ca780 in Security::HandshakeParser::parseHello(SBuf const&) ()
#14 0xf73e158c in Ssl::ServerBio::readAndParse(char*, int, bio_st*) ()
#15 0xf73e195a in Ssl::ServerBio::read(char*, int, bio_st*) ()
#16 0xf73de898 in ?? ()
#17 0xf6dd7271 in BIO_read () from /lib/libcrypto.so.10
#18 0xf6f0b98b in ssl23_read_bytes () from /lib/libssl.so.10
#19 0xf6f0a902 in ssl23_connect () from /lib/libssl.so.10
#20 0xf6f1e09a in SSL_connect () from /lib/libssl.so.10
#21 0xf73d1f4d in Security::PeerConnector::negotiate() ()
#22 0xf73d4735 in NullaryMemFunT::doDial() ()
#23 0xf73d510f in JobDialer::dial(AsyncCall&) ()
#24 0xf73d52d2 in AsyncCallT::fire() 
()
#25 0xf73615fb in AsyncCall::make() ()
#26 0xf736616c in AsyncCallQueue::fireNext() ()
#27 0xf7366568 in AsyncCallQueue::fire() ()
#28 0xf7185114 in EventLoop::runOnce() ()
#29 0xf7185228 in EventLoop::run() ()
#30 0xf71fc9f9 in SquidMain(int, char**) ()
#31 0xf70ce209 in main ()
2017/05/10 16:07:57.917 kid1| 5,8| ModEpoll.cc(266) DoSelect: got FD 23 
events=4 monitoring=1c F->read_handler=0 F->write_handler=1
2017/05/10 16:07:57.917 kid1| 5,8| ModEpoll.cc(288) DoSelect: Calling write 
handler on FD 23
2017/05/10 16:07:57.917 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0xf959c078
2017/05/10 16:07:57.917 kid1| 45,9| cbdata.cc(351) cbdataInternalLock: 
0xf959c078=6
2017/05/10 16:07:57.917 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0xf959c078
2017/05/10 16:07:57.917 kid1| 45,9| cbdata.cc(351) cbdataInternalLock: 
0xf959c078=7
2017/05/10 16:07:57.917 kid1| 5,4| AsyncCall.cc(26) AsyncCall: The AsyncCall 
Comm::ConnOpener::doConnect constructed, this=0xf9791d88 [call1160]
2017/05/10 16:07:57.917 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0xf959c078
2017/05/10 16:07:57.917 kid1| 45,9| cbdata.cc(351) cbdataInternalLock: 
0xf959c078=8
2017/05/10 16:07:57.917 kid1| 45,9| cbdata.cc(383) cbdataInternalUnlock: 
0xf959c078=7
2017/05/10 16:07:57.917 kid1| 45,9| cbdata.cc(383) cbdataInternalUnlock: 
0xf959c078=6
2017/05/10 16:07:57.917 kid1| 5,4| AsyncCall.cc(93) ScheduleCall: 
ConnOpener.cc(463) will call Comm::ConnOpener::doConnect() [call1160]
2017/05/10 16:07:57.918 kid1| 45,9| cbdata.cc(383) cbdataInternalUnlock: 
0xf959c078=5
2017/05/10 16:07:57.918 kid1| 5,4| AsyncCallQueue.cc(55) fireNext: entering 
Comm::ConnOpener::doConnect()
2017/05/10 16:07:57.918 kid1| 5,4| AsyncCall.cc(38) make: make call 
Comm::ConnOpener::doConnect [call1160]
2017/05/10 16:07:57.918 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0xf959c078
2017/05/10 16:07:57.918 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0xf959c078
2017/05/10 16:07:57.918 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0xf959c078
2017/05/10 16:07:57.918 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0xf959c078
2017/05/10 16:07:57.918 kid1| 5,4| AsyncJob.cc(123) callStart: Comm::ConnOpener 
status in: [ job139]
2017/05/10 16:07:57.918 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0xf959c078
2017/05/10 16:07:57.918 kid1| 5,9| comm.cc(608) comm_connect_addr: connecting 
socket FD 23 to 192.229.233.50:443 (want family: 2)
2017/05/10 16:07:57.918 kid1| 5,9| comm.cc(714) comm_connect_addr: 
comm_connect_addr: FD 23 connected to 192.229.233.50:443
2017/05/10 16:07:57.918 kid1| 5,5| ConnOpener.cc(350) doConnect: local=0.0.0.0 
remote=192.229.233.50:443 flags=1: Comm::OK - connected
2017/05/10 16:07:57.918 kid1| 5,4| ConnOpener.cc(155) cleanFd: local=0.0.0.0 
remote=192.229.233.50:443 flags=1 closing temp FD 23
2017/05/10 16:07:57.918 kid1| 5,5| ModEpoll.cc(117) SetSelect: FD 23, type=2,