Re: [squid-users] Happy Eyeballs and connect_timeout in squid 3.4.12

2015-04-30 Thread Amos Jeffries
On 30/04/2015 7:05 a.m., Tom Tom wrote:
 Thank you Amos, for this explanation.
 
 
 On Wed, Apr 29, 2015 at 3:02 PM, Amos Jeffries wrote:
 On 29/04/2015 7:38 p.m., Tom Tom wrote:
 Hi

 I'm running squid (3.4.12) on an IPv6/IPv4 dual-stack system.

 While accessing the test-site http://test.rx.td.h.labs.apnic.net, I
 encountered a 60s connection timeout (configurable with
 connect_timeout) while squid makes 5 IPv6 connection attempts
 (SYN) before it tries to connect with IPv4 (which works on the
 test-site). I can decrease the connect_timeout value to 1 second.
 This results in a better surfing experience and a 1s timeout
 (also only 1 IPv6 SYN) instead of the default 60s timeout.

 Why does squid not try IPv6 first (based on the host's
 address-preference policy) and then, in case of a failure, switch to
 IPv4 after a 300ms timeout (like current browsers do)?

 Several reasons:

 1) The default builds and installs do try IPv6 first in accordance with
 RFC 6540. Check your config for a dns_v4_first directive which forces
 IPv4 to be tried first.
 
 According to RFC 6555: "Over time, as most content is available via
 IPv6, the amount of IPv4 traffic will decrease." By forcing this
 directive, I reduce the chance of connecting with IPv6 and my
 outbound connections will probably stay on IPv4 for a long time. This
 is maybe not the behaviour we want?

If you want to encourage IPv6 usage, let Squid operate at its default
behaviour and fix the issues in the network which make any given request
go particularly worse than it would over IPv4. Sadly many of these are
caused by external sysadmins' choices nowadays, either to run with
outdated machinery or to explicitly break IPv6 in the name of disabling it.

As a proponent of IPv6 adoption myself, I wrote the IPv6 behaviours
into Squid to prefer IPv6 over IPv4 whenever possible, long before
RFC 6540 required it.
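
For anyone checking their own config: the directive in question looks
like this in squid.conf (shown purely as an illustration; remove it, or
set it to "off", to keep the default IPv6-first ordering):

  # forces IPv4-located results to be tried before IPv6 ones
  dns_v4_first on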



 2) Squid is not the OS built-in resolver. Any obeying of that policy by
 Squid is purely arbitrary. The host system's DNS resolver policy is not
 supposed to affect standalone resolvers such as Squid's internal one,
 particularly when there are squid.conf directives overriding the
 resolv.conf behaviour (eg. dns_nameservers).

 dns_v4_first was a partial implementation added for
 http://bugs.squid-cache.org/show_bug.cgi?id=3086.


 3) when performed by middleware such as Squid the Happy Eyeballs
 algorithm is heavily destructive.

 A browser is consuming at minimum 2 network sockets to perform Happy
 Eyeballs.

 At the middleware each of those translates to potentially 3 sockets
 (total 6 proxy sockets, 2 outgoing server sockets). If the middleware
 were to perform Happy Eyeballs itself that would increase to 4 sockets
 (total 8 proxy sockets, 4 outgoing server sockets).
 
 But only in the parallel way (1 x IPv6 and 1 x IPv4)?

No, multiplexed. Each hop has both IPv4 and IPv6 outbound possibilities
for each individual packet regardless of the inbound type. Happy
Eyeballs' worst case is straight exponential 2^N socket usage at the
server, where N is the proxy hop distance from the client. To make the
arithmetic concrete: a client racing 2 sockets behind two such proxies
would produce 4 attempts leaving the first proxy and 2^3 = 8 reaching
the origin server, for one request. Best-case occurs when an admin
chooses to disable IPv4 or IPv6 and cuts the exponential growth from
their hop in half.

So the algorithm happening in middleware would actively encourage bad
network practices by sysadmins. Grr :-(

 
 Tests with the current curl (I know, curl != squid) show it does not
 make two similar (parallel) TCP connections (1x SYN for IPv6 and 1x
 SYN for IPv4). Instead, curl tries IPv6 first and, in case of a
 connection error, tries again after a few milliseconds with IPv4. This
 way, not a lot of sockets should be consumed. Does this not behave
 like a native IPv4 stack? Squid would behave like curl if I were
 able to change the connect_timeout to milliseconds.

That sequential operation is already the current behaviour of Squid,
just with a resolution of seconds on the timeout and a configurable
choice between the orderings {IPv6,IPv4} and {IPv4,IPv6} as to what
gets tried first.
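
In squid.conf terms the workaround being discussed is just the
following (Tom's 1-second value, shown for illustration, not as a
recommendation):

  # give up on each unresponsive address after 1 second and move on
  # to the next resolved address (IPv6 first, then IPv4 by default)
  connect_timeout 1 second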

The browser Happy Eyeballs algorithm you were talking about /
proposing works quite differently. DNS queries for all A and AAAA
records are generated at once; as the replies come in, TCP SYN packets
are generated for all of the listed IPs and all of them are sent. It
ends when one TCP SYN packet gets a success response. End of spec.
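
As a rough sketch of that race (not Squid code, and simplified to the
"all SYNs at once" essence described above; plain POSIX sockets, and
SOCK_NONBLOCK is Linux-specific):

  #include <netdb.h>
  #include <sys/socket.h>
  #include <sys/select.h>
  #include <unistd.h>
  #include <vector>

  int happyEyeballsConnect(const char *host, const char *port)
  {
      // Resolve A and AAAA together (getaddrinfo returns both families).
      struct addrinfo hints = {}, *res = nullptr;
      hints.ai_family = AF_UNSPEC;
      hints.ai_socktype = SOCK_STREAM;
      if (getaddrinfo(host, port, &hints, &res) != 0)
          return -1;

      // Fire a non-blocking SYN at every listed IP simultaneously.
      std::vector<int> pending;
      for (struct addrinfo *ai = res; ai; ai = ai->ai_next) {
          int fd = socket(ai->ai_family, SOCK_STREAM | SOCK_NONBLOCK, 0);
          if (fd < 0)
              continue;
          (void)connect(fd, ai->ai_addr, ai->ai_addrlen); // EINPROGRESS expected
          pending.push_back(fd);
      }
      freeaddrinfo(res);

      // First SYN to get a success response wins; everything else closes.
      int winner = -1;
      while (winner < 0 && !pending.empty()) {
          fd_set wfds;
          FD_ZERO(&wfds);
          int maxfd = -1;
          for (int fd : pending) {
              FD_SET(fd, &wfds);
              if (fd > maxfd)
                  maxfd = fd;
          }
          if (select(maxfd + 1, nullptr, &wfds, nullptr, nullptr) <= 0)
              break;
          for (size_t i = 0; i < pending.size(); ) {
              int fd = pending[i];
              if (!FD_ISSET(fd, &wfds)) {
                  ++i;
                  continue;
              }
              int err = 0;
              socklen_t len = sizeof(err);
              getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
              pending.erase(pending.begin() + i);
              if (err == 0)
                  winner = fd;   // connected; stop the race
              else
                  close(fd);     // this attempt failed
          }
      }
      for (int fd : pending)
          close(fd);             // losers of the race
      return winner;             // -1 when every address failed
  }

Note how every resolved address costs a socket up-front; that is the
multiplication effect being objected to below.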

The performance numbers that come out of it are great ... for a single
point-to-point connection.
It looks less wonderful and more like a DoS attack in all other cases,
especially when taking server-side resource effects into account.

 
 Is there a well-known restriction (which I don't know of, actually) to
 setting the connect_timeout to 1 second for all those IPv6 addresses
 which aren't connectable, so that the IPv4 stack is used after the 1s
 timeout? Is this a practicable way?

There are two restrictions:

 1) the OS unix time (time_t) has a minimum resolution of 1 second.
Anything using more detailed time resolution is 

[squid-users] Assert(call->dialer.handler == callback)

2015-04-30 Thread Steve Hill


I've just migrated a system from Squid 3.4.10 to 3.5.3 and I'm getting 
frequent crashes with an assertion of call->dialer.handler == callback 
in Read.cc:comm_read_cancel().


call->dialer.handler == (IOCB *) 0x7ffe1493b2d0 
<TunnelStateData::ReadClient(Comm::ConnectionPointer const&, char*, 
size_t, Comm::Flag, int, void*)>


callback == <IdleConnList::Read(Comm::ConnectionPointer const&, char*, 
size_t, Comm::Flag, int, void*)>



This is quite a busy system doing server-first ssl_bump and I get a lot 
of SSL negotiation errors in cache.log (these were present under 3.4.10 
too).  I think a good chunk of these are Team Viewer, which abuses 
CONNECTs to port 443 of remote servers to do non-SSL traffic, so 
obviously isn't going to work with ssl_bump.  I _suspect_ that the 
assertion may be being triggered by these SSL errors (e.g. connection 
being unexpectedly torn down because SSL negotiation failed?), but I 
can't easily prove that.


I don't quite understand the comm_read_cancel() function though - as far 
as I can see, the callback parameter is only used in the assert() - is 
that correct?
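
(For what it's worth, the shape the assertion implies is roughly the
following; this is paraphrased from the assert text above, not copied
verbatim from Read.cc:)

  // comm_read_cancel(fd, callback, data): the caller passes the IOCB it
  // believes is registered for reads on this FD; the assert fires when
  // the read callback actually stored on the FD belongs to someone else
  // -- here TunnelStateData::ReadClient vs IdleConnList::Read.
  assert(call->dialer.handler == callback);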



Stack trace:
#0  0x7ffe1155d625 in raise (sig=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:64

#1  0x7ffe1155ee05 in abort () at abort.c:92
#2  0x7ffe148210df in xassert (msg=Unhandled dwarf expression opcode 
0xf3

) at debug.cc:544
#3  0x7ffe14a62787 in comm_read_cancel (fd=600, 
callback=0x7ffe148c8dd0 <IdleConnList::Read(Comm::ConnectionPointer 
const&, char*, size_t, Comm::Flag, int, void*)>,

data=0x7ffe176c8298) at Read.cc:204
#4  0x7ffe148c5e62 in IdleConnList::clearHandlers 
(this=0x7ffe176c8298, conn=...) at pconn.cc:157
#5  0x7ffe148c94ab in IdleConnList::findUseable 
(this=0x7ffe176c8298, key=...) at pconn.cc:269
#6  0x7ffe148c979d in PconnPool::pop (this=0x7ffe145db010, dest=..., 
domain=Unhandled dwarf expression opcode 0xf3

) at pconn.cc:449
#7  0x7ffe14852142 in FwdState::pconnPop (this=Unhandled dwarf 
expression opcode 0xf3

) at FwdState.cc:1153
#8  0x7ffe14855605 in FwdState::connectStart (this=0x7ffe2034c4e8) 
at FwdState.cc:850
#9  0x7ffe14856a31 in FwdState::startConnectionOrFail 
(this=0x7ffe2034c4e8) at FwdState.cc:398
#10 0x7ffe148d2fa5 in peerSelectDnsPaths (psstate=0x7ffe1fd0c028) at 
peer_select.cc:302
#11 0x7ffe148d6a1d in peerSelectDnsResults (ia=0x7ffe14f0ac20, 
details=Unhandled dwarf expression opcode 0xf3

) at peer_select.cc:383
#12 0x7ffe148a8e71 in ipcache_nbgethostbyname (name=Unhandled dwarf 
expression opcode 0xf3

) at ipcache.cc:518
#13 0x7ffe148d23c1 in peerSelectDnsPaths (psstate=0x7ffe1fd0c028) at 
peer_select.cc:259
#14 0x7ffe148d6a1d in peerSelectDnsResults (ia=0x7ffe14f0ac20, 
details=Unhandled dwarf expression opcode 0xf3

) at peer_select.cc:383
#15 0x7ffe148a8e71 in ipcache_nbgethostbyname (name=Unhandled dwarf 
expression opcode 0xf3

) at ipcache.cc:518
#16 0x7ffe148d23c1 in peerSelectDnsPaths (psstate=0x7ffe1fd0c028) at 
peer_select.cc:259
#17 0x7ffe148d382b in peerSelectFoo (ps=0x7ffe1fd0c028) at 
peer_select.cc:522
#18 0x7ffe149bba6a in ACLChecklist::checkCallback 
(this=0x7ffe2065b9e8, answer=...) at Checklist.cc:167
#19 0x7ffe148d3f5a in peerSelectFoo (ps=0x7ffe1fd0c028) at 
peer_select.cc:459
#20 0x7ffe148d5176 in peerSelect (paths=0x7ffe2034c540, 
request=0x7ffe1b660b70, al=Unhandled dwarf expression opcode 0xf3

) at peer_select.cc:163
#21 0x7ffe14852ae3 in FwdState::Start (clientConn=..., 
entry=0x7ffe1b0da790, request=0x7ffe1b660b70, al=...) at FwdState.cc:366
#22 0x7ffe14801401 in clientReplyContext::processMiss 
(this=0x7ffe1fcf5838) at client_side_reply.cc:691
#23 0x7ffe14801eb0 in clientReplyContext::doGetMoreData 
(this=0x7ffe1fcf5838) at client_side_reply.cc:1797
#24 0x7ffe14805a89 in ClientHttpRequest::httpStart 
(this=0x7ffe1dcda618) at client_side_request.cc:1518
#25 0x7ffe14808cac in ClientHttpRequest::processRequest 
(this=0x7ffe1dcda618) at client_side_request.cc:1504
#26 0x7ffe14809013 in ClientHttpRequest::doCallouts 
(this=0x7ffe1dcda618) at client_side_request.cc:1830
#27 0x7ffe1480b453 in checkNoCacheDoneWrapper (answer=..., 
data=0x7ffe1e5db378) at client_side_request.cc:1400
#28 0x7ffe149bba6a in ACLChecklist::checkCallback 
(this=0x7ffe1c88b4a8, answer=...) at Checklist.cc:167
#29 0x7ffe1480b40a in ClientRequestContext::checkNoCache 
(this=0x7ffe1e5db378) at client_side_request.cc:1385
#30 0x7ffe14809c04 in ClientHttpRequest::doCallouts 
(this=0x7ffe1dcda618) at client_side_request.cc:1748
#31 0x7ffe1480d109 in ClientRequestContext::clientAccessCheckDone 
(this=0x7ffe1e5db378, answer=Unhandled dwarf expression opcode 0xf3

) at client_side_request.cc:821
#32 0x7ffe1480d898 in ClientRequestContext::clientAccessCheck2 
(this=0x7ffe1e5db378) at client_side_request.cc:718
#33 0x7ffe14809767 in ClientHttpRequest::doCallouts 
(this=0x7ffe1dcda618) at client_side_request.cc:1721
#34 0x7ffe1480afca 

[squid-users] Squid Bugzilla is down

2015-04-30 Thread Yuri Voinov

Amos,

what's up with bugzilla? It is down and not available.

WBR, Yuri




Re: [squid-users] Squid Bugzilla is down

2015-04-30 Thread Kinkie
Hi,
  sorry, we had a severe OOM on the main squid server.
Now rebooted and hopefully better plugged. We will see about upgrading
the server as soon as possible.

On Thu, Apr 30, 2015 at 12:10 PM, Yuri Voinov yvoi...@gmail.com wrote:
 Amos,

 what's up with bugzilla? It is down and not available.

 WBR, Yuri





-- 
Francesco


Re: [squid-users] how to achieve squid to handle 2000 concurrent connections?

2015-04-30 Thread Amos Jeffries
On 30/04/2015 2:34 a.m., Abdelouahed Haitoute wrote:
 Selinux is in permissive mode.
 
 In cache.log I’m getting the following log:
 2015/04/29 16:31:18 kid3| Starting Squid Cache version 3.3.8 for 
 x86_64-redhat-linux-gnu...
 2015/04/29 16:31:18 kid3| Process ID 19831
 2015/04/29 16:31:18 kid3| Process Roles: coordinator
 2015/04/29 16:31:18 kid3| With 16384 file descriptors available
 2015/04/29 16:31:18 kid3| Initializing IP Cache...
 2015/04/29 16:31:18 kid3| DNS Socket created at [::], FD 8
 2015/04/29 16:31:18 kid3| DNS Socket created at 0.0.0.0, FD 9
 2015/04/29 16:31:18 kid3| Adding domain rinis.nl from /etc/resolv.conf
 2015/04/29 16:31:18 kid3| Adding nameserver 10.10.6.250 from /etc/resolv.conf
 2015/04/29 16:31:18 kid1| Starting Squid Cache version 3.3.8 for 
 x86_64-redhat-linux-gnu...
 2015/04/29 16:31:18 kid1| Process ID 19833
 2015/04/29 16:31:18 kid1| Process Roles: worker
 2015/04/29 16:31:18 kid1| With 16384 file descriptors available
 2015/04/29 16:31:18 kid1| Initializing IP Cache...
 2015/04/29 16:31:18 kid1| DNS Socket created at [::], FD 8
 2015/04/29 16:31:18 kid1| DNS Socket created at 0.0.0.0, FD 9
 2015/04/29 16:31:18 kid1| Adding domain rinis.nl from /etc/resolv.conf
 2015/04/29 16:31:18 kid1| Adding nameserver 10.10.6.250 from /etc/resolv.conf
 2015/04/29 16:31:18 kid3| Logfile: opening log 
 daemon:/var/log/squid/access.log
 2015/04/29 16:31:18 kid3| Logfile Daemon: opening log 
 /var/log/squid/access.log
 2015/04/29 16:31:18 kid1| Logfile: opening log 
 daemon:/var/log/squid/access.log
 2015/04/29 16:31:18 kid1| Logfile Daemon: opening log 
 /var/log/squid/access.log
 2015/04/29 16:31:18 kid2| Starting Squid Cache version 3.3.8 for 
 x86_64-redhat-linux-gnu...
 2015/04/29 16:31:18 kid2| Process ID 19832
 2015/04/29 16:31:18 kid2| Process Roles: worker
 2015/04/29 16:31:18 kid2| With 16384 file descriptors available
 2015/04/29 16:31:18 kid2| Initializing IP Cache...
 2015/04/29 16:31:18 kid2| DNS Socket created at [::], FD 8
 2015/04/29 16:31:18 kid2| DNS Socket created at 0.0.0.0, FD 9
 2015/04/29 16:31:18 kid2| Adding domain rinis.nl from /etc/resolv.conf
 2015/04/29 16:31:18 kid2| Adding nameserver 10.10.6.250 from /etc/resolv.conf
 2015/04/29 16:31:18 kid2| Logfile: opening log 
 daemon:/var/log/squid/access.log
 2015/04/29 16:31:18 kid2| Logfile Daemon: opening log 
 /var/log/squid/access.log
 2015/04/29 16:31:18 kid3| Local cache digest enabled; rebuild/rewrite every 
 3600/3600 sec
 2015/04/29 16:31:18 kid3| Store logging disabled
 2015/04/29 16:31:18 kid3| Swap maxSize 0 + 2097152 KB, estimated 161319 
 objects
 2015/04/29 16:31:18 kid3| Target number of buckets: 8065
 2015/04/29 16:31:18 kid3| Using 8192 Store buckets
 2015/04/29 16:31:18 kid3| Max Mem  size: 2097152 KB [shared]
 2015/04/29 16:31:18 kid3| Max Swap size: 0 KB
 2015/04/29 16:31:18 kid3| Using Least Load store dir selection
 2015/04/29 16:31:18 kid3| Set Current Directory to /var/spool/squid
 2015/04/29 16:31:18 kid1| Local cache digest enabled; rebuild/rewrite every 
 3600/3600 sec
 2015/04/29 16:31:18 kid1| Store logging disabled
 2015/04/29 16:31:18 kid1| WARNING: disk-cache maximum object size is 
 unlimited but mem-cache maximum object size is 32.00 KB
 2015/04/29 16:31:18 kid1| Swap maxSize 0 + 2097152 KB, estimated 161319 
 objects
 2015/04/29 16:31:18 kid1| Target number of buckets: 8065
 2015/04/29 16:31:18 kid1| Using 8192 Store buckets
 2015/04/29 16:31:18 kid1| Max Mem  size: 2097152 KB [shared]
 2015/04/29 16:31:18 kid1| Max Swap size: 0 KB
 2015/04/29 16:31:18 kid1| Using Least Load store dir selection
 2015/04/29 16:31:18 kid1| Set Current Directory to /var/spool/squid
 2015/04/29 16:31:18 kid2| Local cache digest enabled; rebuild/rewrite every 
 3600/3600 sec
 2015/04/29 16:31:18 kid2| Store logging disabled
 2015/04/29 16:31:18 kid2| WARNING: disk-cache maximum object size is 
 unlimited but mem-cache maximum object size is 32.00 KB
 2015/04/29 16:31:18 kid2| Swap maxSize 0 + 2097152 KB, estimated 161319 
 objects
 2015/04/29 16:31:18 kid2| Target number of buckets: 8065
 2015/04/29 16:31:18 kid2| Using 8192 Store buckets
 2015/04/29 16:31:18 kid2| Max Mem  size: 2097152 KB [shared]
 2015/04/29 16:31:18 kid2| Max Swap size: 0 KB
 2015/04/29 16:31:18 kid2| Using Least Load store dir selection
 2015/04/29 16:31:18 kid2| Set Current Directory to /var/spool/squid
 2015/04/29 16:31:18 kid3| Loaded Icons.
 2015/04/29 16:31:18 kid3| Configuring Parent 192.168.0.18/3128/0
 2015/04/29 16:31:18 kid3| Configuring Parent 192.168.0.20/3128/0
 2015/04/29 16:31:18 kid3| Squid plugin modules loaded: 0
 2015/04/29 16:31:18 kid3| Adaptation support is off.
 2015/04/29 16:31:18 kid3| commBind: Cannot bind socket FD 11 to [::]: (2) No 
 such file or directory

This is a bit odd, but it means the UDS sockets between workers and
coordinator are not working.

Does /var/run/squid exist?

Does /dev/shm exist? If not, that may be okay depending on OS; AFAIK for
*most* Linux it's needed though, and for some it 

[squid-users] How do I no-cache the following url?

2015-04-30 Thread Hussam Al-Tayeb
What rule would I have to add to not cache the following url?

http://images.example.com\imageview.gif?anything

Everything up to the ? is an exact match.

So I want to not cache


http://images.example.com\imageview.gif?


http://images.example.com\imageview.gif?anything


http://images.example.com\imageview.gif?anything.gif

etc...

Thank you.





Re: [squid-users] how do I no-cache the following url pattern?

2015-04-30 Thread Yuri Voinov

acl no_cache urlpath_regex imageview\.gif\?

or

acl no_cache urlpath_regex imageview\.gif(\?|$)
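
Either way, apply whichever variant you choose with a cache deny line;
the acl alone has no effect until it is referenced:

cache deny no_cache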


30.04.15 6:58, Hussam Al-Tayeb wrote:

 What rule would I have to add to not cache the following url?
 http://images.example.com\imageview.gif?anything
 Everything up to the ? is an exact match.
 So I want to not cache
 http://images.example.com\imageview.gif?
 http://images.example.com\imageview.gif?anything
 http://images.example.com\imageview.gif?anything.gif
 etc...
 Thank you.




[squid-users] Cache strategy advice

2015-04-30 Thread Yan Seiner

I am building a small embedded squid box.

It has 4GB of ram, dual core CPU, and a 32GB SSD.

Since I'm running a tiny embedded linux distro (openwrt) most of those 
resources are available; I'm only using about 1MB of RAM and about 300MB 
of the SSD.


My incoming internet service is 30 to 60 Mb/sec.

My goals:

Maximize throughput (I don't want squid to slow down the connection)
Minimize wear on the SSD

I am planning to set up two workers but beyond that I'm not really sure 
how to effectively use what I have.


Any thoughts and advice would be greatly appreciated.




Re: [squid-users] Individual delay pools and youtube

2015-04-30 Thread Dan Charlesworth
Thanks Amos. We're using the CONNECT ACL and everything is working as
expected.
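
For the archive, a minimal sketch of what that might look like in
squid.conf (assuming two class-1 pools are already sized with
delay_parameters; the pool numbers and the split are illustrative, not
Dan's actual config):

 # pool 1 limits plain HTTP, pool 2 limits CONNECT (HTTPS) traffic
 acl CONNECT method CONNECT
 delay_access 1 allow !CONNECT
 delay_access 1 deny all
 delay_access 2 allow CONNECT
 delay_access 2 deny all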

On 29 April 2015 at 20:28, Amos Jeffries squ...@treenet.co.nz wrote:

 On 29/04/2015 5:44 p.m., dan wrote:
  I mentioned last time that we had to double (x2) all our delay_parameters
  byte values because of a weird bug where squid would apply them at half
  speed for no reason.
 
  It just occurred to me that (obviously) this is why HTTPS downloads
  are going too fast; because this bug must only affect HTTP traffic.
 
  So HTTPS downloads are going at the actual speed we’ve specified and
  HTTP is going at half that.
 
  Therefore, we should be able to work around it by setting different
  delay_parameters for HTTP and HTTPS requests.
 
  So my question is, how best to target only those requests? By the
  CONNECT method?

 Yes, CONNECT ACL matching the method should work.
 Or alternatively:
  acl HTTP proto HTTP

 Amos



Re: [squid-users] NTLM AUTH: All redirector processes are busy

2015-04-30 Thread Jagannath Naidu
Is there any solution for this?
Does anyone have this working?

On 29 April 2015 at 17:04, Jagannath Naidu 
jagannath.na...@fosteringlinux.com wrote:

 Hi List/Amos,

 I am facing an issue using squid in production.

 I get these messages in cache.log, and the service stops for a period of
 time (like 14 seconds). During this period users panic as they get
 "proxy server refusing connections". Then the service automatically
 starts functioning again. This happens very frequently all day.

 2015/04/29 10:34:10| WARNING: All redirector processes are busy.
 2015/04/29 10:34:10| WARNING: 15 pending requests queued
 2015/04/29 10:34:10| storeDirWriteCleanLogs: Starting...
 2015/04/29 10:34:10| WARNING: Closing open FD 3327
 2015/04/29 10:34:10| 65536 entries written so far.
 2015/04/29 10:34:10| 131072 entries written so far.
 2015/04/29 10:34:10| 196608 entries written so far.
 2015/04/29 10:34:10| 262144 entries written so far.
 2015/04/29 10:34:10| 327680 entries written so far.
 2015/04/29 10:34:10| 393216 entries written so far.
 2015/04/29 10:34:10| 458752 entries written so far.
 2015/04/29 10:34:10| 524288 entries written so far.
 2015/04/29 10:34:10| 589824 entries written so far.
 2015/04/29 10:34:10| 655360 entries written so far.
 2015/04/29 10:34:10|   Finished.  Wrote 716101 entries.
 2015/04/29 10:34:10|   Took 0.22 seconds (3266168.90 entries/sec).
 FATAL: Too many queued redirector requests
 Squid Cache (Version 3.1.10): Terminated abnormally.
 CPU Usage: 4206.393 seconds = 3778.049 user + 428.344 sys
 Maximum Resident Size: 2599760 KB
 Page faults with physical i/o: 0
 Memory usage for squid via mallinfo():
 total space in arena:  750272 KB
 Ordinary blocks:   717419 KB   6620 blks
 Small blocks:   0 KB  1 blks
 Holding blocks: 23020 KB 11 blks
 Free Small blocks:  0 KB
 Free Ordinary blocks:   32852 KB
 Total in use:  740439 KB 99%
 Total free: 32852 KB 4%
 fgets() failed! dying. errno=1 (Operation not permitted)
 2015/04/29 10:34:19| Starting Squid Cache version 3.1.10 for
 x86_64-redhat-linux-gnu...
 2015/04/29 10:34:19| Process ID 4326
 2015/04/29 10:34:19| With 10 file descriptors available
 2015/04/29 10:34:19| Initializing IP Cache...
 2015/04/29 10:34:19| DNS Socket created at [::], FD 8
 2015/04/29 10:34:19| DNS Socket created at 0.0.0.0, FD 9
 2015/04/29 10:34:19| Adding nameserver 172.16.3.34 from squid.conf
 2015/04/29 10:34:19| Adding nameserver 10.1.2.91 from squid.conf
 2015/04/29 10:34:19| helperOpenServers: Starting 5/5 'squidGuard' processes
 2015/04/29 10:34:19| helperOpenServers: Starting 1500/1500 'ntlm_auth'
 processes
 2015/04/29 10:34:24| helperOpenServers: Starting 150/150 'wbinfo_group.pl'
 processes


 ntlm helpers count is 1500 and external wbinfo_group.pl helpers are 150.

 squid.conf
 ###

 max_filedesc 10
 acl manager proto cache_object
 acl localhost src 172.16.50.61
 http_access allow manager localhost
 dns_nameservers 172.16.3.34 10.1.2.91
 acl allowips src 172.16.58.187 172.16.16.192 172.16.58.113 172.16.58.63
 172.16.58.98 172.16.60.244 172.16.58.165 172.16.58.157
 http_access allow allowips
 #acl haproxy src 172.16.50.61
 #follow_x_forwarded_for allow haproxy
 #follow_x_forwarded_for deny all
 #acl manager proto cache_object
 acl localnet src 172.16.0.0/16
 acl manager proto cache_object
 acl localhost src 127.0.0.1
 acl localnet src fc00::/7 # RFC 4193 local private network range
 acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged)
 machines
 acl office dstdomain /etc/squid/officesites
 http_access allow office
 log_ip_on_direct off
 #debug_options ALL,3
 #logformat squid %9d.%03d %6d %s %s/%03d %d %s %s %s %s%s/%s %s
 logformat squid %ts.%03tu %tl %3tr %3dt %3un %a %Ss/%Hs %st %rm %ru
 %Sh/%A %mt
 access_log /var/log/squid/access1.log squid
 auth_param basic realm Squid proxy-caching web server
 auth_param basic credentialsttl 2 hours
 external_acl_type nt_group ttl=0 children=60 %LOGIN /usr/lib64/squid/wbinfo_group.pl
 #auth_param ntlm program /etc/squid/helper-mux.pl /usr/bin/ntlm_auth
 --diagnostics --helper-protocol=squid-2.5-ntlmssp --domain=HTMEDIA.NET
 auth_param ntlm program /usr/bin/ntlm_auth --diagnostics
 --helper-protocol=squid-2.5-ntlmssp --domain=HTMEDIA.NET
 auth_param ntlm children 1500
 #auth_param ntlm children 500
 auth_param ntlm keep_alive off
 auth_param ntlm program /usr/bin/ntlm_auth
 --helper-protocol=squid-2.5-ntlmssp --domain=HTMEDIA.NET
 external_acl_type wbinfo_group_helper ttl=600 children=150 %LOGIN
 /usr/lib64/squid/wbinfo_group.pl -d
 acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
 acl Safe_ports port 8080 #https
 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70 # gopher

Re: [squid-users] Squid Bugzilla is down

2015-04-30 Thread Kinkie
Should be fine now.
Thanks for notifying of the issue.

On Thu, Apr 30, 2015 at 7:42 PM, Yuri Voinov yvoi...@gmail.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 Now server produces 500 error.

 30.04.15 23:39, Kinkie wrote:
 Hi,
   sorry, we had a severe OOM on the main squid server.
 Now rebooted and hopefully better plugged. We will see about upgrading
 the server as soon as possible.

 On Thu, Apr 30, 2015 at 12:10 PM, Yuri Voinov yvoi...@gmail.com wrote:
 Amos,

 what's up with bugzilla? It is down and not available.

 WBR, Yuri










-- 
Francesco


Re: [squid-users] Squid Bugzilla is down

2015-04-30 Thread Yuri Voinov

Yes, it's ok now.

Thank you!

01.05.15 2:30, Kinkie wrote:
 Should be fine now.
 Thanks for notifying of the issue.

 On Thu, Apr 30, 2015 at 7:42 PM, Yuri Voinov yvoi...@gmail.com wrote:

 Now server produces 500 error.

 30.04.15 23:39, Kinkie wrote:
  Hi,
sorry, we had a severe OOM on the main squid server.
  Now rebooted and hopefully better plugged. We will see about upgrading
  the server as soon as possible.
 
  On Thu, Apr 30, 2015 at 12:10 PM, Yuri Voinov yvoi...@gmail.com
wrote:
  Amos,
 
  what's up with bugzilla? It is down and not available.
 
  WBR, Yuri
 
 
 
 
 









Re: [squid-users] Cache strategy advice

2015-04-30 Thread Amos Jeffries
On 1/05/2015 7:13 a.m., Yan Seiner wrote:
 I am building a small embedded squid box.
 
 It has 4GB of ram, dual core CPU, and a 32GB SSD.
 
 Since I'm running a tiny embedded linux distro (openwrt) most of those
 resources are available; I'm only using about 1MB of RAM and about 300MB
 of the SSD.
 
 My incoming internet service is 30 to 60 Mb/sec.
 
 My goals:
 
 Maximize throughput (I don't want squid to slow down the connection)
 Minimize wear on the SSD
 
 I am planning to set up two workers but beyond that I'm not really sure
 how to effectively use what I have.

I advise using only one worker (non-SMP) on dual-core systems. Squid
workers will happily consume the entirety of any CPU cores you let them
use at peak times, so leaving one core for the OS and helpers etc. to
use is a good idea.

A single worker of Squid with memory-only caching can cope with upwards
of 50Mbps, so your traffic expectations should not be a worry there
unless the cores are very slow (MHz range).


If you want to minimize wear on the SSD, avoid cache_dir storage types
entirely; they are guaranteed to wear it out faster than normal. Current
Squid will run fine with memory-only caching. Adjust the cache_mem
directive as wanted (the default is a 256MB memory cache).
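
A minimal sketch of the above as squid.conf (the sizes are illustrative
for a 4GB box, not tested values):

 # single worker; leave the second core to the OS and helpers
 workers 1
 # no cache_dir directive at all = memory-only caching, no SSD writes
 cache_mem 1024 MB
 maximum_object_size_in_memory 512 KB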

Also, avoid having the device swap memory at all costs, even with the
SSD. If it reaches the point of swapping, Squid performance will drop
radically and the SSD wear will increase to match.


Any other features are optional or depend on exactly what you want to be
doing policy-wise with the proxy.

Amos



Re: [squid-users] A lot of open rewriter heplers and are hanging! Squid 3.5

2015-04-30 Thread Amos Jeffries
On 1/05/2015 4:13 p.m., Eliezer Croitoru wrote:
 On 29/04/2015 18:34, Yuri Voinov wrote:
 Are you really sure 20 children are enough for 1200 clients? Also, is
 bypass on?
 
 I will add: what language is the helper written in?

It's a patched jesred, so C.

The problem described (jesred hanging) is clearly a problem in jesred
itself, not Squid.

My guess is that it has been patched to cope with the action code and
kv-pair syntax, but not made concurrency-enabled, which is mandatory on
the Store-ID interface.
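
(For reference, with concurrency enabled Squid prefixes each request
line with a channel-ID which the helper must echo back unchanged. An
illustrative exchange, with a hypothetical URL and store-id:

  from Squid:  0 http://cdn1.example.com/videos/abc123
  from helper: 0 OK store-id=http://videos.squid.internal/abc123

and on the squid.conf side the channels are enabled with something
like "store_id_children 20 concurrency=10".)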

Amos



Re: [squid-users] squid-ldap-group not ERR

2015-04-30 Thread Amos Jeffries
On 30/04/2015 1:26 a.m., Alex Delgado wrote:
 Hello,
  
 I'm trying to configure squid to validate Windows users by group with 
 squid-ldap-group.
  
 Server is CENTOS 6.5 . I've installed samba, krb and squid from source.
  
 Also, I've configured samba and krb, so centos server is a Windows member.
  
 When I type :
  
 /usr/lib64/squid/squid_ldap_group -R -b dc=domain,dc=local -f 
 "(&(sAMAccountName=%v)(memberOf=cn=%a,dc=domain,dc=local))" -D 
 cn=user,cn=Users,dc=edvhold,dc=local -W /dir/dir/ldpass.txt -h pdcserver
 user group
  
 I got:
  
 ERR
  
 Does anybody know what the error is?

ERR is the helper protocol code for "denied". The user account "user" is
not a member of the group "group" which is being checked.
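
(i.e. the helper reads one "user group" pair per line on stdin and
answers one code per lookup; an illustrative session, with hypothetical
group names:

  user somegroup
  OK
  user othergroup
  ERR

so an OK here would mean the membership test matched.)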

Amos



Re: [squid-users] How do I no-cache the following url?

2015-04-30 Thread Amos Jeffries
On 30/04/2015 12:47 p.m., Hussam Al-Tayeb wrote:
 What rule would I have to add to not cache the following url?
 http://images.example.com\imageview.gif?anything

That is not a URL. '\' is not a valid domain name character.


 Everything up to the ? is an exact match.
 So I want to not cache
 http://images.example.com\imageview.gif?
 http://images.example.com\imageview.gif?anything
 http://images.example.com\imageview.gif?anything.gif
 etc...
 Thank you.
 

The below answer assumes that you really meant the URL
http://images.example.com/imageview.gif?anything


 acl foo url_regex ^http://images\.example\.com/imageview\.gif
 cache deny foo


Amos
