[squid-users] Re: Duration Access Limits

2014-04-04 Thread babajaga
I could think of a custom external auth helper that checks the client IP,
maintains its own DB of connect times, and allows or denies access through Squid.
However, this helper would have to be provided (written) by you.
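
Just to sketch the idea (untested, and it assumes the helper is hooked in via
external_acl_type with %SRC as the only format token; the path and the time
limit are placeholders):

#!/bin/sh
# Untested sketch: Squid hands us one client IP (%SRC) per line; we remember
# the first time we saw each IP in a flat file and answer ERR once the
# allowance (3600 seconds here) is used up.
DB=/var/lib/squid/first_seen.txt
LIMIT=3600
touch "$DB"
while read -r ip; do
    now=$(date +%s)
    first=$(awk -v ip="$ip" '$1 == ip { print $2; exit }' "$DB")
    if [ -z "$first" ]; then
        first=$now
        echo "$ip $first" >> "$DB"
    fi
    if [ $((now - first)) -lt "$LIMIT" ]; then
        echo OK
    else
        echo ERR
    fi
done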





Re: [squid-users] Re: Duration Access Limits

2014-04-04 Thread Edmonds Namasenda
Okay!
Unless I am getting it wrong, are you telling me to find (or propose) a
solution to my problem?

Otherwise, what is required of me for the helper? Must it be external? Such
helpers tend to be slow.
I want to offer free WiFi internet to people at an eatery, but it should
disable their connection after some time. Otherwise, people might pitch camp
(or move their office in) once they realize the internet is free.

Is there another open-source solution I can implement besides tinkering with
the existing Squid installation?

# Edmonds

On Fri, Apr 4, 2014 at 11:58 AM, babajaga augustus_me...@yahoo.de wrote:
 I could think of a custom external auth helper that checks the client IP,
 maintains its own DB of connect times, and allows or denies access through Squid.
 However, this helper would have to be provided (written) by you.





[squid-users] Problem with Digest Authentication and different browsers

2014-04-04 Thread Christian Zink
Hi,

after some strange authentication issues I came across the problem of different
implementations of Digest Authentication in IE on the one hand and Chrome/Firefox
on the other.
The problem occurs when a user sets a password containing a German umlaut (äöü)
or special characters like €.
IE seems to build the digest hash from ISO-8859-1 encoded characters, whereas
Chrome uses UTF-8. This leads to different hashes, and the user is denied access
depending on which browser he uses and how the hash stored in LDAP was built.

For example :

Chrome works:
echo -n 'USER:REALM:üBel01??' | md5sum
fbf61c978941ab35281dd99b95543943     

IE works:
echo -n 'USER:REALM:üBel01??' | iconv -t iso-8859-1 -f utf-8 | md5sum
44fce233d7bda083d54015c879c47f16 


It even works with both the IE and the Chrome hash if I convert the PW to UTF-8
(http://www.percederberg.net/tools/text_converter.html) and copy the UTF-8 string
into the IE password field! But that's nothing I can suggest to users who can't
even start the browser if their shortcut isn't in the right place :D

The easy option is to forbid these characters, but some of our customers
use their Windows password. The complex method would be to store both hashes and
have a helper check the browser user-agent and deliver the suitable hash...
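
For what it's worth, both variants can be generated from the same cleartext in
one go, so storing both hashes would at least be scriptable. The user, realm and
password below are only placeholders, and the script has to be saved as UTF-8:

#!/bin/sh
# Placeholder credentials, just to show that both HA1 variants can be derived
# from the same cleartext.
USER=USER; REALM=REALM; PW='üBel01'
HA1_UTF8=$(printf '%s:%s:%s' "$USER" "$REALM" "$PW" | md5sum | cut -d' ' -f1)
HA1_LATIN1=$(printf '%s:%s:%s' "$USER" "$REALM" "$PW" | iconv -f utf-8 -t iso-8859-1 | md5sum | cut -d' ' -f1)
echo "utf-8 (Chrome/Firefox): $HA1_UTF8"
echo "iso-8859-1 (IE):        $HA1_LATIN1"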


I know that's not really a Squid problem, but maybe someone has come across this
before, or someone is in the same situation and my information is helpful.
Is there maybe a hidden workaround to tell or force the browser to use a certain
kind of encoding? Maybe in Squid, or in the browser settings?

Greets,

Christian

[squid-users] Re: Duration Access Limits

2014-04-04 Thread babajaga
 Unless I am getting it wrong, are you telling me to find (or propose) a
 solution to my problem?
I proposed a possible solution using Squid; however, it must be implemented
(programmed) by yourself, as it is not available AFAIK.

 Must it be external? Such helpers tend to be slow.
Not necessarily, as the result of the auth helper can be cached within Squid,
so the number of calls to the helper is reduced.
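Roughly like this in squid.conf (helper path and the times are only examples,
and 'localnet' is assumed to be defined as in the default config); the ttl
controls how long a positive answer is cached, so the helper is not queried on
every single request:

# sketch only: helper path is a placeholder
external_acl_type timequota ttl=60 negative_ttl=60 %SRC /usr/local/bin/time_quota_helper
acl within_quota external timequota
http_access allow localnet within_quota
http_access deny localnet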

 Is there another open source solution I can implement besides tinkering with
 the existing Squid installation?
You might have a look at mikrotik.com. Their hotspot system within RouterOS
should be able to do what you want, and their hardware is really cheap and not
bad at all. As there is also a huge forum full of scripts, you should find
something suitable on the spot.
BTW: You can use Squid as an upstream caching proxy for your MT box if you
want. Simple to implement.





Fwd: [squid-users] Fwd: Segfault in CommSelectEngine::checkEvents

2014-04-04 Thread Cassiano Martin
Squid is running in both modes: it accepts directly configured proxy clients
and also works in interception mode.

Here's my configuration file (squid.conf). It's auto-generated by a system
daemon which I've written.

acl localnet src 10.0.0.0/8
acl localnet src 172.16.0.0/12
acl localnet src 192.168.0.0/16
acl localnet src fc00::/7
acl localnet src fe80::/10
acl SSL_ports port 443
acl CONNECT method CONNECT
dns_nameservers 127.0.0.1
hierarchy_stoplist cgi-bin ?
http_access allow manager localhost
http_access deny manager
http_access deny CONNECT !SSL_ports

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
acl localdst dst 127.0.0.1/32
acl localdst dst 192.168.100.0/24
acl localdst dst 192.168.150.0/24
acl localdst dst 192.168.50.0/24
acl localdst dst 192.168.175.0/24
acl localdst dst 172.16.10.0/24
acl source_Vendas_443 src 192.168.50.3
acl dport_Vendas_443 port 443
tcp_outgoing_address 189.27.236.136 source_Vendas_443 dport_Vendas_443 !localdst
acl source_Devel443 src 192.168.150.177
acl source_Devel443 src 192.168.150.13
acl source_Devel443 src 192.168.150.8
acl source_Devel443 src 192.168.150.95
acl source_Devel443 src 192.168.150.196
acl dport_Devel443 port 443
tcp_outgoing_address 189.27.236.136 source_Devel443 dport_Devel443 !localdst
acl source_Wireless443 src 192.168.175.242
acl source_Wireless443 src 192.168.175.21
acl dport_Wireless443 port 443
tcp_outgoing_address 187.113.225.9 source_Wireless443 dport_Wireless443 !localdst
acl source_WiFi443 src 192.168.175.0/24
acl dport_WiFi443 port 443
tcp_outgoing_address 189.27.236.136 source_WiFi443 dport_WiFi443 !localdst
acl source_voip src 192.168.50.33
tcp_outgoing_address 187.113.225.9 source_voip !localdst
acl source_L4D2 src 192.168.100.78
tcp_outgoing_address 187.113.225.9 source_L4D2 !localdst
external_acl_type securegateway_cfs ipv4 %DST %PROTO %PORT /usr/bin/squid_filter
acl grp_IDB src 192.168.100.0/24 192.168.150.0/24 192.168.175.0/24 172.16.10.0/24
acl cat_Teste external securegateway_cfs 09,0B,0E,10,12,19,56,58,5C
http_access deny cat_Teste grp_IDB
acl out_balance random 1/2
tcp_outgoing_address 187.113.225.9 out_balance !localdst
tcp_outgoing_address 189.27.236.136 out_balance !localdst
http_access allow localnet
http_access allow localhost
http_access deny all
pid_filename /var/run/squid.pid
half_closed_clients off
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_effective_user squid
cache_effective_group squid
cache_mem 8 MB
memory_pools off
workers 1
visible_hostname firewall.securegateway.localnet
coredump_dir none
access_log daemon:
logfile_daemon /usr/bin/squid_logger
cache_log /var/squid/logs/cache.log
http_port 3128
http_port 3129 intercept
qos_flows local-hit=0x1c

NB: when testing, I also disabled the ACLs, external ACLs and the logging
daemon, but with no success at all. Squid still crashes in the same place.

2014-04-03 19:12 GMT-03:00 Eliezer Croitoru elie...@ngtech.co.il:
 On 03/20/2014 05:42 PM, Cassiano Martin wrote:

 Squid is in transparent mode.

 tproxy or redirect targets on iptables?

 Eliezer


Re: [squid-users] squid cpu problem

2014-04-04 Thread a . afach

Dear all,
I still have the CPU spikes, even when I built with
--disable-strict-error-checking and without setting CFLAGS.


This is the gdb backtrace taken while the CPU spikes:

0x0051b348 in linklistPush (L=0x11853e188, p=0xce6d4300) at 
list.cc:47

47  while (*L)
(gdb) backtrace
#0  0x0051b348 in linklistPush (L=0x11853e188, p=0xce6d4300) at 
list.cc:47

#1  0x005a70a1 in UFSStoreState::write (this=0xb3970e28,
buf=0x11fe69ca0 
!v\253r[/\307\232G\b\375`\237:\213\256^\335\373{\241%\232\363\021\071`\342\033\177a\202G\320{\323%\236K\342\243*\332\316\351\231=\360\370\313Ro=\317\262\243\315\027\351,\221\230\353Z\023\024q\QSC\036\214:M\242{@\351m\020\337Cw_\214\216\304\226\265\a\375\031\211\243V\222T\320\016\227\312-\211Sz\326^\346\230\251\327\222\n\373I\032\341\303==U\214\277\264\244\205\b1\346S=\230\215\204\245\254\312\223\066\336\230PpP\227\271\370\266;\362\226\242\036\225\235w\330\325\061\316{o_\364\021\062\351\376\062|\313\006`\357m\206FQ0\021\030C\224\004]\336\315\371\033h1\361\363\350d\366\066..., 
size=4096, aOffset=-1, free_func=0x5203b0 
memNodeWriteComplete(void*))

at ufs/store_io_ufs.cc:247
#2  0x00554ca0 in doPages (anEntry=optimized out) at 
store_swapout.cc:160

#3  StoreEntry::swapOut (this=0x372ca10) at store_swapout.cc:279
#4  0x0054c986 in StoreEntry::invokeHandlers (this=0x372ca10) 
at store_client.cc:714
#5  0x004dc1a7 in FwdState::complete (this=0xbb502b48) at 
forward.cc:341
#6  0x005579a5 in ServerStateData::completeForwarding 
(this=0xf8030588) at Server.cc:239
#7  0x005571bd in ServerStateData::serverComplete2 
(this=0xf8030588) at Server.cc:207
#8  0x004ff3dc in HttpStateData::processReplyBody 
(this=0xf8030588) at http.cc:1382
#9  0x004fd367 in HttpStateData::readReply (this=0xf8030588, 
io=...) at http.cc:1161
#10 0x00503156 in JobDialerHttpStateData::dial 
(this=0xde75ca50, call=...) at base/AsyncJobCalls.h:175
#11 0x00569ee4 in AsyncCall::make (this=0xde75ca20) at 
AsyncCall.cc:34
#12 0x0056cb76 in AsyncCallQueue::fireNext (this=optimized 
out) at AsyncCallQueue.cc:53
#13 0x0056ccf0 in AsyncCallQueue::fire (this=0x2586400) at 
AsyncCallQueue.cc:39
#14 0x004d385c in EventLoop::runOnce (this=0x7fffcb3518d0) at 
EventLoop.cc:130
#15 0x004d3938 in EventLoop::run (this=0x7fffcb3518d0) at 
EventLoop.cc:94
#16 0x0051d35b in SquidMain (argc=optimized out, 
argv=optimized out) at main.cc:1418
#17 0x0051dd83 in SquidMainSafe (argv=optimized out, 
argc=optimized out) at main.cc:1176

#18 main (argc=optimized out, argv=optimized out) at main.cc:1168


Any idea about what's causing the CPU spike?


On 2014-03-31 16:34, Amos Jeffries wrote:

On 2014-04-01 02:10, a.afach wrote:

Dear Eliezer
these are the configure options ...
configure options:  '--prefix=/usr/local/squid-3.1.19'
'--sysconfdir=/etc' '--sysconfdir=/etc/squid' '--localstatedir=/var'
'--enable-auth=basic,digest,ntlm' 
'--enable-removal-policies=lru,heap'

'--enable-digest-auth-helpers=password'
'--enable-basic-auth-helpers=PAM,getpwnam,NCSA,MSNT'
'--enable-external-acl-helpers=ip_user,session,unix_group'
'--enable-ntlm-auth-helpers=fakeauth'
'--enable-ident-lookups' '--enable-useragent-log'
'--enable-cache-digests' '--enable-delay-pools' 
'--enable-referer-log'

'--enable-arp-acl' '--with-pthreads' '--with-large-files'
'--enable-htcp' '--enable-carp' '--enable-follow-x-forwarded-for'
'--enable-snmp' '--enable-ssl' '--enable-storeio=ufs,diskd,aufs'
'--enable-async-io' '--enable-linux-netfilter' '--enable-epoll'
'--with-squid=/usr/squid-3.1.19' '--disable-ipv6' '--with-aio'
'--with-aio-threads=128' 'build_alias=x86_64-pc-linux-gnu'
'host_alias=x86_64-pc-linux-gnu' 'CC=x86_64-pc-linux-gnu-gcc'
'CFLAGS=-O2 -pipe -m64 -mtune=generic' 'LDFLAGS=-Wl,-O1
-Wl,--as-needed' 'CXXFLAGS=' '--cache-file=/dev/null' '--srcdir=.'



Some more reasons to upgrade:
 * --disable-strict-error-checking avoids issues on Gentoo with -Werror.
 * CFLAGS affects the C compiler, not the C++ compiler. The C compiler is only
used by Squid-3 to build some libraries.
 * The current verified stable Gentoo Squid version is 3.3.8.
 * Updating anything on Gentoo involves rebuilding a surprising number of
components from scratch, so when you get a difference like this it really could
be anywhere - including buried in the compiler itself: your flags possibly
change the optimization levels and the CPU-specific assembly instructions it
emits.

Amos


[squid-users] how to dynamically reconfigure squid?

2014-04-04 Thread Waldemar Brodkorb
Hi Squid community,

we provide a Linux router for German schools with a sandwich setup using
Squid 3 and DansGuardian. The ACL configuration is kept in a Windows ADS
server and can be dynamically reconfigured with a management application.
When a teacher, for example, allows access to the internet while blacklisting
some sites, the management application connects to the Linux router via
secure shell and executes /etc/init.d/squid3 reload to make the changes take
effect.

This worked fine for a long time with Windows XP clients and Internet
Explorer 7/8 using NTLM authentication.

But nowadays Mozilla Firefox, Safari, Internet Explorer 9/10 and Chrome are
getting more common. The first problem is that the static configuration of 5
NTLM authentication helpers is a bit too small. Most of these browsers try to
open 7-10 connections to the proxy in parallel while surfing just one website,
which kills Squid with the 'too many authentications' error.

To fix this problem I updated the Linux router software (a Debian/Knoppix
derivative) to use Squid 3.4.x, which dynamically starts more NTLM auth
helpers when needed. This worked fine in our tests.
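
For reference, the 3.4-style on-demand helper configuration looks roughly like
this (the helper path and the numbers are only an example, assuming Samba's
ntlm_auth as the helper):

# example only: path and numbers are placeholders (Squid 3.2+ syntax)
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 30 startup=5 idle=2
auth_param ntlm keep_alive on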

Now comes the second problem: when the teacher reconfigures the proxy to close
the allowed connections for one class, all open connections stay alive. I think
the reason is that we use the default persistent connections for server and
client.

When we disable them, access to the internet is closed immediately, but the
overall performance of the proxy seems to suffer.

And it is no solution for connections using SPDY.

What do you think? What might be a solution to this problem?
I can't restart Squid when changing the ACL rules, because then all users on
the network would be disconnected.

I am out of ideas; any help is really appreciated.

best regards
Waldemar Brodkorb


RE: [squid-users] how to dynamically reconfigure squid?

2014-04-04 Thread Rafael Akchurin
Hi Waldemar,

How about offloading the filtering to an external ICAP server that can be
dynamically (re)configured to allow/block based on user authentication/IPs?
In that case the teacher adjusts the ICAP server's config, leaving Squid's
configuration intact. New requests through the same connections are blocked
after the switch.
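
On the Squid side that only needs a static ICAP hookup, roughly like this
(the service name and the ICAP address are placeholders):

# sketch only: service name and ICAP address are placeholders
icap_enable on
icap_service school_filter reqmod_precache bypass=off icap://127.0.0.1:1344/reqmod
adaptation_access school_filter allow all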

Raf

-Original Message-
From: Waldemar Brodkorb [mailto:m...@waldemar-brodkorb.de] 
Sent: Friday, April 04, 2014 9:45 PM
To: squid-users@squid-cache.org
Subject: [squid-users] how to dynamically reconfigure squid?

Hi Squid community,

we provide a Linux router for German schools with a sandwich setup using Squid 3
and DansGuardian. The ACL configuration is kept in a Windows ADS server and can
be dynamically reconfigured with a management application. When a teacher, for
example, allows access to the internet while blacklisting some sites, the
management application connects to the Linux router via secure shell and
executes /etc/init.d/squid3 reload to make the changes take effect.

This worked fine for a long time with Windows XP clients and Internet Explorer
7/8 using NTLM authentication.

But nowadays Mozilla Firefox, Safari, Internet Explorer 9/10 and Chrome are
getting more common. The first problem is that the static configuration of 5
NTLM authentication helpers is a bit too small. Most of these browsers try to
open 7-10 connections to the proxy in parallel while surfing just one website,
which kills Squid with the 'too many authentications' error.

To fix this problem I updated the Linux router software (a Debian/Knoppix
derivative) to use Squid 3.4.x, which dynamically starts more NTLM auth helpers
when needed. This worked fine in our tests.

Now comes the second problem: when the teacher reconfigures the proxy to close
the allowed connections for one class, all open connections stay alive. I think
the reason is that we use the default persistent connections for server and
client.

When we disable them, access to the internet is closed immediately, but the
overall performance of the proxy seems to suffer.

And it is no solution for connections using SPDY.

What do you think? What might be a solution to this problem?
I can't restart Squid when changing the ACL rules, because then all users on
the network would be disconnected.

I am out of ideas; any help is really appreciated.

best regards
Waldemar Brodkorb


Re: [squid-users] how to dynamically reconfigure squid?

2014-04-04 Thread Amos Jeffries
On 5/04/2014 10:55 a.m., Rafael Akchurin wrote:
 Hi Waldemar,
 
 How about offloading the filtering to an external ICAP server that can be
 dynamically (re)configured to allow/block based on user authentication/IPs?
 In that case the teacher adjusts the ICAP server's config, leaving Squid's
 configuration intact. New requests through the same connections are blocked
 after the switch.
 

The same thing applies to Squid with a reconfigure. All *new* requests
are blocked but existing ones are completed.

 Raf
 
 -Original Message-
 From: Waldemar Brodkorb
 
 Hi Squid community,
 
 we provide a Linux router for German schools with a sandwich setup using
 Squid 3 and DansGuardian. The ACL configuration is kept in a Windows ADS
 server and can be dynamically reconfigured with a management application.
 When a teacher, for example, allows access to the internet while blacklisting
 some sites, the management application connects to the Linux router via
 secure shell and executes /etc/init.d/squid3 reload to make the changes take
 effect.
 
 This worked fine for a long time with Windows XP clients and Internet
 Explorer 7/8 using NTLM authentication.
 
 But nowadays Mozilla Firefox, Safari, Internet Explorer 9/10 and Chrome are
 getting more common. The first problem is that the static configuration of 5
 NTLM authentication helpers is a bit too small. Most of these browsers try to
 open 7-10 connections to the proxy in parallel while surfing just one
 website, which kills Squid with the 'too many authentications' error.
 
 To fix this problem I updated the Linux router software (a Debian/Knoppix
 derivative) to use Squid 3.4.x, which dynamically starts more NTLM auth
 helpers when needed. This worked fine in our tests.
 
 Now comes the second problem: when the teacher reconfigures the proxy to
 close the allowed connections for one class, all open connections stay alive.
 I think the reason is that we use the default persistent connections for
 server and client.
 
 When we disable them, access to the internet is closed immediately, but the
 overall performance of the proxy seems to suffer.
 
 And it is no solution for connections using SPDY.


HTTPS and SPDY are becoming more of a problem since popular websites are
moving to use them, and CONNECT tunnels wrap the entire session into a single
HTTP request.

 What do you think? What might be a solution to this problem? I can't
 restart Squid when changing the ACL rules, because then all users on the
 network would be disconnected.

You could set the request_timeout to be short. This would make the
CONNECT requests terminate after a few minutes.
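E.g. something like:

# value is only an example
request_timeout 2 minutes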

You could also use the SSL-bump feature in Squid. This has the double benefit
of letting the control software act on the HTTPS requests and preventing SPDY
etc. from being used by the browser.
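
Roughly (the certificate path is a placeholder and the exact options depend on
how your Squid was built):

# sketch only: needs Squid built with SSL support; the signing CA certificate
# must be created separately and imported on the clients
http_port 3128 ssl-bump generate-host-certificates=on cert=/etc/squid/ssl/bump-ca.pem
ssl_bump server-first all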

Amos