[squid-users] cache-peer and tls

2019-08-03 Thread Eugene M. Zheganin

Hello,


I'm using squid 4.6 and I need to TLS-encrypt sessions to the parent 
proxy. I have this in the config:



cache_peer proxy.foo.bar parent 3129 3130 tls 
tls-cafile=/usr/local/etc/squid/certs/le.pem 
sslcert=/usr/local/etc/letsencrypt/live/vpn.enazadev.ru/cert.pem 
sslkey=/usr/local/etc/letsencrypt/live/vpn.enazadev.ru/privkey.pem 
sslflags=DONT_VERIFY_DOMAIN,DONT_VERIFY_PEER



But no matter what I do, squid keeps logging that it doesn't like the 
peer certificate:



2019/08/03 18:42:24 kid1| ERROR: negotiating TLS on FD 23: 
error:14090086:SSL routines:ssl3_get_server_certificate:certificate 
verify failed (1/-1/0)
2019/08/03 18:42:24 kid1| temporary disabling (Service Unavailable) 
digest from proxy.foo.bar


and then it goes direct, bypassing the peer. :/


Is there any way to tell it that I don't care?

I've also tried to actually point it at the CA cert with 
tls-cafile=/usr/local/etc/squid/certs/le.pem above; that doesn't work 
either.
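In case it helps frame the question: a sketch of the same cache_peer line using only one option naming scheme, assuming I have the Squid-4 renames right (tls-cert=, tls-key=, tls-flags= replacing the old sslcert=, sslkey=, sslflags=). This is untested; mixing the deprecated ssl* spellings with tls-* options, as in the config above, might be why DONT_VERIFY_PEER appears to be ignored.

===Cut===
# Sketch only - assumes the Squid-4 tls-* option spellings
cache_peer proxy.foo.bar parent 3129 3130 tls tls-cafile=/usr/local/etc/squid/certs/le.pem tls-cert=/usr/local/etc/letsencrypt/live/vpn.enazadev.ru/cert.pem tls-key=/usr/local/etc/letsencrypt/live/vpn.enazadev.ru/privkey.pem tls-flags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN
===Cut===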



Thanks.

Eugene.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] iOS 10.x, https and squid

2016-11-01 Thread Eugene M. Zheganin

Hi.

Does anyone have issues with iOS 10.x devices connecting through a proxy 
(3.5.x) to https-enabled sites? Because I do. Non-https sites work 
just fine, but https ones just get stuck loading. At first I thought 
this was a problem with sslBump and disabled it, but that didn't help. 
The access log shows:


1478024222.324 48 192.168.243.10 TCP_DENIED/407 4388 CONNECT 
www.cisco.com:443 - HIER_NONE/- text/html
1478024222.373  0 192.168.243.10 TCP_DENIED/407 4649 CONNECT 
www.cisco.com:443 - HIER_NONE/- text/html
1478024222.468 53 192.168.243.10 TCP_TUNNEL/200 0 CONNECT 
www.cisco.com:443 emz HIER_DIRECT/2a02:26f0:18:185::90 -


and when requesting http version:

1478024355.685 69 192.168.243.10 TCP_MISS/200 14297 GET 
http://www.cisco.com/ emz HIER_DIRECT/2a02:26f0:18:19e::90 text/html
1478024355.885 47 192.168.243.10 TCP_MISS/304 335 GET 
http://www.cisco.com/etc/designs/cdc/clientlibs/responsive/css/cisco-sans.min.css 
emz HIER_DIRECT/2a02:26f0:18:19e::90 text/css
1478024355.910 45 192.168.243.10 TCP_REFRESH_UNMODIFIED/304 341 GET 
http://players.brightcove.net/1384193102001/NJgI8K0ie_default/index.min.js 
emz HIER_DIRECT/2.22.40.126 application/javascript
1478024355.942  0 192.168.243.10 TCP_DENIED/407 6611 GET 
http://www.cisco.com/etc/designs/catalog/ps/clientlib-all/custom-fonts/cisco-sans.min.css 
- HIER_NONE/- text/html
1478024355.969 60 192.168.243.10 TCP_MISS/304 335 GET 
http://www.cisco.com/etc/designs/catalog/ps/clientlib-all/css/cisco-sans.min.css 
emz HIER_DIRECT/2a02:26f0:18:19e::90 text/css


[...lots of other access stuff...]

Some may think "dude, you just misconfigured your squid". But the thing 
is, other browsers just work (and I don't have a MacBook to test whether 
laptops do), while the couple of iPhones I have don't. Funny thing: 
with authentication disabled (when my iPhone's IP is allowed) the browser 
on iOS loads https sites just fine.


Thanks.

Eugene.



Re: [squid-users] connections from particular users sometimes get stuck

2016-09-28 Thread Eugene M. Zheganin
Hi.

On 28.09.2016 01:36, Alex Rousskov wrote:
> On 09/27/2016 02:02 PM, Eugene M. Zheganin wrote:
>
>> I guess squid
>> didn't get a way to increase debug level on the fly ? 
> "squid -k debug" (or sending an equivalent signal) does that:
> http://wiki.squid-cache.org/SquidFaq/BugReporting#Detailed_Debug_Output
>
> You will not get ALL,9 this way, unfortunately, but ALL,7 might be enough.
>
>
I took the debug trace and both tcpdump captures, client-side and
server-side (towards the internet).
Since the debug log is quite large, I put all three files on a
web server. Here they are:

Squid debug log (ALL,7):

http://zhegan.in/files/squid/cache.log.debug

tcpdump client-side capture (windump -s 0 -w
squid-stuck-reference-client.pcap -ni 1):

http://zhegan.in/files/squid/squid-stuck-reference-client.pcap

tcpdump server-side capture, towards the outer world, empty - obviously,
server didn't send anything outside (tcpdump -s 0 -w
squid-stuck-reference-server.pcap -ni vlan23 host 217.112.35.75):

http://zhegan.in/files/squid/squid-stuck-reference-server.pcap

Test sequence:

client - 192.168.3.215
squid - 192.168.3.1:3128
URL - http://www.ru/index.html

I requested http://www.ru/index.html from Chrome on the client machine. No
other applications were requesting this URL at the time (however, the
capture does contain a lot of traffic, including HTTP sessions). Then I
waited about a minute (the loader in Chrome kept spinning), stopped both
captures, and then aborted the request. The aborted request probably made it
into the squid log.

Eugene.



Re: [squid-users] connections from particular users sometimes get stuck

2016-09-27 Thread Eugene M. Zheganin

Hi.

On 28.09.2016 0:29, Alex Rousskov wrote:

Since you can reproduce this, I suggest collecting ALL,9 log for the
stuck master transaction:

http://wiki.squid-cache.org/SquidFaq/BugReporting#Debugging_a_single_transaction

If collecting a debugging trace is impossible for some reason, then
collect the corresponding TCP packets on the Squid to origin server link
and post actual packets (not screenshots of packet summaries) from both
connections. The debugging trace will most likely have the answer. The
packet trace might have the answer.

You may need to change user credentials for this test or after posting
the details requested above.

Well... I cannot reproduce it on demand; it has been reproducing itself 
for almost a year, at unpredictable moments. Collecting a debug trace 
isn't hard by itself, but I'm pretty sure a restart will clear this 
state on the current machine (I guess squid didn't get a way to increase 
the debug level on the fly? at least I'm not aware of one, so I would 
need to restart it to set ALL,9), and I'd have to run with ALL,9 for 
quite some time, which is obviously not good for production, because it 
creates an enormous amount of logging in the cache log. So I will post 
the tcpdump containing both exchanges, and if things are still unclear 
I'll think about running in debug mode.


Thanks.
Eugene.


[squid-users] connections from particular users sometimes get stuck

2016-09-27 Thread Eugene M. Zheganin

Hi.

I have a weird problem. I run squid 3.5.19 on FreeBSD/amd64, 
with about 300 active users, lots of authentication, external helpers 
(yeah, this is usually the place where one starts posting configs, but let 
me get to the point), and everything basically works just fine, but 
sometimes one particular user (I don't know, maybe it's one particular 
machine or some other entity) starts having trouble. The usual trouble 
looks like this:


- around 299 users keep working and authenticating just fine

- one particular user starts experiencing stuck connections: his 
browser requests a web page, it starts to load, and then some random 
object on it blocks indefinitely.

- this happens every time on one machine, for the time being. The 
machine is constant for a given occurrence of the issue, until it's gone. 
Then it's another machine, and I cannot figure out the pattern.

- the machine may be locked in this malfunctioning state for days. The 
state is usually cleared by a squid restart, or it may clear itself.

- after a month or so the issue appears on another machine, and 
persists on the new machine for quite some time.


At the L3 level this looks simple: the browser requests an object, gets a 
407 answer, replies with the proper credentials set, and then the 
connection goes into a keepalive state indefinitely: squid and the browser 
send keepalives to each other, but nothing happens besides the 
keepalives. The user sees the spinning loader on a browser tab, and some 
content inside the tab, depending on how many objects the browser has 
received. At the same time, new connections to squid from this machine 
open just fine, and basic connectivity is normal for both squid and the 
troubled machine. Furthermore, I'm sure this problem isn't caused by 
bottlenecks on the squid machine: if it were, all users would eventually 
have this problem, not just one. Nor is it a bottleneck on the user 
machine: while the browser is stuck, other applications work fine. If I 
switch the proxy to a backup squid (on another server), the machine is 
able to browse the internet.


I really need to solve this, but I have no idea where to start. The 
error log shows nothing suspicious.


A wireshark screen where the issue is isolated to one particular 
connection can be found here: 
https://gyazo.com/fdec1d9d7c31a75afc7d4676abb83d15 (it's really a simple 
picture: the TCP connection is established, then GET -> 407 -> GET and a 
bunch of keepalives; not rocket science).


Any ideas ?

Thanks.

Eugene.



Re: [squid-users] large downloads got interrupted

2016-08-11 Thread Eugene M. Zheganin
Hi.

On 30.06.16 17:19, Amos Jeffries wrote:
>
> Okay, I wasn't suggesting you post it here. It's likely to be too big for
> that.
>
> I would look for the messages about the large object, and its FD. Then,
> for anything about why it was closed by Squid. Not sure what that would be
> at this point though.
> There are some scripts in the Squid sources scripts/ directory that
> might help wade through the log. Or the grep tool.
>
>
I enabled log level 2 for all squid facilities, but so far I haven't
figured out any pattern from the log. The only thing I noticed is that for
a large download, the Recv-Q value reported by netstat for the particular
squid-to-server connection is extremely high, and so is the Send-Q value
for the connection from squid to the client. I don't know if this is a
cause or a consequence, but from my point of view it may indicate that
buffers are overflowing for some reason; I think this may in turn cause
RSTs and connection closing - am I right? I still don't know whether it's
a squid fault or maybe a local OS misconfiguration.

Eugene.


Re: [squid-users] NOTICE: Authentication not applicable on intercepted requests.

2016-06-30 Thread Eugene M. Zheganin

Hi.

On 30.06.2016 17:04, Amos Jeffries wrote:

On 30/06/2016 9:21 p.m., Eugene M. Zheganin wrote:

Hi,

Could this message be moved to loglevel 2 instead of 1?
I think this message makes up 95% of the logs on intercept-enabled
caches with authentication.

At least some switch would be nice, to turn this off instead of
switching the whole facility to 0.

This message only happens when your proxy is misconfigured.

Well, it may be.


Use a myportname ACL to prevent Squid attempting impossible things like
authentication on intercepted traffic.


Sorry, but I still don't get the idea. I have one port on which squid is 
configured to intercept traffic, and another for plain proxy requests. How 
do I tell squid not to authenticate anyone on the intercept port? From what 
I know, squid will start the authentication sequence as soon as it encounters 
an authentication-related ACL in the ACL list for the given request. Do I have 
to add a myportname ACL with the non-intercepting port to all occurrences of 
the auth-enabled ACLs, or maybe there's a simpler way?
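Something like this, maybe? A sketch only, with hypothetical port and ACL names (explicitport, interceptport, authed), to make sure I understand the suggestion: name the ports, then authorize the intercept port by IP and stop before any auth ACL can be evaluated for it.

===Cut===
# Sketch with hypothetical names; name= labels the listening ports
# so the myportname ACL can tell them apart
http_port 3128 name=explicitport
http_port 3129 intercept name=interceptport

acl from_intercept myportname interceptport
acl localnet src 192.168.0.0/16

# intercepted traffic: allow by source IP only, then deny, so the
# auth ACLs below are never evaluated for these requests
http_access allow from_intercept localnet
http_access deny from_intercept

# explicit port: authentication applies as before
acl authed proxy_auth REQUIRED
http_access allow authed
http_access deny all
===Cut===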

Thanks.
Eugene.



[squid-users] NOTICE: Authentication not applicable on intercepted requests.

2016-06-30 Thread Eugene M. Zheganin
Hi,

Could this message be moved to loglevel 2 instead of 1?
I think this message makes up 95% of the logs on intercept-enabled
caches with authentication.

At least some switch would be nice, to turn this off instead of
switching the whole facility to 0.

Thanks.
Eugene.


Re: [squid-users] large downloads got interrupted

2016-06-29 Thread Eugene M. Zheganin
Hi.

On 29.06.16 05:26, Amos Jeffries wrote:
> On 28/06/2016 8:46 p.m., Eugene M. Zheganin wrote:
>> Hi,
>>
>> recently I started seeing a problem where large downloads via squid
>> are often interrupted. I tried to investigate it but, to be honest, got
>> nowhere. However, I took two tcpdump captures, and it seems to me that
>> for some reason squid sends a FIN to its client and correctly closes the
>> connection (wget reports that the connection is closed), while at the
>> same time it sends what looks like tons of RSTs towards the server. No
>> errors are reported in the logs (at least at the ALL,1 loglevel).
>>
> It sounds like a timeout or such has happened inside Squid. We'd need to
> see your squid.conf to see if that was it.
Well... it's quite long, since it's a large production site. I guess you
don't need the acl and auth lines, so without them it's as follows
(nothing secret in them, they are just really numerous):

===Cut===
# cat /usr/local/etc/squid/squid.conf | grep -v http_access | grep -v
acl | grep -v http_reply_access | egrep -v '^#' | egrep -v '^$'
visible_hostname proxy1.domain1.com
debug_options ALL,1
http_port [fd00::301]:3128 ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
http_port [fd00::316]:3128 ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
http_port 192.168.3.1:3128 ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
http_port 127.0.0.1:3128 ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
http_port 127.0.0.1:3129 intercept
http_port [::1]:3128
http_port [::1]:3129 intercept
https_port 127.0.0.1:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
https_port [::1]:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
icp_port 3130
dns_v4_first off
shutdown_lifetime 5 seconds
workers 2
no_cache deny QUERY
cache_mem 256 MB
cache_dir rock /var/squid/cache 1100
cache_access_log stdio:/var/log/squid/access.fifo
cache_log /var/log/squid/cache.log
cache_store_log none
cache_peer localhost parent 8118 0 no-query default
auth_param negotiate
program /usr/local/libexec/squid/negotiate_wrapper_auth --ntlm
/usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --kerberos
/usr/local
authenticate_ip_ttl 60 seconds
positive_dns_ttl 20 minutes
negative_dns_ttl 120 seconds
negative_ttl 30 seconds
pid_filename /var/run/squid/squid.pid
ftp_user anonymous
ftp_passive on
ipcache_size 16384
fqdncache_size 16384
redirect_children 10
refresh_pattern -i . 0 20% 4320
sslcrtd_program /usr/local/libexec/squid/ssl_crtd -s /var/squid/ssl -M 4MB
sslcrtd_children 15
auth_param negotiate program
/usr/local/libexec/squid/negotiate_wrapper_auth --ntlm
/usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --kerberos
/usr/local/libexec/squid/negotiate_kerberos_auth -s
HTTP/proxy1.domain1@domain.com
auth_param negotiate children 40 startup=5 idle=5
auth_param negotiate keep_alive on
auth_param ntlm program /usr/local/bin/ntlm_auth -d 0
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 60
auth_param basic program /usr/local/libexec/squid/basic_pam_auth
auth_param basic children 35 startup=5 idle=2
auth_param basic realm Squid
auth_param basic credentialsttl 10 minute
auth_param basic casesensitive off
authenticate_ttl 10 minute
authenticate_cache_garbage_interval 10 minute
snmp_access allow fromintranet
snmp_access allow localhost
snmp_access deny all
snmp_port 340${process_number}
snmp_incoming_address 192.168.3.22
tcp_outgoing_address 192.168.3.22 intranet
tcp_outgoing_address fd00::316 intranet6
tcp_outgoing_address 86.109.196.3 ad-megafon
redirector_access deny localhost
redirector_access deny SSL_ports
icp_access allow children
icp_access deny all
always_direct deny fuck-the-system-dstdomain
always_direct deny fuck-the-system
always_direct deny onion
always_direct allow all
never_direct allow fuck-the-system-dstdomain
never_direct allow fuck-the-system
never_direct allow onion
never_direct deny all
miss_access allow manager
miss_access allow all
cache_mgr e...@domain1.com
cache_effective_user squid
cache_effective_group squid
sslproxy_cafile /usr/local/etc/squid/certs/ca.pem
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER
deny_info ERR_NO_BANNER banner
deny_info ERR_UNAUTHORIZED unauthori

[squid-users] large downloads got interrupted

2016-06-28 Thread Eugene M. Zheganin
Hi,

recently I started seeing a problem where large downloads via squid are
often interrupted. I tried to investigate it but, to be honest, got
nowhere. However, I took two tcpdump captures, and it seems to me that
for some reason squid sends a FIN to its client and correctly closes the
connection (wget reports that the connection is closed), while at the
same time it sends what looks like tons of RSTs towards the server. No
errors are reported in the logs (at least at the ALL,1 loglevel).

Screenshots of wireshark interpreting the tcpdump capture are here:

Squid (2a00:7540:1::4) to target server (2a02:6b8::183):

http://static.enaza.ru/userupload/gyazo/e5b976bf6f3d0cb666f0d504de04.png
(here you can see that all of a sudden squid starts sending RSTs, which
continue a long way down the screen; then the connection re-establishes,
not on the screenshot taken)

Squid (fd00::301) to client (fd00::73d):

http://static.enaza.ru/userupload/gyazo/ccf4982593dc6047edb5d734160e.png
(here you can see the client connection being closed)

I'm open to any idea that will help me get rid of this issue.

Thanks.
Eugene.


Re: [squid-users] ext_kerberos_ldap_group_acl and Kerberos cache

2016-05-18 Thread Eugene M. Zheganin

Hi.

On 18.05.2016 16:29, Amos Jeffries wrote:


I don't know what you mean by "the main tree". But the feature you
describe does not qualify for adding to the 3.5 production release
series. The only features added to a series after it goes to "stable"
production releases are ones which resolve non-feature bugs or can be
done without affecting existing installations.
Well, you can treat the kerberos cache in the kerberos group ACL helper as 
both. It doesn't affect current installations in any way: it neither 
changes the configuration syntax nor adds new caveats. At the same 
time it can be considered a bugfix: as far as I know it was 
supposed to exist in the helper from the start, but was misimplemented. 
All it adds is the cache: it caches the credentials up to their TTL, 
which is defined by the ticket (not by squid, not by the helper).

By changing the helper behaviour in all cases this clearly affects
existing installations. So only qualifies for including into the next
series, which is Squid-4.

It doesn't change helper behaviour, it fixes it.

Eugene.


[squid-users] ext_kerberos_ldap_group_acl and Kerberos cache

2016-05-17 Thread Eugene M. Zheganin
Hi.

I've just checked the squid 3.5.19 sources and discovered the
following fact, which is really disturbing:
(first some explanation)
Markus Moeller, the author of the external kerberos group helper,
implemented a Kerberos credentials cache in the
ext_kerberos_ldap_group_acl helper back in 2014. The idea is to
cache the credentials inside the helper instance, so that when it
encounters a request with a user id and group that are already in the
cache, the helper can skip the kerberos initialization sequence for that
set of credentials. This cached version is many times faster than the
original one, which doesn't use the cache.

(now the disturbing fact)
Surprisingly, the cached version hasn't made it into the main tree for
the past 2 years.
Could this situation be corrected, please?

Thanks.
Eugene.


[squid-users] squid, SMP and authentication and service regression over time

2016-05-16 Thread Eugene M. Zheganin

Hi.

I've been using squid for a long time, to authenticate/authorize users 
accessing the Internet with LDAP in a Windows corporate environment 
(Basic/NTLM/GSS-SPNEGO), and recently (several months ago) I had to 
switch to the SMP scheme, because a single process sometimes started to 
eat a whole core, thus bottlenecking the users on it. The situation with 
CPU efficiency improved; however, I discovered several issues. The first 
one I was aware of: non-functional SNMP (since there's no solution, I 
just had to sacrifice it). But the second one is more disturbing.

I discovered that after some uptime (usually a couple of weeks, a month 
at best) squid somehow degrades and stops authorizing users. I have 
about 600 active users on my biggest site (without SNMP I'm not sure 
how many simultaneous users I get), and it usually starts like this: 
someone (it starts with one person) complains that he has lost his 
access to the internet - not entirely, no. At first the access is very 
slow, and the victim has to wait several minutes for a page to load. 
Others are unaffected at this point. From time to time the victim is 
eventually able to load one or two tabs in the browser, but by the end 
of the day the proxy becomes unusable for him, and my support has to 
step in. Then it gets escalated to me.

First I was debugging various kerberos stuff, NTLM, the victim's machine 
domain membership and so on. But today I managed to figure out that all 
I have to do is restart squid, yeah (sounds silly, but I don't like to 
restart things; like in the "IT Crowd" TV series, it's a kind of 
last-resort measure for when I'm desperate). If I'm stubborn enough to 
continue the investigation, soon I get 2 users complaining, then 3, then 
more. During previous outages I eventually restarted squid anyway (to 
change the domain controller in the kerberos config, if I was blaming 
that; to disable the external Kerberos/LDAP helper connection pooling, 
if I was blaming that) - so each time there was a candidate to blame. 
But this time I just decided to restart squid, since I had started to 
think it was the main reason, et voila.

I should also mention that I've been running this AAA scheme in squid 
for years and didn't have this issue previously. I also have like a 
dozen other squids running the same (very similar) config - the same AAA 
stuff, Basic/NTLM/GSS-SPNEGO, the same AD group checking, only for 
different group memberships - and none of them has this issue. I'm 
thinking SMP is involved, really.


I realize this is a poor problem report: "Something degrades, I restart 
squid, please help, I think it's SMP-related". But the thing is, I 
don't know where to start narrowing this down. If anyone has a 
good idea, please let me know.


Thanks.
Eugene.


Re: [squid-users] Assign multiple IP Address to squid

2015-12-29 Thread Eugene M. Zheganin
Hi.

On 29.12.2015 17:05, Reet Vyas wrote:
> Hi
>
> I have a working squid 3.5.4 configuration with ssl bump. I am using this
> squid machine as a router; it has an external IP and a leased
> line connection, and with the leased line I have 10 extra IP addresses. I
> want to NAT those external IPs to local IPs on the same network, like we do
> in our router, so that I can assign those IPs to my machines running
> webservers.
>
> Please suggest a way to configure it.
>
This has nothing to do with squid.

Eugene.


Re: [squid-users] sslBump adventures in enterprise production environment

2015-12-28 Thread Eugene M. Zheganin
Hi.

On 16.11.2015 0:39, Alex Rousskov wrote:
> On 11/15/2015 12:03 PM, Eugene M. Zheganin wrote:
>> It's not even a HTTPS, its a tunneled HTTP CONNECT. But
>> squid for some reason thinks there shoudl be a HTTPS inside.
> Hello Eugene,
>
>  Squid currently supports two kinds of CONNECT tunnels:
>
> 1. A regular opaque tunnel, as intended by HTTP specifications.
>
> 2. An inspected tunnel containing SSL/TLS-encrypted HTTP traffic.
>
> Opaque tunnels are the default. Optional SslBump-related features allow
> the admin to designate admin-selected CONNECT tunnels for HTTPS
> inspections (of various depth). This distinction explains why and when
> Squid expects "HTTPS inside".
>
> There is currently no decent support for inspecting CONNECT tunnels
> other than SSL/TLS-encrypted HTTP (i.e., HTTPS) tunnels.
>
> Splicing a tunnel at SslBump step1 converts a to-be-inspected tunnel
> into an opaque tunnel before inspection starts.
>
> The recently added on_unsupported_protocol directive can automatically
> convert being-inspected non-HTTPS tunnels into opaque ones in some
> common cases, but it needs more work to cover more cases.
>
>
> AFAICT, you assume that "splicing" turns off all tunnel inspection. This
> is correct for step1 (as I mentioned above). This is not correct for
> other steps because they happen after some inspection already took
> place. Inspection errors that on_unsupported_protocol cannot yet handle,
> may result in connection termination and other problems.
>
>
> If Squid behavior contradicts some of the above rules, it is probably a
> bug we should fix. Otherwise, it is likely to be a missing feature.
>
>
> Finally, if Squid kills your ICQ (non-HTTPS) client tunnels, you need to
> figure out whether those connections are inspected (i.e., go beyond
> SslBump step1). If they are inspected, then this is not a Squid bug but
> a misconfiguration (unless the ACL code itself is buggy!). If they are
> not inspected, then it is probably a Squid bug. I do not have enough
> information to distinguish between those cases, but I hope that others
> on the mailing list can guide you towards a resolution given the above
> information.
>

Thanks a lot for this explicit explanation.
I managed to solve the problem with ICQ using the information above, no
matter which port, 5190 or 443, it's tunneled through. Even
"on_unsupported_protocol" isn't needed, so the whole thing works just
fine on 3.5.x. In case someone needs this too, here is my
config part:

#
# Minimum ICQ configuration,
# works for QIP 2012 and squid/ssl_bump; login.icq.com port should be
# either 443 or 5190
#

acl icq dstdomain login.icq.com
acl icqport port 443
acl icqport port 5190

# mail.ru network where ICQ servers reside
acl icqip dst 178.237.16.0/20

acl step1 at_step SslBump1

#
# an http_access part is needed; not shown here since it's ordinary, for
# qip or web clients to work
#

# this should be somewhere near the top of the ssl_bump directives piece
ssl_bump splice step1 icq
ssl_bump splice step1 icqip icqport
[...other ssl_bump directives...]

Thanks.
Eugene.


[squid-users] sslBump, squid in transparent mode

2015-12-28 Thread Eugene M. Zheganin
Hi.

I'm still trying to figure out why I get a certificate generated for the IP
address instead of the hostname when the HTTPS traffic is intercepted by an
sslBump-enabled squid. I'm redirecting the traffic with these pf rdr rules:

rdr on $iifs inet proto tcp from 192.168.0.0/16 to ! port 443
-> 127.0.0.1 port 3131
rdr on vpn inet proto tcp from 192.168.0.0/16 to ! port 443 ->
127.0.0.1 port 3131

and the port is configured as follows:

https_port 127.0.0.1:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
https_port [::1]:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem

This way I'm getting a warning in the browser (https://youtube.com is opened
in the example below):

===Cut===
youtube.com uses an invalid security certificate.

The certificate is not trusted because the issuer certificate is unknown.
The server might not be sending the appropriate intermediate certificates.
An additional root certificate may need to be imported.
The certificate is only valid for 173.194.71.91

(Error code: sec_error_unknown_issuer)
===Cut===

And the tcpdump capture clearly shows that the client browser did send an SNI:

https://gyazo.com/c1ba348fb4ee56c6c30f3e22ff9877f8
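
One shape I'm considering (a sketch only - I haven't verified it fixes the IP-named certificates, but peeking at step1 is supposed to let squid parse the client hello, SNI included, before it mints the fake certificate):

===Cut===
# Sketch, untested: peek at step1 so the SNI is read before bumping
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
===Cut===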

I'll appreciate any help.

Thanks.
Eugene.


[squid-users] squid authentication mechs

2015-12-16 Thread Eugene M. Zheganin

Hi.

Is there a way to limit the set of authentication mechanisms offered 
(to a client browser) based on the particular squid IP the browser 
connects to, e.g. via an http_port configuration directive? For 
example, this is needed when one needs to allow non-domain machines to 
pass authentication/authorization checks through a squid with 
full-fledged AD integration (or Kerberos/NTLM, anyway); otherwise they 
are unable to. Once they could, for example using Chrome < 41, but 
since 41 Chrome has removed all the options to exclude certain 
authentication methods from its CLI (I still wonder what 
genius proposed this).


If not (and I believe there isn't), could this message be treated as a 
feature request?


Thanks.
Eugene.


Re: [squid-users] Fwd: NTLM LDAP authentication problem

2015-11-16 Thread Eugene M. Zheganin
Hi,

On 16.11.2015 19:51, Matej Kotras wrote:
> Thank you for your response; this is my first try with Squid, and I'm
> fairly new to Linux.
> I don't understand the differences between basic/ntlm/gss-spnego
> auths at all, so I will do my homework and read about them. I've managed to
> get this working after a few weeks of the "trial and error" method (I know,
> I know, but I've got to start somewhere, right?) following multiple guides.
>
The usual issue with all those copy/paste tutorials is that they tend to
teach how to do everything at once, instead of moving from simple things
to more difficult ones. The order of increasing difficulty is the
following:

- adding Basic authentication; all authenticated users are authorized to
use the proxy
- adding NTLM authentication; all authenticated users are authorized to
use the proxy
- adding group-based authorization; authenticated users are authorized
to use the proxy based on group membership, using a simple helper like
squid_group_ldap
- adding GSS-SPNEGO authentication
- adding a full-fledged GSS-SPNEGO group authorization helper.

You can try my article,
http://squidquotas.hq.norma.perm.ru/squid-auth.shtml. Though it's not
perfect and still lacks the last two steps, at least it tries to follow
that approach.

Eugene.


Re: [squid-users] Active Directory Authentication failing at the browser

2015-11-16 Thread Eugene M. Zheganin
Hi.

On 16.11.2015 18:46, dol...@ihcrc.org wrote:
>
> Squid Version:  Squid 3.4.8
>
> OS Version:  Debian 8 (8.2)
>
>  
>
> I have installed Squid on a server using Debian 8 and seem to have the
> basics operating, at least when I start the squid service, I have am
> no longer getting any error messages.  At this time, the goal is to
> authenticate users from Active Directory and log the user and the
> websites they are accessing.
>
>  
>
> The problem I am having is, when I set Firefox 35.0.1 on my Windows 7
> workstation to use the Squid proxy, I am getting the log in page
> (image below).
>
>  
>
> [image001.png: screenshot of the login page]
>
>  
>
> I have tried entering my user name in various form EXAMPLE/USERID,
> USERID, EXAMPLE/ADMINISTRATOR, ADMINISTRATOR and the password and I
> have not had a successful at this time.
>
>  
>
> I have attached the squid.conf, smb.conf, krb5.conf, and access.log
> files for review.  If you would like to see the cache.log file, please
> contact me as the file is too large to include in this post.
>
>  
>
>
I suggest you first get Basic and NTLM working with Active Directory,
and only then, having these 2 schemes working, move to the
GSS-SPNEGO scheme. This is because the GSS-SPNEGO scheme is overcomplicated
and difficult to debug, as it uses lots of components and can fall apart
easily at any stage.

Eugene.


Re: [squid-users] Fwd: NTLM LDAP authentication problem

2015-11-16 Thread Eugene M. Zheganin
On 16.11.2015 14:29, Matej Kotras wrote:
> Hi guys
>
> I've managed squid to work with AD, and authorize users based on what
> AD group they are in. I use Squid-Analyzer for doing reports from
> access.log. I've found 2 anomalies with authorization so far. In
> access log, I see that user is authorized based on his PC name(not
> desired) and not on the user account name. I've just enabled debugging
> on negotiate wrapper, so I will monitor these logs also.
>
> But in the meantime, have you got any idea why could this happen ?
>
> *PC NAME AUTH:*
> 1447562119.348  0 10.13.34.31 TCP_DENIED/407 3834 CONNECT
> clients2.google.com:443  -
> HIER_NONE/- text/html
> 1447562119.374  2 10.13.34.31 TCP_DENIED/407 4094 CONNECT
> clients2.google.com:443  -
> HIER_NONE/- text/html
> 1447562239.350 119976 10.13.34.31 TCP_MISS/200   4200 CONNECT
> clients2.google.com:443  icz800639-03$
> HIER_DIRECT/173.194.116.231  -
>
> *USER NAME AUTH:*
> 1447562039.176  0 10.13.34.31 TCP_DENIED/407 3850 CONNECT
> lyncwebext.inventec.com:443  -
> HIER_NONE/- text/html
> 1447562039.215 27 10.13.34.31 TCP_DENIED/407 4110 CONNECT
> lyncwebext.inventec.com:443  -
> HIER_NONE/- text/html
> 1447562041.118   2702 10.13.34.31 TCP_MISS/200   6213 CONNECT
> lyncwebext.inventec.com:443 
> icz800639 HIER_DIRECT/10.8.100.165  -
Doesn't seem like you have a working GSS-SPNEGO scheme, unless your logs
have username fields with the realm set, which you didn't post here.

>
>
> *Squid.conf*
> #
> #Enable KERBEROS authentication#
> #
>
> auth_param negotiate program /usr/local/bin/negotiate_wrapper -d
> --ntlm /usr/bin/ntlm_auth --diagnostics
> --helper-protocol=squid-2.5-ntlmssp --domain=ICZ --kerberos
> /usr/lib64/squid/negotiate_kerberos_auth -s GSS_C_NO_NAME
> auth_param negotiate children 20 startup=0 idle=1
> auth_param negotiate keep_alive off
>
>
> #
> #Enable NTLM authentication#
> #
>
> #auth_param ntlm program /usr/bin/ntlm_auth --diagnostics
> --helper-protocol=squid-2.5-ntlmssp --domain=ICZ
> #auth_param ntlm children 10
> #auth_param ntlm keep_alive off
So you disabled explicit NTLM authentication. That's bad: so far
you only have GSS-SPNEGO with failover to NTLM.
>
>
> #
> # ENABLE LDAP AUTH#
> #
>
> auth_param basic program /usr/lib64/squid/basic_ldap_auth -R -b
> "dc=icz,dc=inventec" -D squid@icz.inventec -W /etc/squid/ldappass.txt
> -f sAMAccountName=%s -h icz-dc-1.icz.inventec
> auth_param basic children 10
> auth_param basic realm Please enter user name to access the internet
> auth_param basic credentialsttl 1 hour
This is pure basic.
>
> external_acl_type ldap_group ttl=3600 negative_ttl=0 children-max=50
> children-startup=10  %LOGIN /usr/lib64/squid/ext_wbinfo_group_acl
>
The part with http_access is missing, so it's hard to tell why you have
TCP_MISS for machine accounts.

Eugene.


Re: [squid-users] sslBump adventures in enterprise production environment

2015-11-15 Thread Eugene M. Zheganin
Hi.

On 16.11.2015 00:14, Yuri Voinov wrote:

> It's common knowledge. Squid is unable to pass an unknown protocol on
> the standard port. Consequently, the ability to proxy this protocol does
> not exist.
>
> If it was simply a tunneling ... It is not https. And not just
> HTTP-over-443. This is more complicated and very marginal protocol.
>
I'm really sorry to tell you that, but you are perfectly wrong. These
non-HTTPS tunnels have been working for years. And this isn't HTTPS
because of:

# openssl s_client -connect login.icq.com:443
CONNECTED(0003)
34379270680:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown
protocol:/usr/src/secure/lib/libssl/../../../crypto/openssl/ssl/s23_clnt.c:782:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 297 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---

Eugene.


Re: [squid-users] sslBump adventures in enterprise production environment

2015-11-15 Thread Eugene M. Zheganin
Hi.

On 15.11.2015 0:43, Walter H. wrote:
> On 13.11.2015 14:53, Yuri Voinov wrote:
>> There is no solution for ICQ with Squid now.
>>
>> You can only bypass proxying for ICQ clients.
> from where do the ICQ clients get the trusted root certificates?
> maybe this is the problem, that e.g. the squid CA cert is only 
> installed in FF
> and nowhere else ...
From nowhere. It's not even HTTPS, it's tunneled HTTP CONNECT. But
squid for some reason thinks there should be HTTPS inside.

Eugene.


Re: [squid-users] sslBump adventures in enterprise production environment

2015-11-14 Thread Eugene M. Zheganin
Hi.

On 13.11.2015 18:53, Yuri Voinov wrote:
> There is no solution for ICQ with Squid now.
>
> You can only bypass proxying for ICQ clients.
>
There is: I can disable sslBump, and I did it already. It doesn't look
production-ready anyway.

Eugene.


[squid-users] sslBump adventures in enterprise production environment

2015-11-13 Thread Eugene M. Zheganin
Hi.

Today I discovered that a bunch of old legacy ICQ clients that some
people still use have lost the ability to use HTTP CONNECT tunneling with
sslBump. No matter what I tried to allow direct splicing for them,
nothing worked:

- arranging them by dst ACL, and splicing that ACL
- arranging them by ssl::server_name ACL, and splicing it

So I had to turn off sslBumping. It looks like it somehow interferes with
HTTP CONNECT even when splicing.
Last version of sslBump part in the config was looking like that:


acl icqssl ssl::server_name login.icq.com
acl icqssl ssl::server_name go.icq.com
acl icqssl ssl::server_name ars.oscar.aol.com
acl icqssl ssl::server_name webim.qip.ru
acl icqssl ssl::server_name cb.icq.com
acl icqssl ssl::server_name wlogin.icq.com
acl icqssl ssl::server_name storage.qip.ru
acl icqssl ssl::server_name new.qip.ru

acl icqlogin dst 178.237.20.58
acl icqlogin dst 178.237.19.84
acl icqlogin dst 94.100.186.23

ssl_bump splice children
ssl_bump splice sbol
ssl_bump splice icqlogin
ssl_bump splice icqssl icqport
ssl_bump splice icqproxy icqport

ssl_bump bump interceptedssl

ssl_bump peek step1
ssl_bump bump unauthorized
ssl_bump bump entertainmentssl
ssl_bump splice all

I'm not sure that ICQ clients use TLS, but in my previous experience
they were configured to use proxy, and to connect through proxy to the
login.icq.com host on port 443.
Sample log for unsuccessful attempts:

1447400500.311     21 192.168.2.117 TAG_NONE/503 0 CONNECT login.icq.com:443 solodnikova_k HIER_NONE/- -
1447400560.301     23 192.168.2.117 TAG_NONE/503 0 CONNECT login.icq.com:443 solodnikova_k HIER_NONE/- -
1447400624.832    359 192.168.2.117 TCP_TUNNEL/200 0 CONNECT login.icq.com:443 solodnikova_k HIER_DIRECT/178.237.20.58 -
1447400631.038    108 192.168.2.117 TCP_TUNNEL/200 0 CONNECT login.icq.com:443 solodnikova_k HIER_DIRECT/178.237.20.58 -
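For reference, log lines like the ones above can be split into fields with a short script (a sketch only, assuming squid's default native access.log format: timestamp, elapsed ms, client, result/status, bytes, method, URL, user, hierarchy/peer, content type):

```python
# Parse a squid native-format access.log line into named fields.
def parse_access_line(line):
    f = line.split()
    result, status = f[3].split("/")   # e.g. "TAG_NONE/503"
    hier, peer = f[8].split("/")       # e.g. "HIER_DIRECT/178.237.20.58"
    return {
        "time": float(f[0]), "elapsed_ms": int(f[1]), "client": f[2],
        "result": result, "status": int(status), "bytes": int(f[4]),
        "method": f[5], "url": f[6], "user": f[7],
        "hierarchy": hier, "peer": peer,
    }

entry = parse_access_line(
    "1447400500.311 21 192.168.2.117 TAG_NONE/503 0 CONNECT "
    "login.icq.com:443 solodnikova_k HIER_NONE/- -"
)
print(entry["result"], entry["status"], entry["url"])  # TAG_NONE 503 login.icq.com:443
```

The TAG_NONE/503 entries are the failed attempts; the TCP_TUNNEL/200 ones are the successful CONNECT tunnels.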

Thanks.
Eugene.


[squid-users] sslBump and intercept

2015-11-12 Thread Eugene M. Zheganin
Hi.

This question is unrelated directly to my yesterday's one.

I decided to intercept the HTTPS traffic on my production squids from
proxy-unware clients to be able to tell them there's a proxy and they
should configure one.
So I'm doing it like (the process of forwarding using FreeBSD pf is not
shown here):

===Cut===
acl unauthorized proxy_auth stringthatwillnevermatch
acl step1 at_step sslBump1

https_port 127.0.0.1:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
https_port [::1]:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem

ssl_bump peek step1
ssl_bump bump unauthorized
ssl_bump splice all
===Cut===

Almost everything works, except that squid for some reason generates
certificates for IP addresses, not names, so the browser shows a warning
about the certificate being valid only for the IP and not the name.

Am I doing something wrong ?

Thanks.
Eugene.


Re: [squid-users] sslBump and intercept

2015-11-12 Thread Eugene M. Zheganin
Hi.

On 12.11.2015 17:04, Steve Hill wrote:
>
> proxy_auth won't work on intercepted traffic and will therefore always
> return false, so as far as I can see you're always going to peek and
> then splice.  i.e. you're never going to bump, so squid should never
> be generating a forged certificate.
Yup, I know that, and my fault is that I forgot to mention it, and to
explain that this sample config contains parts that handle user
authentication. So, yes, I'm aware that intercepted SSL traffic will
look anonymous to squid, and that's the idea.
>
> You say that Squid _is_ generating a forged certificate, so something
> else is going on to cause it to do that.  My first guess is that Squid
> is generating some kind of error page due to some http_access rules
> which you haven't listed, and is therefore bumping.
This is exactly what's happening.
>
> Two possibilities spring to mind for the certificate being for the IP
> address rather than for the name:
> 1. The browser isn't bothering to include an SNI in the SSL handshake
> (use wireshark to confirm).  In this case, Squid has no way to know
> what name to stick in the cert, so will just use the IP instead.
> 2. The bumping is happening in step 1 instead of step 2 for some
> reason.  See:  http://bugs.squid-cache.org/show_bug.cgi?id=4327
Thanks, I'll try to investigate.

Eugene.


Re: [squid-users] sslBump and intercept

2015-11-12 Thread Eugene M. Zheganin
Hi,

On 12.11.2015 17:48, Yuri Voinov wrote:

> More probably this is bug
> http://bugs.squid-cache.org/show_bug.cgi?id=4188.
>
The page says it's fixed, and the fix applied to 3.5. If it's already in
3.5.11, then this isn't it - I just tested 3.5.11, and the behavior is the same.

Thanks.
Eugene.


[squid-users] sslBump somehow interferes with authentication

2015-11-11 Thread Eugene M. Zheganin
Hi.

I have configured simple ssl peek/splice on squid 3.5.10 for some simple
cases, but in my production, where configs are complicated, it doesn't
work as expected - somehow it interferes with authentication.

Suppose we have a config like:

===Cut===
acl freetime time MTWHF 18:00-24:00

acl foo dst 192.168.0.0/16
acl bar dstdomain .bar.tld

acl users proxy_auth steve
acl users proxy_auth mike
acl users proxy_auth bob

acl unauthorized proxy_auth stringthatwillnevermatch

acl block dstdomain "block.acl"
acl blockssl ssl::server_name "block.acl"

http_access allow foo
http_access allow bar

http_access deny unauthorized

http_access allow blockssl users freetime
http_access allow block users freetime
http_access deny blockssl users
http_access deny block users
http_access allow users
http_access deny all
===Cut===

This is a part of an actually working config (with some local names
modified, just to make it easier to read). The config is straightforward:
- foo and bar are allowed without authentication
- then an explicit authentication occurs ('http_access deny
unauthorized' looks redundant, and yes, the config will work without
it, but the thing is that this ACL 'unauthorized' is used to display a
specific deny_info page for users who fail to authorize).
- it allows browsing some usually blocked sites during certain time
periods called 'freetime'.
- this config is sslBump-ready, a 'blockssl' ACL exists, which matches
site names on SNI.

Now I'm adding sslBump:

===Cut===
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump blockssl
ssl_bump splice all
===Cut===

As soon as I add sslBump, everything that is bumped starts being
blocked by 'http_access deny unauthorized' (everything that's spliced
works as intended). And I completely cannot understand why. Yes, I can
remove this line, but that way I lose the deny_info for specific cases
when someone fails to authorize, and besides - it was
working without sslBump, right ? Please help me understand this and solve the issue.

Thanks.
Eugene.


Re: [squid-users] sslBump somehow interferes with authentication

2015-11-11 Thread Eugene M. Zheganin
Hi.

On 11.11.2015 23:44, Amos Jeffries wrote:
> Proxy-authentication cannot be performed on MITM'd traffic. That
> includes SSL-bump decrypted messages.
>
> However, unlike the other methods SSL-bump CONNECT wrapper messages in
> explicit-proxy traffic can be authenticated and their credentials
> inherited by the messages decrypted. Squid should be doing that. But
> again cannot do it for the fake/synthetic ones it generates itself on
> intercepted port 443 traffic.
>
> So the question becomes, why are foo and bar ACLs not matching?
>  http_access rules are applied separately to the CONNECT wrapper message
> and to the decrypted non-CONNECT HTTP message(s).
>
>
Yeah, completely my fault - I forgot to tell what URL the user is trying to
browse and what matches when.
Once again.

===Cut===
acl freetime time MTWHF 18:00-24:00

acl foo dst 192.168.0.0/16
acl bar dstdomain .bar.tld

acl users proxy_auth steve
acl users proxy_auth mike
acl users proxy_auth bob

acl unauthorized proxy_auth stringthatwillnevermatch

acl block dstdomain "block.acl"
acl blockssl ssl::server_name "block.acl"

http_access allow foo
http_access allow bar

http_access deny unauthorized

http_access allow blockssl users freetime
http_access allow block users freetime
http_access deny blockssl users
http_access deny block users
http_access allow users
http_access deny all
===Cut===

So, the user starts their browser and opens the URL 'https://someurl'.
This URL matches both the 'block' and 'blockssl' ACLs: one I created for,
you know... usual matching, and one for sslBump, since dstdomain ACLs
cannot work there. The main idea here is to actually show some
information to the user when he's trying to visit a blocked site via
TLS and that site isn't allowed - because all the user sees in such a
situation are various browser-dependent error pages, like "Proxy server
is refusing connections" (Firefox) or some other brief error (I cannot
remember it exactly) in Chrome - so the user thinks it's a technical error
and starts bothering tech support. Can this goal be achieved for a
configuration with user authentication ? The 'foo' and 'bar' ACLs don't
match 'someurl' because they were created to match traffic that is
allowed to all proxy users, regardless of their authentication; I
listed these ACLs here to give a proper representation of my ACL structure
- there's a part without authentication, and there's a part with.
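The "show the user a real page instead of a browser error" goal is what squid's deny_info directive is for; a hedged sketch of how it could be wired up (the error template name is a placeholder for a custom page dropped into squid's errors directory):

===Cut===
acl unauthorized proxy_auth stringthatwillnevermatch

# Serve a custom error page (instead of the browser's generic
# "proxy refusing connections" message) when authorization fails
deny_info ERR_NEED_AUTH unauthorized
http_access deny unauthorized
===Cut===

For bumped HTTPS traffic squid can render such a page inside the decrypted connection; for spliced or plain-CONNECT traffic there is no decrypted channel to put the page on, which is the tension the thread is discussing.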

Thanks.
Eugene.


Re: [squid-users] sslBump somehow interferes with authentication

2015-11-11 Thread Eugene M. Zheganin
Hi.

On 12.11.2015 0:06, Eugene M. Zheganin wrote:
> So, the user starts it's browser and opens the URL 'https://someurl'.
> And this URL matches both 'block' and 'blockssl' ACLs, one I created for
> you know... usual matching and one - for sslBump, since dstdomain ACLs
> cannot work there. So, the main idea here is to actually show some
> information to the user, when he's trying to visit some blocked site via
> TLS and that site isn't allowed - because all the user sees in such
> situation are various browser-depending error pages, like "Proxy server
> refusing connections" (Firefox) or some other brief error (cannot
> remember it exactly)  in Chrome - so user thinks it's technical error
> and starts bothering tech support. Can this goal be achieved for a
> configuration with user authentication ? ACL 'foo' and ACL 'bar' don't
> match 'somesite' because they are created to match some traffic that is
> allowed to all proxy users, regardless of their authentication, and I
> listed these ACLs here to give proper representation of my ACL structure
> - there's a part without authentication, and there's a part with.
>
Follow-up: the traffic isn't intercepted proxy traffic; it's traffic
between a browser and a proxy configured in that browser. If I remove
the line

http_access deny unauthorized

I receive sslBumped traffic from the sites that match the
'blockssl' ACL, and this traffic goes through the authentication chain.
The question is why the line above makes the whole scheme fall apart.

Thanks.
Eugene.


[squid-users] mmap() in squid

2015-03-27 Thread Eugene M. Zheganin
Hi.

Squid has been using the mmap() call since 3.4.x, and on FreeBSD mmap()
has one specific flag - MAP_NOSYNC, which prevents dirtied pages from
being flushed to disk:

MAP_NOSYNC   Causes data dirtied via this VM map to be flushed to
             physical media only when necessary (usually by the
             pager) rather than gratuitously.  Typically this
             prevents the update daemons from flushing pages dirtied
             through such maps and thus allows efficient sharing of
             memory across unassociated processes using a file-
             backed shared memory map.  Without this option any VM
             pages you dirty may be flushed to disk every so often
             (every 30-60 seconds usually) which can create
             performance problems if you do not need that to occur
             (such as when you are using shared file-backed mmap
             regions for IPC purposes).  Note that VM/file system
             coherency is maintained whether you use MAP_NOSYNC or
             not.  This option is not portable across UNIX platforms
             (yet), though some may implement the same behavior by
             default.

             WARNING!  Extending a file with ftruncate(2), thus
             creating a big hole, and then filling the hole by
             modifying a shared mmap() can lead to severe file
             fragmentation.  In order to avoid such fragmentation
             you should always pre-allocate the file's backing store
             by write()ing zero's into the newly extended area prior
             to modifying the area via your mmap().  The
             fragmentation problem is especially sensitive to
             MAP_NOSYNC pages, because pages may be flushed to disk
             in a totally random order.

             The same applies when using MAP_NOSYNC to implement a
             file-based shared memory store.  It is recommended that
             you create the backing store by write()ing zero's to
             the backing file rather than ftruncate()ing it.  You
             can test file fragmentation by observing the KB/t
             (kilobytes per transfer) results from an ``iostat 1''
             while reading a large file sequentially, e.g. using
             ``dd if=filename of=/dev/null bs=32k''.

             The fsync(2) system call will flush all dirty data and
             metadata associated with a file, including dirty NOSYNC
             VM data, to physical media.  The sync(8) command and
             sync(2) system call generally do not flush dirty NOSYNC
             VM data.  The msync(2) system call is obsolete since
             BSD implements a coherent file system buffer cache.
             However, it may be used to associate dirty VM pages
             with file system buffers and thus cause them to be
             flushed to physical media sooner rather than later.

Last year there was an issue with PostgreSQL, which also started to use
mmap() in its 9.3 release, and it had a huge performance regression on
FreeBSD. One of the measures taken to fight this regression (but not the
only one) was adding MAP_NOSYNC in the PostgreSQL port. So I decided to
do the same for my local squid. I created a patch where both
occurrences of mmap() were supplied with this flag. I have been using
squid 3.4.x patched this way for about half a year. A couple of days ago
I sent this patch to the FreeBSD ports system, and the squid port
maintainer asked me if I'm sure squid on FreeBSD needs this. Since I'm
not a skilled programmer (though I think using mmap() with MAP_NOSYNC is
a good thing), I decided to ask here - is this flag worth bothering
with, since squid isn't a database engine ?
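The pre-allocate-then-map pattern the man page recommends can be sketched in Python's mmap module (a sketch only: MAP_NOSYNC exists only on FreeBSD, so the flag is looked up defensively and falls back to 0 elsewhere; the file is a throwaway temp file):

```python
import mmap
import os
import tempfile

# MAP_NOSYNC is FreeBSD-specific; on other platforms fall back to no flag.
NOSYNC = getattr(mmap, "MAP_NOSYNC", 0)

fd, path = tempfile.mkstemp()
try:
    size = 1 << 20
    # Pre-allocate the backing store by writing zeros, not by
    # ftruncate()ing a hole, to avoid the fragmentation the man page warns about.
    os.write(fd, b"\0" * size)
    m = mmap.mmap(fd, size, flags=mmap.MAP_SHARED | NOSYNC)
    m[0:5] = b"hello"        # dirty a page; with MAP_NOSYNC it is flushed
                             # lazily rather than every 30-60 seconds
    first5 = bytes(m[0:5])
    m.flush()                # explicit msync() when on-disk coherency is wanted
    m.close()
finally:
    os.close(fd)
    os.unlink(path)
```

Whether the flag helps squid depends on how hot its mmap()ed regions are; for a mostly-IPC or cache-index mapping the man page's reasoning suggests it should reduce gratuitous writeback.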

Thanks.


Re: [squid-users] squid SMP and SNMP

2015-03-19 Thread Eugene M. Zheganin
Hi.

On 18.03.2015 19:02, Amos Jeffries wrote:
 Process kid3 (SMP coordinator) is attempting to respond.

 Since you configured:
   snmp_port 340${process_number}

 and the coordinator is process number 3 I think it will be using port
 3403 for that response.


Nobody is listening on these ports:

[root@taiga:local/squidquotas]# netstat -an | grep udp | grep
340  
udp46  0  0 *.3401 *.*   
udp46  0  0 *.3402 *.*   
[root@taiga:local/squidquotas]#

Eugene.


[squid-users] squid SMP and SNMP

2015-03-18 Thread Eugene M. Zheganin
Hi.

I'm gathering statistics from squid using SNMP. When I use a single
process everything is fine, but when it comes to multiple workers, SNMP
doesn't work - I get a timeout when trying to read data with snmpwalk.
I'm using the following tweak:

snmp_port 340${process_number}

Both workers do bind to ports 3401 and 3402, but then I get this
timeout.
Does anyone have a success story about squid SMP and SNMP ?

I wrote a message about this problem a year or so ago, back on 3.3.x,
but the situation hasn't changed.
Should I report this as a bug ?

Thanks.
Eugene.





Re: [squid-users] squid SMP and SNMP

2015-03-18 Thread Eugene M. Zheganin
Hi.

On 18.03.2015 16:04, Amos Jeffries wrote:

 SNMP is on the list of SMP-aware features.

 The worker receiving the SNMP request will contact other workers to
 fetch the data for producing the SNMP response. This may take some time.

Yeah, but it seems like that doesn't happen. Plus, I get errors
in cache.log on each attempt:

[root@taiga:etc/squid]# snmpwalk localhost:3402
1.3.6.1.4.1.3495.1.2.1.0 
Timeout: No Response from localhost:3402

and in the log:

2015/03/18 18:48:26 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:46682: (22) Invalid argument
2015/03/18 18:48:49 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument
2015/03/18 18:48:50 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument
2015/03/18 18:48:51 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument
2015/03/18 18:48:52 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument
2015/03/18 18:48:53 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument
2015/03/18 18:48:54 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument

Thanks.
Eugene.



Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64

2015-01-15 Thread Eugene M. Zheganin
Hi.

On 12.01.2015 19:06, Amos Jeffries wrote:

 I am confident that those types of leaks do not exist at al in Squid 3.4.

 These rounds of mmory exhaustion problems are caused by pseudo-leaks,
 where Squid incorrectly holds onto memory (has not forgotten it
 though) far longer than it should be.

Could you please clarify for me what the "Long Strings" pool is and how
I can manage its size ?
After start the largest-consuming pool is the mem_node one, but it
usually stops growing after a few days (somewhere around the
cache_mem border - I don't know if that's the cause, or just a coincidence).
"Long Strings", however, keeps rising, and after some days
it becomes the largest one.

I'm using the following settings:
cache_mem 512 MB
cache_dir diskd /var/squid/cache 1100 16 256

after few days SNMP reports that the clients amount is around 1700.

Thanks.
Eugene.


Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64

2015-01-12 Thread Eugene M. Zheganin
Hi.

On 09.01.2015 06:12, Amos Jeffries wrote:
 Grand total:
   = 9.5 GB of RAM just for Squid.

 .. then there is whatever memory the helper programs, other software
 on the server and operating system all need.

I now also have a strong impression that squid is leaking memory.
Now that 3.4.x is able to handle hundreds of users over several hours,
I notice that its memory usage is constantly increasing. My patience
always ends at around 1.5 GB of memory usage, where server memory
starts to be exhausted (squid runs alongside lots of other stuff) and I
restart it. This is happening on exactly the same config 3.3.13 was
running, so ... I have cache_mem set to 512 MB, diskd, a medium-sized
cache_dir and lots of users. Has something changed drastically in 3.4.x
compared to 3.3.13, or is it, as it seems, a memory leak ?

Thanks.
Eugene.


Re: [squid-users] 3.3.x - 3.4.x: huge performance regression

2015-01-12 Thread Eugene M. Zheganin
Hi.

On 12.01.2015 16:03, Eugene M. Zheganin wrote:
 Hi.

 Just to point this out in the correct thread - to all the people who
 replied here - Steve Hill has provided a patch for a 3.4.x that solves
 the most performance degradation issue. 3.4.x is still performing poorly
 comparing to the 3.3.x branch, but I guess this is due to major code
 changes. As of now my largest production installation (1.2K clients,
 300-400 active usernames) is running 3.4.9.
... and massively leaking, yeah.

Eugene.


Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64

2015-01-12 Thread Eugene M. Zheganin
Hi.

On 12.01.2015 16:41, Eugene M. Zheganin wrote:
 I'm now also having a strong impression that squid is leaking memory.
 Now, when 3.4.x is able to handle hundreds of users during several
 hours I notice that it's memory usage is constantly increasing. My
 patience always ends at the point of 1.5 Gigs memory usage, where
 server memory starts to be exhausted (squid is running with lots of
 other stuff) and I restart it. This is happening on exactly the same
 config the 3.3.13 was running, so ... I have cache_mem set to 512
 megs, diskd, medium sized cache_dir and lots of users. Is something
 changed drastically in 3.4.x comparing to the 3.3.13, or is it, as it
 seems, a memory leak ?
Squid 3.4 on FreeBSD is by default compiled with the
--enable-debug-cbdata option, and when the 45th log selector is at its
default of 1, cache.log fills with CBData memory-leak alarms. Here
is the list for the last 40 minutes, with the occurrence count per location:

104136 Checklist.cc:160
81438 Checklist.cc:187
177226 Checklist.cc:320
84861 Checklist.cc:45
89151 CommCalls.cc:21
22069 DiskIO/DiskDaemon/DiskdIOStrategy.cc:353
 120 UserRequest.cc:166
  29 UserRequest.cc:172
55814 clientStream.cc:235
5966 client_side_reply.cc:93
4516 client_side_request.cc:134
5568 dns_internal.cc:1131
4859 dns_internal.cc:1140
  86 event.cc:90
7770 external_acl.cc:1426
1548 fqdncache.cc:340
7467 helper.cc:856
39905 ipcache.cc:353
11880 store.cc:1611
181959 store_client.cc:154
256951 store_client.cc:337
6835 ufs/UFSStoreState.cc:333

are those all false alarms ?
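For what it's worth, a per-location tally like the one above can be produced from cache.log with a few lines of Python (a sketch only: the sample log lines below are invented placeholders, since the exact wording of the 45th selector's leak reports isn't shown here; only the `file.cc:line` token matters to the counting):

```python
import collections
import re

# Hypothetical cache.log excerpt; real lines differ, but each leak
# report carries a "file.cc:line" source location.
sample = """\
2015/01/12 10:00:01 kid1| cbdata leak at Checklist.cc:160
2015/01/12 10:00:02 kid1| cbdata leak at store_client.cc:337
2015/01/12 10:00:03 kid1| cbdata leak at store_client.cc:337
"""

# Count occurrences of each source location token.
counts = collections.Counter(
    m.group(1) for m in re.finditer(r"(\w[\w/]*\.cc:\d+)", sample)
)
for loc, n in counts.most_common():
    print(n, loc)
# → 2 store_client.cc:337
#   1 Checklist.cc:160
```

Sorting by count (rather than by file name) makes the worst offenders, here store_client.cc and Checklist.cc, stand out immediately.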

Thanks.
Eugene.


[squid-users] 3.3.x - 3.4.x: huge performance regression

2014-10-22 Thread Eugene M. Zheganin
Hi.

I had been using the 3.4.x branch for quite some time, and it was working
just fine on small installations.
Yesterday I upgraded my largest cache installation from 3.3.13 to 3.4.8
(same config: diskd, NTLM/GSS-SPNEGO auth helpers, external helpers).
This morning I noticed that squid spikes to 100% CPU and serves almost
no traffic. A restart didn't help: squid serves pages while continuing
to consume CPU, load grows until it's at 100%, and after some time my
users are unable to open any page from the Internet. This is sad, so I
downgraded to 3.3.13; CPU consumption went back to 20-35% and
everything is back to normal.

In order to understand what's happening, I did some dtrace profiling to
see what squid is busy with, on the assumption that the same number of
connect()/socket() syscalls should correspond to the same amount of
squid work - but the results were totally different for the same number
of such syscalls.

Anyone care to comment ?

Thanks.
Eugene.


[squid-users] assertion failed: lm_request->waiting

2014-10-21 Thread Eugene M. Zheganin

Hi.

Is someone getting this too ? I get this with sad regularity:

# grep lm_request /var/log/squid/cache.log
2014/10/06 14:32:12 kid1| assertion failed: UserRequest.cc:229: lm_request->waiting
2014/10/07 16:06:10 kid1| assertion failed: UserRequest.cc:229: lm_request->waiting
2014/10/16 16:28:48 kid1| assertion failed: UserRequest.cc:229: lm_request->waiting
2014/10/17 14:32:34 kid1| assertion failed: UserRequest.cc:229: lm_request->waiting
2014/10/17 14:33:09 kid1| assertion failed: UserRequest.cc:229: lm_request->waiting
2014/10/21 12:25:18 kid1| assertion failed: UserRequest.cc:229: lm_request->waiting


each time squid crashes.
I filed http://bugs.squid-cache.org/show_bug.cgi?id=4104, but no one
got interested.
I admit this happens on only one of many installations. Perhaps
someone knows a workaround?


Thanks.
Eugene.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid, Kerberos and FireFox (Was: Re: leaking memory in squid 3.4.8 and 3.4.7.)

2014-10-19 Thread Eugene M. Zheganin
Hi.

On 19.10.2014 13:32, Victor Sudakov wrote:

 Hopefully I can interest our Windows admin to enable Kerberos event
 logging per KB262177.

 But for the present I have found an ugly workaround. In squid's keytab, I
 created another principal called 'squiduser' with the same hex key and
 kvno as that of the principal 'HTTP/proxy.sibptus.transneft.ru.'

(This may sound like a dumb question, but anyway) Did you initially map
any AD user to the SPN with a hostname that clients know your proxy under ?
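(For reference, such a mapping is typically created on a DC with setspn; the account and host names below are examples, not taken from this thread:

===Cut===
setspn -A HTTP/proxy.example.com squiduser
setspn -L squiduser
===Cut===

where -A adds the SPN to the account and -L lists the SPNs currently mapped to it.)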

Eugene.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid, Kerberos and FireFox (Was: Re: leaking memory in squid 3.4.8 and 3.4.7.)

2014-10-16 Thread Eugene M. Zheganin
Hi.

On 17.10.2014 11:02, Victor Sudakov wrote:

 I am attaching a traffic dump.

 Please look at Frame No. 36, where a ticket is requested for
 HTTP/proxy.sibptus.transneft.ru, and then at Frame No. 39, where
 the ticket is granted, but for the wrong principal name.

The thing is, a valid exchange should not and does not contain the
KRB5KRB_AP_ERR_MODIFIED error, and yours does. This indicates something
is wrong between these two hosts (as I understand it, 10.14.134.4 is a
Windows Server, and .122 is a workstation). You need to investigate on
your DC what's happening. These are probably etype errors (maybe not).
If your DC is really w2k (not w2k3 or w2k8) and the workstation is of a
different generation, this can happen. Also, lots of howtos spread
around the Internet make an engineer believe that he should create the
keytab with only one encryption type for squid, instead of creating the
keytab with all of the ciphers available on the DC. This can also lead
to complicated situations.
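For example, a keytab with all DC-supported encryption types can be created on the DC roughly like this (a sketch; the account, host and realm names are examples, not taken from this thread):

===Cut===
ktpass -princ HTTP/proxy.example.com@EXAMPLE.COM -mapuser EXAMPLE\squiduser ^
       -crypto ALL -ptype KRB5_NT_PRINCIPAL -pass * -out squid.keytab
===Cut===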

There's also a decent article there:
http://blogs.technet.com/b/askds/archive/2008/06/11/kerberos-authentication-problems-service-principal-name-spn-issues-part-3.aspx

Could help you as it did help me one day.

Eugene.
//
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL bumping (again)

2014-07-13 Thread Eugene M. Zheganin
Hi.

On 12.07.2014 14:16, Amos Jeffries wrote:

 Sounds like http://bugs.squid-cache.org/show_bug.cgi?id=3966

 PS. 3.3 series will never get this fix. It is on the TODO list for a
 3.4.7 porting attempt, but the volatile SSL-bump related infrastructure
 in Squid in recent years makes it unlikely to succeed.


Thanks, I applied the patch, but for some reason neither the original
patch nor the modified one works for me (and I'm sure I did apply the
patch, because the additional code is present in gadgets.cc). I'm
still getting the same error.

Can someone confirm that the patch still fixes this ?

Eugene.


[squid-users] SSL bumping (again)

2014-07-12 Thread Eugene M. Zheganin
Hi.

Squid-3.3.11
FreeBSD 10.0-STABLE

I've set up SSL bumping in order to deal with file uploading (actually
to block file uploading for certain groups of users) via HTTPS.
It works just fine for most HTTPS-enabled sites, but with some
Google sites I have a problem: browsers (FF for example) display an
error, "www.youtube.com uses an invalid security certificate. The
certificate does not come from a trusted source. (Error code:
sec_error_inadequate_key_usage)". Chrome also displays an error, but in
Chrome's case it's indistinguishable from the usual error shown when the
CA certificate is not in the trust list. This happens on most of the
Google .com domains, but not on all of them; for example google.ru
opens just fine over HTTPS. I've definitely installed the custom squid
CA certificate into the browser's trust list, but anyway there's no "I
understand the risk" button, so this error is about something else. Are
those some Google tricks, perhaps caused by extensions like SPDY, or is
this about my setup?

Thanks.
Eugene.


Re: [squid-users] Re: squid_kerb_group (again)

2013-12-29 Thread Eugene M. Zheganin
Hi.

On 29.12.2013 18:59, Markus Moeller wrote:
 I setup a virtual machine with freebsd 10-RC3

 $ uname -a
 FreeBSD freebsd 10.0-RC3 FreeBSD 10.0-RC3 #0 r259778: Mon Dec 23
 23:27:58 UTC 2013
 r...@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

 the attached packages and compiled squid trunk.

 Although squid does not fully compiled (SQUID_BSDNET_INCLUDES needs to
 change include order) and fails in the base code with

 [...]
^

 the helpers compile fine and when I run ext_kerberos_ldap_group_acl 
 it works with the MEMORY cache.

Yeah, I agree - I myself have a bunch of squids on FreeBSD
10.0-WHATEVER, and most of them work fine, except this one.

I think the openldap libraries lack error-handling output; basically
they emit two kinds of messages, "I did this" and "Oops, something has
gone wrong". I spent several hours googling my problem and came to the
conclusion above. I will ask in their mailing list.

Thanks.
Eugene.


Re: [squid-users] Re: squid_kerb_group (again)

2013-12-26 Thread Eugene M. Zheganin
Hi.

On 24.12.2013 20:39, Markus Moeller wrote:


  Could you tell me which OS , kerberos, ldap and sasl version you use ?


It's

FreeBSD 10.0-BETA2 amd64
Heimdal Kerberos 1.5.2
cyrus-sasl 2.1.26
openldap-sasl-client-2.4.38

last two are from FreeBSD ports, -sasl- means it's compiled with
--with-cyrus-sasl.

Thanks.
Eugene.


[squid-users] squid_kerb_group (again)

2013-12-23 Thread Eugene M. Zheganin
Hi.

squid 3.3.11
FreeBSD 10.x

I'm fighting squid_kerb_group; sometimes it can get tricky. Here's
where I'm stuck:

I'm launching this:

===Cut===
KRB5_KTNAME=/usr/local/etc/squid/squid.keytab
export KRB5_KTNAME

/usr/local/libexec/squid/ext_kerberos_ldap_group_acl \
-a \
-m 16 \
-i \
-ddd \
-D NORMA.COM \
-b cn=Users,dc=norma,dc=com \
-S hq-gc.norma@norma.com \
-u proxy2 \
-p XXX \
-N soft...@norma.com \
-g Internet Users - Proxy2@
===Cut===

and getting this:

===Cut===
./squid_kerb_group.sh
kerberos_ldap_group.cc(338): pid=90134 :2013/12/24 01:32:25|
kerberos_ldap_group: INFO: Starting version 1.3.0sq
support_group.cc(372): pid=90134 :2013/12/24 01:32:25|
kerberos_ldap_group: INFO: Group list Internet Users - Proxy2@
support_group.cc(437): pid=90134 :2013/12/24 01:32:25|
kerberos_ldap_group: INFO: Group Internet Users - Proxy2  Domain
support_netbios.cc(74): pid=90134 :2013/12/24 01:32:25|
kerberos_ldap_group: DEBUG: Netbios list soft...@norma.com
support_netbios.cc(147): pid=90134 :2013/12/24 01:32:25|
kerberos_ldap_group: DEBUG: Netbios name SOFTLAB  Domain NORMA.COM
support_lserver.cc(73): pid=90134 :2013/12/24 01:32:25|
kerberos_ldap_group: DEBUG: ldap server list hq-gc.norma@norma.com
support_lserver.cc(137): pid=90134 :2013/12/24 01:32:25|
kerberos_ldap_group: DEBUG: ldap server hq-gc.norma.com Domain NORMA.COM
emz
kerberos_ldap_group.cc(430): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: INFO: Got User: emz set default domain: NORMA.COM
kerberos_ldap_group.cc(435): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: INFO: Got User: emz Domain: NORMA.COM
support_member.cc(55): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: User domain loop: group@domain Internet
Users - Proxy2@
support_member.cc(83): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Default domain loop: group@domain Internet
Users - Proxy2@
support_member.cc(85): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Found group@domain Internet Users - Proxy2@
support_ldap.cc(810): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Setup Kerberos credential cache
support_krb5.cc(91): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Get default keytab file name
support_krb5.cc(97): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Got default keytab file name
/usr/local/etc/squid/squid.keytab
support_krb5.cc(111): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Get principal name from keytab
/usr/local/etc/squid/squid.keytab
support_krb5.cc(119): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Keytab entry has realm name: NORMA.COM
support_krb5.cc(133): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Found principal name:
HTTP/proxy2.norma@norma.com
support_krb5.cc(174): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Set credential cache to MEMORY:squid_ldap_90134
support_krb5.cc(267): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Got principal name
HTTP/proxy2.norma@norma.com
support_krb5.cc(311): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Stored credentials
support_ldap.cc(839): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Initialise ldap connection
support_ldap.cc(845): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Canonicalise ldap server name for domain
NORMA.COM
support_resolv.cc(245): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Ldap server loop: lserver@domain
hq-gc.norma@norma.com
support_resolv.cc(247): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Found lserver@domain hq-gc.norma@norma.com
support_resolv.cc(441): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Sorted ldap server names for domain NORMA.COM:
support_resolv.cc(443): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Host: hq-gc.norma.com Port: -1 Priority: -2
Weight: -2
support_ldap.cc(854): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Setting up connection to ldap server
hq-gc.norma.com:389
support_ldap.cc(865): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Bind to ldap server with SASL/GSSAPI
support_sasl.cc(274): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: ERROR: ldap_sasl_interactive_bind_s error: Local error
support_ldap.cc(869): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: ERROR: Error while binding to ldap server with
SASL/GSSAPI: Local error
support_ldap.cc(891): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Error during initialisation of ldap
connection: No error: 0
support_ldap.cc(951): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: DEBUG: Error during initialisation of ldap
connection: No error: 0
support_member.cc(96): pid=90134 :2013/12/24 01:32:26|
kerberos_ldap_group: INFO: User emz is not member of group@domain
Internet Users - Proxy2@
support_member.cc(111): 

Re: [squid-users] Re: squid_kerb_group (again)

2013-12-23 Thread Eugene M. Zheganin
Hi.

On 23.12.2013 22:39, Markus Moeller wrote:
 Hi Eugene,

  I can only guess that the memory cache is not working.  Can you
 change in include/autoconf.h

 /* Define if kerberos has MEMORY: cache support */
 #define HAVE_KRB5_MEMORY_CACHE 1

 to

 #undef HAVE_KRB5_MEMORY_CACHE

 and recompile ?

Wow, it started to work, thanks.
Will I have some performance penalties from this ?

What could I do to investigate this issue and probably fix it ?

Thanks.
Eugene.


Re: [squid-users] Re: squid 3.3.x and machines that aren't domain members

2013-12-10 Thread Eugene M. Zheganin
Hi.

On 23.07.2013 07:50, Brendan Kearney wrote:

 your home machine, is it part of the domain that the work proxies are
 authenticating against?  You would never be able to retrieve a kerberos
 ticket from the domain to use for authentication to the proxies if your
 home machine is not part of the domain.  as for ntlm, you should be able
 to use the proxies if they force auth and support ntlm.  you may need to
 configure your browser to use integrated windows authentication.  IE vs
 Firefox have different configs that have to be setup for each to work
 with proxies that force authentication.

 you may need to turn integrated windows authentication off too, in the
 case where you are not part of the domain.  otherwise the user "bob"
 with a password of "blah" in the workgroup kitchen PC will be
 presenting his creds to the proxies and will never be allowed to browse.

 from the errors, it seems that no ticket is presented by your client.  i
 dont see anything about ntlm.  you may have fallen into the valid
 failure scenario, where the proxy and browser both support and agree to
 NEGOTIATE / Kerberos auth, but your client cannot supply valid
 credentials (in the form of a kerberos ticket), and therefore you are
 not authenticated and not allowed to surf.  you do not fall through to
 the next auth type supported because the agreed upon auth method
 returned an appropriate failure.

 to get past that, and use an alternate auth method, such as ntlm, you
 need to configure your browser to not use kerberos auth.  again, IE and
 Firefox will do be different in how you configure that.

So, about this problem.

Does anyone have a working method of authorizing Windows browsers on
such a proxy ? I can easily install another one, just for machines that
haven't joined the domain, but I kinda dislike this solution. Occam's
razor, you know this stuff. Furthermore, I'm upgrading my old 3.2 squids
to 3.3, and I like the way 3.3 works, except for this thing.

I tried to play with FF's options, but didn't succeed: squid keeps
rejecting the authentication. I also have basic auth running, and if
Escape is pressed on the NTLM/SPNEGO popup, a basic auth popup appears,
but FF for some reason still tries to authenticate using NTLM/SPNEGO.
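For context, the relevant FF switches are of this kind; disabling Negotiate/NTLM toward proxies in about:config should make FF fall back to Basic (pref names are assumptions from memory, and defaults may vary between versions):

===Cut===
network.negotiate-auth.allow-proxies = false
network.automatic-ntlm-auth.allow-proxies = false
===Cut===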

Thanks.
Eugene.


Re: [squid-users] Re: Re: ext_kerberos_ldap_group_acl vs ext_ldap_group_acl

2013-09-04 Thread Eugene M. Zheganin
Hi.

On 04.09.2013 11:01, Markus Moeller wrote:

 Are you still interested in tcpdump captures you mentioned in previous
 letter ?


 Yes I would still like to see it.

(Looks like for some reason the mailing list tracker ate this message:
my relay says it was sent, but it doesn't appear in the mailing list,
probably marked as spam because of the URLs, so here's a copy I'm
sending to you directly.)

Here's the pcap capture:
http://unix.zhegan.in/files/ext_kerberos_ldap_group_acl.pcap
Console log for the exchange:
http://unix.zhegan.in/files/ext_kerberos_ldap_group_acl.txt

The capture contains network exchange from the following sequence of
actions:

- tcpdump was started as 'tcpdump -s 0 -w
ext_kerberos_ldap_group_acl.pcap -ni vlan1 port 53 or port 389 or port 88'
- helper was started in shell, arguments:

/usr/local/libexec/squid/ext_kerberos_ldap_group_acl \
-i \
-a \
-m 16 \
-d \
-D NORMA.COM \
-b cn=Users,dc=norma,dc=com \
-u proxy5-backup \
-p  \
-N soft...@norma.com \
-S hq-gc.norma@norma.com

- line 'emz Internet%20Users%20-%20Proxy1' was typed 5 times (5 'OK'
answers were received).
- helper was stopped
- tcpdump was stopped

From my point of view the initial pause and the subsequent ones are the
same.

Addresses:

192.168.13.3 - the address of the machine where the helper was run
192.168.3.45 - one of the AD controllers

The machine was idle for the duration of the experiment (this is a
backup gateway with VRRP, in inactive state).
This machine runs named, and its resolver uses it via the lo0
interface, so no DNS exchange can be seen: all the answers were
cached by named.
If seeing the DNS exchange is vital for understanding the pause, I can
probably recapture it using an external DNS server.

Eugene.


[squid-users] ext_kerberos_ldap_group_acl vs ext_ldap_group_acl

2013-09-03 Thread Eugene M. Zheganin
Hi.

I moved almost all of my squids to authentication schemes using
ext_kerberos_ldap_group_acl, and, though they do work OK, I'm not
entirely happy with their performance: ext_ldap_group_acl is lightning
fast compared to ext_kerberos_ldap_group_acl. The biggest lag
(around 0.5 sec) happens, from my observation, between these two lines:

[...]
support_krb5.cc(267): pid=53166 :2013/09/03 18:52:45|
kerberos_ldap_group: DEBUG: Got principal name
HTTP/proxy1.norma@norma.com
support_krb5.cc(311): pid=53166 :2013/09/03 18:52:46|
kerberos_ldap_group: DEBUG: Stored credentials
[...]

Is there any way to speed this up? I've reread the documentation, but
without result. Is there any cache that could be used?
I understand that the Kerberos group helper is way more complicated than
the pure LDAP one, but still, having this pause on every group
membership check is sad.

Thanks.
Eugene.


Re: [squid-users] Re: ext_kerberos_ldap_group_acl vs ext_ldap_group_acl

2013-09-03 Thread Eugene M. Zheganin
Hi.

On 04.09.2013 01:42, Markus Moeller wrote:

  Do you work in a Windows environment with AD as KDC ?   I have a new
 method in my squid 3.4 patch (see squid dev list) which uses the Group
 Information MS is putting in the ticket. This would eliminate the ldap
 lookup completely.

I do. This is awesome ! Thanks a lot.

Are you still interested in tcpdump captures you mentioned in previous
letter ?

P.S. I found this message. Will the new helper protocol recognize nested
groups ?

Eugene.


Re: [squid-users] FreeBSD

2013-08-16 Thread Eugene M. Zheganin
Hi.

On 15.08.2013 06:11, Amos Jeffries wrote:
 On 2013-08-14 13:21, Reginaldo Giovane Guaitanele wrote:
 I can't get tproxy working on FreeBSD.

 Does anyone know which rules to use in ipfw or pf to make it work with tproxy?

 TPROXY on FreeBSD requires Squid-3.4 beta version and IPFW tool
 (FreeBSD version of PF does not support TPROXY).
 I'm not sure which rules IPFW requires, TPROXY implementations tend to
 use the wording divert to describe it separately from NAT
 'redirect'/'forward' rules.

Does it? Actually, I was using it; I guess it's just that the FreeBSD 10
PF doesn't support tproxy right now.
I'm still using interception on my 8.x FreeBSDs running with PF.

Eugene.


[squid-users] samba4 and ntlm_auth - debugging

2013-08-11 Thread Eugene M. Zheganin

Hi.

Recently I stepped on a bug in the ntlm_auth helper from the samba4
suite. The guys in the samba team confirmed a possible bug with string
formatting and possibly a missing '\0' delimiter at some point and
requested more info, but at the same time they don't seem to be in the
mood to explain how to use ntlm_auth with the two protocols
squid-2.5-ntlmssp/ntlmssp-client-1. The only thing I understood is that
using these two protocols it's possible to debug the authentication
sequence, but I lack the documentation. I hope you guys can point me
in the right direction.


Thanks.
Eugene.


Re: [squid-users] Re: squid_kerb_ldap - Could not set LDAP_OPT_X_SASL_SECPROPS

2013-07-23 Thread Eugene M. Zheganin
Hi.

Lol, I saw this message today while fighting exactly the same trouble. I
guess Anton has already resolved this situation, but for future
reference I decided to leave a trail in the archives: this message can
be caused (it most probably is, though there can be other reasons) by
openldap-client being compiled without SASL. This can happen if
portupgrade/portmaster was used to install it, because the
net/openldapXX-sasl-client port is sort of a holy grail for portmaster
and similar tools (it is a metaport, and after the [x] SASL option was
removed from the main port this is now a total mess): always searched
for, but never found.


On 24.11.2012 18:31, Markus Moeller wrote:

 Hi

   I assume you use openldap on your freebsd build. Can you try from
 the command line:

 #  kinit -kt /usr/local/etc/HTTP.keytab
 HTTP/proxy.m-tisiz.local@M-TISIZ.LOCAL
 #  ldapsearch -d 999 -H ldap://pollux.m-tisiz.local:389 -Y GSSAPI -O
 maxssf=56 -b dc=M-TISIZ,dc=LOCAL -s sub (samaccountname=antec)

 and send me the output ?

 Regards
 Markus


 Подшивалов Антон supp...@murmansk-tisiz.ru wrote in message
 news:95378ca7accc17ee30ecf07a71c9b...@murmansk-tisiz.ru...
 Hello!
 I use:
 proxy# uname -a
 FreeBSD proxy.m-tisiz.local 8.3-RELEASE-p1 FreeBSD 8.3-RELEASE-p1 #0:
 Wed May 23 22:56:59 MSK 2012
 ant@freebsd.m-tisiz.local:/usr/obj/usr/src/sys/AnteC_kernel  i386

 I try to authenticate squid user by Active Directory. But have some
 error when use  squid_kerb_ldap external helper:

 proxy# /usr/local/libexec/squid/squid_kerb_ldap -d -D M-TISIZ.LOCAL
 -g inet_users@
 2012/11/23 16:04:20| squid_kerb_ldap: Starting version 1.2.2
 2012/11/23 16:04:20| squid_kerb_ldap: Group list inet_users@
 2012/11/23 16:04:20| squid_kerb_ldap: Group inet_users  Domain
 2012/11/23 16:04:20| squid_kerb_ldap: Netbios list NULL
 2012/11/23 16:04:20| squid_kerb_ldap: No netbios names defined.
 2012/11/23 16:04:20| squid_kerb_ldap: ldap server list NULL
 2012/11/23 16:04:20| squid_kerb_ldap: No ldap servers defined.
 antec
 2012/11/23 16:04:23| squid_kerb_ldap: Got User: antec set default
 domain: M-TISIZ.LOCAL
 2012/11/23 16:04:23| squid_kerb_ldap: Got User: antec Domain:
 M-TISIZ.LOCAL
 2012/11/23 16:04:23| squid_kerb_ldap: User domain loop: group@domain
 inet_users@
 2012/11/23 16:04:23| squid_kerb_ldap: Default domain loop:
 group@domain inet_users@
 2012/11/23 16:04:23| squid_kerb_ldap: Found group@domain inet_users@
 2012/11/23 16:04:23| squid_kerb_ldap: Setup Kerberos credential cache
 2012/11/23 16:04:23| squid_kerb_ldap: Get default keytab file name
 2012/11/23 16:04:23| squid_kerb_ldap: Got default keytab file name
 /usr/local/etc/HTTP.keytab
 2012/11/23 16:04:23| squid_kerb_ldap: Get principal name from keytab
 /usr/local/etc/HTTP.keytab
 2012/11/23 16:04:23| squid_kerb_ldap: Keytab entry has realm name:
 M-TISIZ.LOCAL
 2012/11/23 16:04:23| squid_kerb_ldap: Found principal name:
 HTTP/proxy.m-tisiz.local@M-TISIZ.LOCAL
 2012/11/23 16:04:23| squid_kerb_ldap: Set credential cache to
 MEMORY:squid_ldap_16670
 2012/11/23 16:04:23| squid_kerb_ldap: Got principal name
 HTTP/proxy.m-tisiz.local@M-TISIZ.LOCAL
 2012/11/23 16:04:23| squid_kerb_ldap: Stored credentials
 2012/11/23 16:04:23| squid_kerb_ldap: Initialise ldap connection
 2012/11/23 16:04:23| squid_kerb_ldap: Canonicalise ldap server name
 for domain M-TISIZ.LOCAL
 2012/11/23 16:04:23| squid_kerb_ldap: Resolved SRV
 _ldap._tcp.M-TISIZ.LOCAL record to altair.m-tisiz.local
 2012/11/23 16:04:23| squid_kerb_ldap: Resolved SRV
 _ldap._tcp.M-TISIZ.LOCAL record to pollux.m-tisiz.local
 2012/11/23 16:04:23| squid_kerb_ldap: Resolved address 1 of
 M-TISIZ.LOCAL to altair.m-tisiz.local
 2012/11/23 16:04:23| squid_kerb_ldap: Resolved address 2 of
 M-TISIZ.LOCAL to pollux.m-tisiz.local
 2012/11/23 16:04:23| squid_kerb_ldap: Resolved address 3 of
 M-TISIZ.LOCAL to altair.m-tisiz.local
 2012/11/23 16:04:23| squid_kerb_ldap: Resolved address 4 of
 M-TISIZ.LOCAL to pollux.m-tisiz.local
 2012/11/23 16:04:23| squid_kerb_ldap: Resolved address 5 of
 M-TISIZ.LOCAL to altair.m-tisiz.local
 2012/11/23 16:04:23| squid_kerb_ldap: Resolved address 6 of
 M-TISIZ.LOCAL to pollux.m-tisiz.local
 2012/11/23 16:04:23| squid_kerb_ldap: Adding M-TISIZ.LOCAL to list
 2012/11/23 16:04:23| squid_kerb_ldap: Sorted ldap server names for
 domain M-TISIZ.LOCAL:
 2012/11/23 16:04:23| squid_kerb_ldap: Host: pollux.m-tisiz.local
 Port: 389 Priority: 0 Weight: 100
 2012/11/23 16:04:23| squid_kerb_ldap: Host: altair.m-tisiz.local
 Port: 389 Priority: 0 Weight: 100
 2012/11/23 16:04:23| squid_kerb_ldap: Host: M-TISIZ.LOCAL Port: -1
 Priority: -2 Weight: -2
 2012/11/23 16:04:23| squid_kerb_ldap: Setting up connection to ldap
 server pollux.m-tisiz.local:389
 2012/11/23 16:04:23| squid_kerb_ldap: Bind to ldap server with
 SASL/GSSAPI
 2012/11/23 16:04:23| squid_kerb_ldap: Could not set
 LDAP_OPT_X_SASL_SECPROPS: maxssf=56: Can't contact LDAP server
 2012/11/23 16:04:23| squid_kerb_ldap: Error while 

[squid-users] squid 3.3.x and machines that aren't domain members

2013-07-22 Thread Eugene M. Zheganin

Hi.

I'm still getting issues with squid 3.3.x. :) I don't want to misreport 
any of them and make the developers do extra work when a mailing-list 
answer would do, so I decided to ask here first.
(Once again: I use squid in a corporate AD environment, with lots of 
domain controllers, LDAP, all that stuff.) Everything is fine with 
domain members, and everything is fine with basic auth from various 
software running on those domain-member machines. But I have a home 
machine, and it seems there's no way of letting it through the VPNed 
proxies: they refuse to authenticate it. I tried an SPNEGO/NTLM proxy 
with the kerberos_ldap_group helper, and I tried a different proxy with 
NTLM auth and the good ol' squid_ldap_group helper. I tried Chrome/FF; 
they behave identically. On the SPNEGO/NTLM proxy I'm getting (lots of 
these):


===Cut===
2013/07/23 00:40:18| negotiate_wrapper: Got 'YR 
YIGeBgYrBgEFBQKggZMwgZCgGjAYBgorBgEEAYI3AgIeBgorBgEEAYI3AgIKonIEcE5FR09FWFRTAABgcN05UVacRCxcGu+HRovNodg9kxO5KzyBFBwm/EkKTwW6F/WWzC6Av1PmXT1oFIorlAAAYAEAAEVyfDIyRYtIv9kqa6BepAo=' 
from squid (length: 219).
2013/07/23 00:40:18| negotiate_wrapper: Decode 
'YIGeBgYrBgEFBQKggZMwgZCgGjAYBgorBgEEAYI3AgIeBgorBgEEAYI3AgIKonIEcE5FR09FWFRTAABgcN05UVacRCxcGu+HRovNodg9kxO5KzyBFBwm/EkKTwW6F/WWzC6Av1PmXT1oFIorlAAAYAEAAEVyfDIyRYtIv9kqa6BepAo=' 
(decoded length: 161).

2013/07/23 00:40:18| negotiate_wrapper: received Kerberos token
negotiate_kerberos_auth.cc(315): pid=95629 :2013/07/23 00:40:18| 
negotiate_kerberos_auth: DEBUG: Got 'YR 
YIGeBgYrBgEFBQKggZMwgZCgGjAYBgorBgEEAYI3AgIeBgorBgEEAYI3AgIKonIEcE5FR09FWFRTAABgcN05UVacRCxcGu+HRovNodg9kxO5KzyBFBwm/EkKTwW6F/WWzC6Av1PmXT1oFIorlAAAYAEAAEVyfDIyRYtIv9kqa6BepAo=' 
from squid (length: 219).
negotiate_kerberos_auth.cc(378): pid=95629 :2013/07/23 00:40:18| 
negotiate_kerberos_auth: DEBUG: Decode 
'YIGeBgYrBgEFBQKggZMwgZCgGjAYBgorBgEEAYI3AgIeBgorBgEEAYI3AgIKonIEcE5FR09FWFRTAABgcN05UVacRCxcGu+HRovNodg9kxO5KzyBFBwm/EkKTwW6F/WWzC6Av1PmXT1oFIorlAAAYAEAAEVyfDIyRYtIv9kqa6BepAo=' 
(decoded length: 161).
negotiate_kerberos_auth.cc(200): pid=95629 :2013/07/23 00:40:18| 
negotiate_kerberos_auth: ERROR: gss_acquire_cred() failed:  No 
credentials were supplied, or the credentials were unavailable or 
inaccessible.. unknown mech-code 0 for mech unknown
2013/07/23 00:40:18| negotiate_wrapper: Return 'BH gss_acquire_cred() 
failed:  No credentials were supplied, or the credentials were 
unavailable or inaccessible.. unknown mech-code 0 for mech unknown'
2013/07/23 00:40:18 kid1| ERROR: Negotiate Authentication validating 
user. Error returned 'BH gss_acquire_cred() failed:  No credentials were 
supplied, or the credentials were unavailable or inaccessible.. unknown 
mech-code 0 for mech unknown'

===Cut===

In Wireshark I see that NTLM/SPNEGO authentication is running and the 
client machine is sending authentication data back to the proxy, but 
for some reason squid doesn't consider it valid, so it just answers 
with 407.

Is this a bug, or, again, some misconfiguration?

Thanks.
Eugene.


[squid-users] squid 3.3.x, SPNEGO and hostnames

2013-07-19 Thread Eugene M. Zheganin
Hi.

I'm moving some of my caches to 3.3.x (from 3.1.x and 3.2.x).
I'm using SPNEGO on some (along with kerberos_ldap_group helper).
I noticed an important behaviour change comparing to the 3.1.x and 3.2.x:
- squid 3.3.x requires visible_hostname to be set to the Kerberos
ticket principal it uses for SPNEGO
- squid 3.3.x requires the hostname of the proxy to be set in the browser
- squid 3.3.x requires the hostname of the proxy in the browser to be
exactly the same as the Kerberos principal in the ticket

If any of these conditions isn't met, authentication fails. If they
are met, authentication works as in previous versions (my caches run
the same configs with minor alterations). But I find these requirements
a bit uncomfortable.
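Concretely, the alignment has to look something like this (all names below are examples, not from my setup):

===Cut===
# squid.conf
visible_hostname proxy.example.com

# keytab principal (as shown by klist -kt on the keytab)
HTTP/proxy.example.com@EXAMPLE.COM

# browser proxy setting: proxy.example.com:3128 (the hostname, not an IP)
===Cut===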

When SPNEGO isn't used, these requirements aren't needed.

Is this a feature or I'm hitting some bug ?

Thanks.
Eugene.


Re: [squid-users] squid 3.3.x, SPNEGO and hostnames

2013-07-19 Thread Eugene M. Zheganin
Hi.

On 19.07.2013 18:01, Amos Jeffries wrote:

 The change appearing between 3.2 to 3.3 would seem to eliminate
 HTTP/1.1 behaviour (in both Squid and Browser) and Squid code
 behaviour as a reason - there were no significant changes to Negotiate
 3.2-3.3, just between 3.1 and 3.2. Leaving changes to the helpers and
 their libraries as possible causes. Or changes in your specific
 configurations.

Thanks again, man.
Yeah, it seems samba is adding its entropy. I found that samba
3.6.x (which I upgraded this particular server to along with squid)
isn't working in my configuration at all (even winbindd
crashes). So I went back to 3.5.x, et voilà: squid 3.3.x works like a charm.

Thanks.
Eugene.


Re: [squid-users] squid 3.2 and POST

2013-07-16 Thread Eugene M. Zheganin
Hi.

On 16.07.2013 10:30, Amos Jeffries wrote:

 Nasty as they are, the above are the perfectly normal working NTLM
 behaviour. If your traces are showing something else going on *by
 Squid* then you have a bug. We have indeed found a few such bugs in
 Squid NTLM and persistence handling, once you have confirmed that it
 is a bug and not just one of the above working NTLM problems please
 repeat the test using the latest 3.3 release and if possible the
 latest 3.HEAD daily bundle to see if it is one we found and fixed
 already. http://wiki.squid-cache.org/SquidFaq/BugReporting has more on
 the process of reporting.

 NP: There was one bug fixed in 3.3.1 related to HTTP/1.1 keep-alive
 which was showing up in some NTLM clients and is possibly seen in 3.2
 still. The Squid-3.3 patch can be found here
 http://www.squid-cache.org/Versions/v3/3.3/changesets/squid-3-10728.patch
 although the preferred action is of course to upgrade to latest 3.3.
 Squid is on a release-often cycle now so the 3.2-3.3 changes are
 quite small compared to previous version differences (much safer for
 production servers to do than ever before).


 PS. I expect to have some time in the next few weeks and will be
 looking into similar issues with another client if you need a
 developer to take a closer look at fixing it and can pay for
 development support time. Contact me privately about support contracts
 please. Of course with any luck it will be that bug 2936 or another
 unidentified issue fixed in 3.3.


Thanks a lot for a detailed explanation, Amos ! And yeah, I tried the
new 3.3.x branch and it's working for me there.

Thanks again.
Eugene.



Re: [squid-users] Advice: ntlm_auth from samba4 or negotiate_wrapper ?

2013-07-16 Thread Eugene M. Zheganin
Hi.

On 15.07.2013 23:02, Michele Bergonzoni wrote:

 I did a few tests with ntlm_auth from samba4, and it seems to work,
 with some residual problems with firefox and PCs not joined in the
 domain, and an extra authentication popup at the beginning from IE.

 I didn't get to the point of having a working negotiate_wrapper /
 squid_kerb_auth config, being still confusing about hostnames,
 principals, redundancy, failover, ntlm fallback with winbindd.

Actually, you should implement all the schemes - NTLM/SPNEGO/Basic for
some obvious reasons:

- in a corporate environment there will definitely be machines that
switch from Negotiate to NTLM, so you have to handle both
- you could leave only NTLM (and Basic), but that is becoming more and
more outdated
- there will be tons of software that can perform only Basic
authentication, like various IMs and third-party software
- there will be some software that claims it's capable of NTLM but in
fact only does Basic
- so far I'm using PAM to handle Basic auth and to route it back into
winbind
- squid has a bunch of great helpers that work with AD, and the coolest
and most modern one is the external Kerberos group helper, which
supports nested groups (thanks, Markus !)
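
The scheme stack described above maps onto squid's auth_param ordering.
A minimal sketch, not taken from the poster's config - helper paths,
flags and the principal are assumptions to adjust for your build:

```
# Clients pick the first offered scheme they support, so list
# Negotiate (SPNEGO/Kerberos) first, then NTLM, then Basic as fallback.
auth_param negotiate program /usr/local/libexec/squid/squid_kerb_auth -s HTTP/proxy.example.com@EXAMPLE.COM
auth_param negotiate children 20

auth_param ntlm program /usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 20

# Basic catches proxy-unaware clients (IMs, third-party software);
# here routed through PAM as the poster describes.
auth_param basic program /usr/local/libexec/squid/pam_auth
auth_param basic children 5
auth_param basic realm proxy
```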

I don't have digest auth in my environment, and for the past 13 years I
haven't seen why I should.

Eugene.


[squid-users] squid 3.2 and POST

2013-07-15 Thread Eugene M. Zheganin
Hi.

I use caches in a corporate environment and their main purpose is
authorization and accounting, so I use various AD authorization schemes.
Recently I switched most of my proxies to squid 3.2.x and ran into a
problem. The problem appears on various upload sites across the internet
(you know, like depositfiles and so on - sites that hold user
data). When a user tries to upload a file, such a site and the user's
browser exchange a series of requests and replies, for example
GET/GET/OPTIONS/POST, and squid serves each request after it issues a 407
header to the client browser, and the browser, in its turn, resends the
request with a proxy authentication token. Everything is fine when the
file is relatively small, but when the user tries to send a large file (I
don't know where the border starts; for example 700 Kbytes is okay, but
17 megabytes is not) squid, for some reason, doesn't send the 407 header
after the first POST from the browser which starts the upload of the actual
file (in short: the first large POST isn't answered by squid and
isn't served). I captured the whole sequence with tcpdump and examined
it with wireshark.

What can the problem be here ? I tried to switch off the keepalives for the
SPNEGO/NTLM schemes I'm using but this didn't help.

Thanks.
Eugene.



Re: [squid-users] Squid 3.2, multiple workers, SNMP (and a bit of IPv6)

2013-04-02 Thread Eugene M. Zheganin
Hi.

On 02.04.2013 14:30, Amos Jeffries wrote:

 No, unfortuately nobody is working on that part yet.

 As a workaround you should be able to retrieve SNMP information
 per-worker by using ${process_number} in the snmp_port directive to
 assign each worker a unique port for SNMP contact.


Is it worth reporting in the bugzilla ? (Or maybe this is a well-known,
planned-to-fix issue and the report would just add unnecessary escalation ?)
For example I can live for now with one worker, even on my most crowded
productions.
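
For reference, the ${process_number} workaround Amos describes would
look roughly like this in squid.conf - the port base and community
name are illustrative assumptions:

```
# Give each SMP worker its own SNMP port: kid 1 listens on 3401,
# kid 2 on 3402, and so on. Poll each port separately.
acl snmppublic snmp_community public
snmp_port 340${process_number}
snmp_access allow snmppublic localhost
snmp_access deny all
```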

Thanks.
Eugene.


Re: [squid-users] Squid 3.2, multiple workers, SNMP (and a bit of IPv6)

2013-04-01 Thread Eugene M. Zheganin
Hi.

On 18.01.2013 06:42, Panagiotis Christias wrote:

 In both cases, every SNMP query adds two or more stale entries (file
 descriptors, sockets or whatever) as reported by lsof:

 # lsof -i -nP | egrep 'PID|:163$'   
 COMMAND  PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
 squid   5007 squid   15u  IPv4 0xfe0006b14790  0t0  UDP 127.0.0.1:163
 squid   5008 squid   15u  IPv4 0xfe0006b14790  0t0  UDP 127.0.0.1:163
 squid   5008 squid   19u  IPv4 0xfe0006b14790  0t0  UDP 127.0.0.1:163
 squid   5008 squid   41u  IPv4 0xfe0006b14790  0t0  UDP 127.0.0.1:163
 (and the list gets longer and longer as our monitoring systems keep
 querying squid..).

 Everything (SNMP-related) works correctly when we use just one worker
 but currently this is no option since a single squid 3.2 worker seems
 to be unable to handle as many requests as squid 2.7 (in our case at
 least).

 Any feedback would be greatly appreciated.

Yeah, recently I discovered that I have exactly the same issue with 3.2.9.
Furthermore, I can say that I even have more workers than intended:

22972  ??  S  0:02.92 (squid-2) -f /usr/local/etc/squid/squid.conf
(squid)
23146  ??  S  0:02.44 (squid-1) -f /usr/local/etc/squid/squid.conf
(squid)
23237  ??  S  0:00.74 (squid-coord-3) -f
/usr/local/etc/squid/squid.conf (squid)
31139  ??  R  0:00.11 (squid-1) -f /usr/local/etc/squid/squid.conf
(squid)
31192  ??  R  0:00.01 (squid-1) -f /usr/local/etc/squid/squid.conf
(squid)
31193  ??  R  0:00.00 (squid-1) -f /usr/local/etc/squid/squid.conf
(squid)

Some of them don't die after squid -k kill; I have to kill them
explicitly with -9.
Is that normal ? Right now I decided to switch back to non-SMP.
And yeah, I just don't like this situation with thousands of open FDs,
which doesn't happen with one worker.

Is this resolved in 3.3.x ?

Thanks.
Eugene.


[squid-users] squid/SMP

2013-03-21 Thread Eugene M. Zheganin

Hi.

I'm using squid 3.2.6 on FreeBSD and today I tried to use its SMP 
feature. I added 'workers 2' to its configuration file, checked the 
permissions on localstatedir and ran it.

I got on start

FATAL: kid2 registration timed out

and then it looks like the coordinator started trying to restart the kids, 
but unsuccessfully. Obviously, no client requests were served at that time.
I checked the localstatedir and saw 3 sockets, one from the coordinator and 
two from the kids - so I'm sure the permissions are ok.


What can I do to debug this feature ?
I understand 3.2.x is no longer supported and I need to use 3.3.x, but 
right now I'm stuck with FreeBSD ports on my production, and there's no 
3.3.x in it yet; I will try to build a 3.3.x release on a test machine.


Thanks.
Eugene.


Re: [squid-users] squid/SMP

2013-03-21 Thread Eugene M. Zheganin

Hi.

On 21.03.2013 17:01, Adam W. Dace wrote:

I had this exact problem on a different platform, Mac OS X.

You probably want to use sysctl to increase the OS-default limits on
Unix Domain Sockets.
They're mentioned at the bottom of the squid Wiki page here:
http://wiki.squid-cache.org/Features/SmpScale

Please mail the list if you don't mind once you try that, I then ran
into a different problem but most likely FreeBSD isn't affected.


Thanks a lot, this helped. It seems to be working after that; at least I 
have had no complaints yet.
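
For FreeBSD the limits in question are the Unix domain socket sysctls;
a sketch of /etc/sysctl.conf entries under the assumption that the
defaults are too small for squid's SMP IPC - the exact values are
illustrative, the SmpScale wiki page has current recommendations:

```
# Enlarge Unix domain datagram socket buffers so squid's SMP kid
# registration messages are not dropped, avoiding the
# "kidN registration timed out" fatal error.
net.local.dgram.recvspace=262144
net.local.dgram.maxdgram=16384
```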


Eugene.


[squid-users] squid and unauthorized clients rate-blocking

2013-03-13 Thread Eugene M. Zheganin

Hi.

I use squid mostly for internet access authorization in a corporate 
network. I have a problem. Let's suppose some foobar company has 
developed a proxy-unaware update mechanism using HTTP to update their 
software. Or some internet company wrote a javascript that executes 
outside the proxy context in a browser. Such things can produce a massive 
amount of GET requests, which squid answers with HTTP/407. Massive like 
thousands per second from just one machine. At the same time, when 
explicitly blocked with HTTP/403 answers, this madness stops. So, is 
there a mechanism that I could use to, say, send a 403 to a client after it 
exceeds some rate ? Or to rate-block some acls ? Or something similar ? 
Because right now I just block these machines using a packet filter, 
because this entire thing just eats my CPUs.


Thanks.
Eugene.


Re: [squid-users] Squid with external auth problems

2013-03-13 Thread Eugene M. Zheganin

Hi.

On 13.03.2013 23:46, husam.shabeeb wrote:

  I will give u the best solution for your case I hope it will help you
Don't use squid for authentication use Mikrotik pope or hotspot , also you
can use radius server + Mikrotik and let the squid handle the cache and web
filtering .

Zomg, hilarious. I have been using squid in a large corporate network for 13 
years now, with Active Directory and lots of various stuff.
Don't use necrotik where you're supposed to use squid, because necrotik 
is Arm/Mips-based, it's not powerful by design, it's not scalable, 
it's soho. Use squid. It's scalable, customizable, very well supported, 
and it's a clean and reliable solution. Even large vendors like Cisco 
and IronPort use squid in their equipment.


I'll tell you even more: don't use necrotik at all.

Eugene.


Re: [squid-users] Squid with external auth problems

2013-03-13 Thread Eugene M. Zheganin

Hi.

On 14.03.2013 0:41, husam.shabeeb wrote:

Dear ,
First its Mikrotik not nikrotik , second
You should read the idea.  it's not soho
You can check this one from here
http://routerboard.com/CCR1036-12G-4S
also if you use radius , or cisco access server for auth theyare solution's
you have to read and pick the best for you that depend on how much client
you trying to serve !!

I'm really glad for nekrotik, but, with all my respect, if, for example, 
my company made a zillion-core router and claimed it carrier-grade on 
its web page, it wouldn't magically become one. The same thing applies to 
this Latvian vendor.
So far I've never heard of their equipment being used at the carrier-grade 
level. Maybe I will (I hope I won't live that long, or at least I will 
be lucky enough not to be such an ISP's customer), but still, using their 
stuff isn't a common practice.
Anyway, radius and other AAA stuff has nothing to do with it when we talk 
about browsers and AD integration. Squid supports any of the authorization 
methods provided by AD; though MS IAS is one of them (simply because 
it's integrated and it shares the same user directory), using radius to 
authenticate web browsers is weird.


Eugene.


[squid-users] squid and linefeeds

2012-09-04 Thread Eugene M. Zheganin

Hi.

Squid 3.1.12/FreeBSD 9.1-PRERELEASE/amd64

I have a weird problem downloading text files (not sure whether it's all of 
them or not) from an FTP server through squid.
When I download them through squid they have DOS line feeds in 'em 
(symbols commonly represented in terminals like ^M; I think everyone has 
seen such things in text files from non-Unices). When I download the 
same file without squid it's fine.


Binary files aren't affected.

Why is that ?



# wget -O - 
ftp://ftp.FreeBSD.org/pub/FreeBSD/ports/distfiles/portmaster-3.13.13.tar.gz.asc 
| sha256
--2012-09-04 14:13:48-- 
ftp://ftp.freebsd.org/pub/FreeBSD/ports/distfiles/portmaster-3.13.13.tar.gz.asc
Resolving proxy.norma.perm.ru (proxy.norma.perm.ru)... fd00::301, 
192.168.3.1
Connecting to proxy.norma.perm.ru 
(proxy.norma.perm.ru)|fd00::301|:3128... connected.

Proxy request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
Saving to: `STDOUT'

[ = ] 499 --.-K/s   in 0s

2012-09-04 14:13:48 (19.4 MB/s) - written to stdout [499]

739d3272669cf4cfbff94cef53822830477e922e87b57ea563c86543d11bcb3c

# wget --proxy=off -O - 
ftp://ftp.FreeBSD.org/pub/FreeBSD/ports/distfiles/portmaster-3.13.13.tar.gz.asc 
| sha256
--2012-09-04 14:14:04-- 
ftp://ftp.freebsd.org/pub/FreeBSD/ports/distfiles/portmaster-3.13.13.tar.gz.asc

   = `-'
Resolving ftp.freebsd.org (ftp.freebsd.org)... 2001:4f8:0:2::e, 
2001:6c8:130:800::4, 204.152.184.73, ...
Connecting to ftp.freebsd.org (ftp.freebsd.org)|2001:4f8:0:2::e|:21... 
connected.

Logging in as anonymous ... Logged in!
== SYST ... done.== PWD ... done.
== TYPE I ... done.  == CWD (1) /pub/FreeBSD/ports/distfiles ... done.
== SIZE portmaster-3.13.13.tar.gz.asc ... 488
== EPSV ... done.== RETR portmaster-3.13.13.tar.gz.asc ... done.
Length: 488 (unauthoritative)

100%[==] 
488 --.-K/s   in 0.01s


2012-09-04 14:14:07 (39.6 KB/s) - written to stdout [488]

27143e3e2ff2f03e745b1c26ad1791eace1d88ee3ca5f61df1f45730ed5dfb23
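
The proxied copy above is 499 bytes against 488 direct - consistent with
one extra byte per line, which is what an LF-to-CRLF rewrite (an
ASCII-mode, TYPE A, FTP transfer) produces on a Unix text file. A minimal
sketch of why the checksums then differ (the sample bytes are made up):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest, like the sha256(1) tool used above."""
    return hashlib.sha256(data).hexdigest()

# A Unix text file with LF line endings (illustrative content).
original = b"-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG\n\n...\n"

# An ASCII-mode FTP transfer rewrites every LF as CRLF:
translated = original.replace(b"\n", b"\r\n")

# The file grows by exactly one byte per newline, and the digest changes.
assert len(translated) - len(original) == original.count(b"\n")
assert sha256_hex(translated) != sha256_hex(original)
```

Binary files are unaffected because they are (or should be) fetched with
TYPE I, which copies bytes verbatim.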


# squid -v
Squid Cache: Version 3.1.12
configure options:  '--with-default-user=squid' 
'--bindir=/usr/local/sbin' '--sbindir=/usr/local/sbin' 
'--datadir=/usr/local/etc/squid' '--libexecdir=/usr/local/libexec/squid' 
'--localstatedir=/var/squid' '--sysconfdir=/usr/local/etc/squid' 
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid/squid.pid' 
'--enable-removal-policies=lru heap' '--disable-linux-netfilter' 
'--disable-linux-tproxy' '--disable-epoll' '--disable-translation' 
'--enable-auth=basic digest negotiate ntlm' 
'--enable-basic-auth-helpers=DB NCSA PAM MSNT SMB squid_radius_auth LDAP 
SASL YP' '--enable-digest-auth-helpers=password ldap' 
'--enable-external-acl-helpers=ip_user session unix_group wbinfo_group 
ldap_group' '--enable-ntlm-auth-helpers=smb_lm' 
'--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-storeio=ufs 
diskd aufs' '--enable-disk-io=AIO Blocking DiskDaemon DiskThreads' 
'--enable-delay-pools' '--disable-wccp' '--enable-wccpv2' 
'--enable-arp-acl' '--enable-pf-transparent' '--disable-ecap' 
'--disable-loadable-modules' '--enable-icap-client' '--enable-kqueue' 
'--with-large-files' '--disable-optimizations' '--prefix=/usr/local' 
'--mandir=/usr/local/man' '--infodir=/usr/local/info/' 
'--build=amd64-portbld-freebsd9.0' 
'build_alias=amd64-portbld-freebsd9.0' 'CC=cc' 'CFLAGS=-pipe 
-I/usr/local/include -I/usr/local/include  -g -DLDAP_DEPRECATED' 
'LDFLAGS= -L/usr/local/lib -L/usr/local/lib' 
'CPPFLAGS=-I/usr/local/include' 'CXX=c++' 'CXXFLAGS=-pipe 
-I/usr/local/include -I/usr/local/include -g -DLDAP_DEPRECATED' 
'CPP=cpp' --with-squid=/usr/ports/www/squid31/work/squid-3.1.12


# grep -i ftp /usr/local/etc/squid/squid.conf
ftp_user anonymous
ftp_passive on
acl FTP proto FTP


Thanks, guys.
Eugene.


[squid-users] auth acl, combining and matching

2012-08-14 Thread Eugene M. Zheganin

Hi.

Since I always receive comprehensive answers here, I decided to ask about 
one more long-standing problem.


I use squids in a corporate environment along with traffic quotas and 
custom deny_info pages. Yeah, flat-rate internet came long ago to Russia 
too, but my supervisors think that limiting traffic is still an 
effective way of fighting slackers.


So, the goal is to show a 'you're not authorized' page to unauthorized 
users (bad username/password pair, or no username, or intercepted 
traffic), a 'this is denied' page on some restricted URLs, and mostly - 
a 'you're out of traffic' page to users with no traffic left. Here I hit 
one thing that is keeping me from doing that. Imagine I have a config like 
this:


acl unauthorized proxy_auth -
acl no-traffic-left external self-written-script
acl allowed-users external some-LDAP-checking
acl some-other-users external some-LDAP-checking

http_access deny unauthorized
http_access deny no-traffic-left
http_access allow allowed-users
http_access deny all

deny_info NOTRAFFIC no-traffic-left
deny_info UNAUTHORIZED unauthorized
deny_info NOACCESS all

So, to the actual point. I will simply describe how it works from my 
experience. Imagine user 'foobar' is trying to get access. He 
matches both the no-traffic-left and the allowed-users ACLs. Furthermore, 
allowed-users is a group of users. In the configuration above, when squid 
receives the 'foobar' username on the 'http_access deny 
no-traffic-left' line, it won't block the foobar user, but instead it 
will reprompt for the credentials. So, in order to actually block users 
like foobar, I need to say something about src, like this:

http_access deny unauthorized all

This way squid will immediately block such users. But here the problem 
comes: the last matching ACL will be 'all', so I'm unable to tell users with 
no traffic why exactly they are blocked. I tried

http_access deny all unauthorized

but it works the same way as the line without 'all' - it keeps 
reprompting for the passwords. It looks like 'hey, do you know some 
other password, so I can grant you access ?'. Is there any 
possibility of... in terms of packet filters, telling squid 'block it 
immediately' ? The way 'quick' works in pf, or, if you prefer, the same 
way the 'L' flag works in apache's mod_rewrite ? I mean, I need a 
mechanism for saying that this rule should actually be the last one if it 
matches. And the other question - I have a feeling that this happens 
only if a username matches more than one proxy_auth ACL. For example 
it doesn't happen to the user '-', or any other fake user (I was using 
a fake username for a long time to represent the entity without 
credentials).
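
One workaround sometimes suggested for this last-ACL problem is to
define a per-reason always-matching terminator ACL, so the ACL that ends
each deny line still carries the right deny_info page. A sketch, not
tested here, with illustrative acl names:

```
# Always-matching aliases, one per block reason:
acl is-unauthorized src 0.0.0.0/0
acl is-no-traffic src 0.0.0.0/0

http_access deny unauthorized is-unauthorized
http_access deny no-traffic-left is-no-traffic
http_access allow allowed-users
http_access deny all

# deny_info keys on the last ACL of the denying http_access line:
deny_info UNAUTHORIZED is-unauthorized
deny_info NOTRAFFIC is-no-traffic
deny_info NOACCESS all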


Thanks.
Eugene.


[squid-users] squid and authentication

2012-08-09 Thread Eugene M. Zheganin

Hi.

I have been using squid for more than 10 years now.
I wrote a couple of articles about it.

But there are still some basic things about it that I don't understand.
Or, I don't know, some things about proxy authentication.
I know I will look silly, but I still decided to ask.
I decided to ask here not because I'm sure it's a squid issue (I guess 
it's not), but because I think you guys have answered a lot of stupid 
questions about why authentication doesn't work.


So. Imagine I have set up some authentication schemes. Basic, NTLM, 
doesn't matter.
Imagine I have Mozilla on some UNIX operating system. I launch it, I see 
that it's NTLM since it doesn't show the realm (and Basic of course 
does), then I enter my credentials (I guess that's okay for unix, as 
Mozilla on a Windows domain machine doesn't ask for them, so it must be some 
issue in NTLM/mozilla/samba or whatever), and then it's okay until some 
point. But sooner or later Firefox (and Mozilla previously) will re-ask 
for my credentials. This happens a lot on UNIX OSes, and mostly with 
Mozilla. It happens with Chrome too, but not that often.


What is it ? How long do the credentials stay in squid's cache ? I know 
about 'credentialsttl' for the Basic scheme, but there's no such option for 
NTLM. I've read RFC 2617 and I dumped the HTTP sessions of client 
browsers with my proxy, but I didn't find the answer to the question of why 
the authentication popup reappears - the RFC says nothing about 
re-asking or keeping an explicit cache. One more question - why can't the 
browser simply and silently resend the authentication ? All the 
browsers I've seen show the authentication popup again, so I think this 
is some common approach and not a browser developer conspiracy.


Thanks.
Eugene.


Re: [squid-users] squid_ldap_group (Group into Group)

2012-08-09 Thread Eugene M. Zheganin

Hi.

On 10.08.2012 01:10, Rickifer Barros wrote:

Hi squid users,

I have a question about the squid_ldap_group helper that I couldn't find
answered on the internet. I'm testing it and I noticed that it doesn't
recognize groups inside a group; it only reads users inside a group.

The command I'm using is like this: external_acl_type AD_GROUP %LOGIN
/usr/lib/squid3/squid_ldap_group -R -P -b dc=domain,dc=yyy -D
cn=user,dc=domain,dc=yyy -w password -f
(&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%a,ou=example,dc=domain,dc=yyy))
-h yyy.yyy.yyy.yyy

Is there a way for squid_ldap_group to read groups inside another group?


Afaik, the only way to let squid know about nested groups is to use 
squid_kerb_ldap instead of squid_ldap_group.


Eugene.


Re: [squid-users] Custom error page for an acl

2012-08-08 Thread Eugene M. Zheganin

Hi.

On 08.08.2012 13:35, a bv wrote:

I would like to write an acl on squid to block users' access
to the internal domain and LAN from squid. For this I guess acl dst
will help me, but I would also like to have a custom error page for this
acl. How can I easily do that ?


I really think it would be better to block the entire access from the 
outer world to your squid on your firewall. :)


Eugene.


Re: [squid-users] Re: Re: Re: Re: Re: Re: Re: squid_ldap_group against nested groups/Ous

2012-08-06 Thread Eugene M. Zheganin

Hi.

On 03.08.2012 04:02, Markus Moeller wrote:

Hi Eugene,

 What do you suggest squid_kerb_ldap should do to make it simpler for 
you ?



From my point of view (though I understand that it's only my opinion, 
and different technical difficulties may arise), it would be best if 
squid_kerb_ldap worked in just the same way that squid_ldap_group 
does - accepting both the login and the group name on its standard 
input. This way the config file would be much shorter and the number of 
helpers required would be minimal.


Thanks.
Eugene.


Re: [squid-users] Re: Re: Re: Re: Re: Re: Re: Re: squid_ldap_group against nested groups/Ous

2012-08-06 Thread Eugene M. Zheganin

Hi.

On 06.08.2012 16:48, Markus Moeller wrote:

Hi Eugene,

  How would a squid_group_ldap  line look like ?  From where would the 
group name come from ?  I could try to add this feature.




That would be awesome.

squid_ldap_group expects to see the username, along with the group 
name to check membership of, on its stdin. It works the same way 
your helper does, just with a group name. At the same time, in the config 
file you can describe the group directly, or supply a filename which 
contains the group name (I prefer a filename, for example).


My squid.conf prior to using squid_kerb_ldap helper used to look like:

===Cut===
external_acl_type ldap_group ttl=60 negative_ttl=60 children=40 %LOGIN \
    /usr/local/libexec/squid/squid_ldap_group \
    -b cn=Users,dc=norma,dc=com \
    -f (&(cn=%g)(member=%u)(objectClass=group)) \
    -F sAMAccountname=%s \
    -D cn=dca,cn=Users,dc=norma,dc=com \
    -W /usr/local/etc/squid/ad.passwd -h hq-gc.norma.com -v 3 -p 389



acl ad-internet-users  external ldap_group /usr/local/etc/squid/ad-internet-users.acl
acl ad-privileged      external ldap_group /usr/local/etc/squid/ad-privileged-users.acl
acl ad-icq-only        external ldap_group /usr/local/etc/squid/ad-icq-only.acl
acl ad-no-icq          external ldap_group /usr/local/etc/squid/ad-no-icq.acl
acl kontur-clients     external ldap_group /usr/local/etc/squid/kontur-clients.acl
acl ad-no-pictures     external ldap_group /usr/local/etc/squid/ad-no-pictures.acl
acl ad-personnel-only  external ldap_group /usr/local/etc/squid/ad-personnel-only.acl
acl ad-mdm             external ldap_group /usr/local/etc/squid/ad-internet-users-mdm.acl
acl ad-sber            external ldap_group /usr/local/etc/squid/ad-internet-users-sber.acl
acl ad-e5              external ldap_group /usr/local/etc/squid/ad-e5.acl
acl ad-raiffeisen      external ldap_group /usr/local/etc/squid/ad-raiffeisen.acl

===Cut===

Thanks.
Eugene.



Re: [squid-users] Re: Re: Re: Re: Re: Re: squid_ldap_group against nested groups/Ous

2012-08-02 Thread Eugene M. Zheganin

Hi.

On 01.08.2012 23:02, Markus Moeller wrote:

Hi Eugene,

  Are all 12 groups for the same control ?  If  so you can  use -g 
Group1:Group2:Group3:.


No, I map them to different acls, and then those acls are used to 
restrict various levels of access.


Like:

(it was)
external_acl_type ldap_group [...]

acl ad-internet-users external ldap_group /usr/local/etc/squid/ad-internet-users.acl
acl ad-privileged external ldap_group /usr/local/etc/squid/ad-privileged-users.acl
acl ad-icq-only external ldap_group /usr/local/etc/squid/ad-icq-only.acl
acl ad-no-icq external ldap_group /usr/local/etc/squid/ad-no-icq.acl

http_access allow ad-internet-users something
http_access deny ad-internet-users something1
http_access allow ad-privileged something1

and so on.

Eugene.


Re: [squid-users] Re: Re: Re: Re: Re: squid_ldap_group against nested groups/Ous

2012-08-01 Thread Eugene M. Zheganin

Hi.

One more question - is there any way to parametrize the group name, so 
that it doesn't have to be passed in the command-line arguments of the helper ?
Right now I have 12 groups from AD with different settings, and I have 
to run 12 classes of external acls. With squid_ldap_group I used to run 
only one class, and it was much handier.


Thanks.
Eugene.


Re: [squid-users] Re: Re: Re: Re: squid_ldap_group against nested groups/Ous

2012-07-30 Thread Eugene M. Zheganin

Hi, guys.

Hi, Markus. :)

I'm this weird guy who asks about squid_kerb_ldap every 2 years and 
then falls back into his lethargic sleep. :)
But it's not because I lose interest; it's because of the time, and 
because of the old decent authorization schemes on my squids that still 
work fine even with Windows 7.


But, last time I once again decided to setup the nested groups and 
GSS-SPNEGO.

negotiate_wrapper works just fine, thanks again.

So, to refresh your memory, last time :) I got this problem: inability 
to bind to LDAP server.

I have an AD domain and a bunch of controllers.

Some of my thoughts are described below, but first the output.
The debug output looks like this (a fresh one, and sorry for the 
pseudographics, but it's real output):

===Cut===
[emz@wizard:/usr/local/etc/squid]# ./squid_kerb_group.sh
2012/07/31 01:27:12| squid_kerb_ldap: Starting version 1.2.2
2012/07/31 01:27:12| squid_kerb_ldap: Group list Internet Users - Proxy1@
2012/07/31 01:27:12| squid_kerb_ldap: Group Internet Users - Proxy1 Domain
2012/07/31 01:27:12| squid_kerb_ldap: Netbios list soft...@norma.com
2012/07/31 01:27:12| squid_kerb_ldap: Netbios name SOFTLAB  Domain NORMA.COM
2012/07/31 01:27:12| squid_kerb_ldap: ldap server list NULL
2012/07/31 01:27:12| squid_kerb_ldap: No ldap servers defined.
emz
2012/07/31 01:27:52| squid_kerb_ldap: Got User: emz set default domain: 
NORMA.COM

2012/07/31 01:27:52| squid_kerb_ldap: Got User: emz Domain: NORMA.COM
2012/07/31 01:27:52| squid_kerb_ldap: User domain loop: group@domain 
Internet Users - Proxy1@
2012/07/31 01:27:52| squid_kerb_ldap: Default domain loop: group@domain 
Internet Users - Proxy1@
2012/07/31 01:27:52| squid_kerb_ldap: Found group@domain Internet Users 
- Proxy1@

2012/07/31 01:27:52| squid_kerb_ldap: Setup Kerberos credential cache
2012/07/31 01:27:52| squid_kerb_ldap: Get default keytab file name
2012/07/31 01:27:52| squid_kerb_ldap: Got default keytab file name 
/usr/local/etc/squid/HTTP.keytab
2012/07/31 01:27:52| squid_kerb_ldap: Get principal name from keytab 
/usr/local/etc/squid/HTTP.keytab

2012/07/31 01:27:52| squid_kerb_ldap: Keytab entry has realm name: NORMA.COM
2012/07/31 01:27:52| squid_kerb_ldap: Found principal name: 
HTTP/proxy-wizard.norma.c...@norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Set credential cache to 
MEMORY:squid_ldap_19356
2012/07/31 01:27:52| squid_kerb_ldap: Got principal name 
HTTP/proxy-wizard.norma.c...@norma.com

2012/07/31 01:27:52| squid_kerb_ldap: Stored credentials
2012/07/31 01:27:52| squid_kerb_ldap: Initialise ldap connection
2012/07/31 01:27:52| squid_kerb_ldap: Canonicalise ldap server name for 
domain NORMA.COM
2012/07/31 01:27:52| squid_kerb_ldap: Resolved SRV _ldap._tcp.NORMA.COM 
record to spb-dc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved SRV _ldap._tcp.NORMA.COM 
record to spb-gc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved SRV _ldap._tcp.NORMA.COM 
record to sad-srv.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved SRV _ldap._tcp.NORMA.COM 
record to hq-dc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved SRV _ldap._tcp.NORMA.COM 
record to hq-gc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved SRV _ldap._tcp.NORMA.COM 
record to nb-dc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved SRV _ldap._tcp.NORMA.COM 
record to mos-dc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved SRV _ldap._tcp.NORMA.COM 
record to sam-dc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 1 of NORMA.COM to 
hq-gc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 2 of NORMA.COM to 
fd00::322
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 3 of NORMA.COM to 
hq-gc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 4 of NORMA.COM to 
fd00::322
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 5 of NORMA.COM to 
hq-gc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 6 of NORMA.COM to 
fd00::322
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 7 of NORMA.COM to 
hq-gc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 8 of NORMA.COM to 
hq-dc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 9 of NORMA.COM to 
hq-dc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 10 of NORMA.COM 
to hq-gc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 11 of NORMA.COM 
to hq-dc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 12 of NORMA.COM 
to hq-gc.norma.com
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 13 of NORMA.COM 
to 192.168.92.189
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 14 of NORMA.COM 
to 192.168.92.189
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 15 of NORMA.COM 
to 192.168.92.189
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 16 of NORMA.COM 
to 192.168.173.3
2012/07/31 01:27:52| squid_kerb_ldap: Resolved address 17 of NORMA.COM 
to 192.168.180.26
2012/07/31 01:27:52| 

Re: [squid-users] Re: Re: Re: Re: Re: squid_ldap_group against nested groups/Ous

2012-07-30 Thread Eugene M. Zheganin

Hi.

On 31.07.2012 04:54, Markus Moeller wrote:

Hi Eugene,

 For squid_kerb_ldap to work with automatic ldap server detection you 
 need to set up your DNS correctly. All SRV records must be hostnames 
 (not IPs, as some are in your case). Then the hostname will be 
 resolved to an IP and back into a hostname to eliminate CNAMEs. For 
 the final hostnames an ldap/hostname principal must exist, e.g. 
 TEST.com, a CNAME, resolves into 192.1.1.1 which resolves to server1.com, 
 which means an ldap/server1.com principal must exist.


Thanks for the clear explanation; now I see why it doesn't work. And I was 
able to fix the binding to some particular DCs.
But I think (it's only my opinion though) that circular resolving to 
eliminate CNAMEs is a bit complicated: reverse zones aren't needed even 
for an AD domain to work properly.
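
The canonicalisation Markus describes can be sketched like this. The
resolver steps are injected as callables so the logic is shown offline;
all hostnames and IPs here are made up for illustration:

```python
def ldap_principal(srv_target, resolve, reverse):
    """Derive the ldap/<host> service principal for an SRV target.

    srv_target: hostname taken from a _ldap._tcp SRV record
    resolve:    callable hostname -> IP (forward A lookup)
    reverse:    callable IP -> canonical hostname (PTR lookup)

    The forward-then-reverse round trip eliminates CNAMEs: the
    Kerberos principal must exist for the canonical name the PTR
    record returns, not for the alias in the SRV record.
    """
    ip = resolve(srv_target)
    canonical = reverse(ip)
    return "ldap/" + canonical

# Fake lookup tables standing in for DNS:
forward = {"ldap-alias.example.com": "192.0.2.1",
           "dc1.example.com": "192.0.2.1"}
ptr = {"192.0.2.1": "dc1.example.com"}

# An alias and the real DC name both end up at the same principal.
principal = ldap_principal("ldap-alias.example.com",
                           forward.__getitem__, ptr.__getitem__)
```

This also shows why missing or wrong reverse zones break the binding:
if the PTR lookup fails or returns an unexpected name, the helper looks
for a principal that was never created.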


Thanks for your help and for your helper.
Eugene.


Re: [squid-users] 2 questions about proxy hierarchy

2012-06-08 Thread Eugene M. Zheganin

On 07.06.2012 17:19, Amos Jeffries wrote:

On 7/06/2012 10:48 p.m., Eugene M. Zheganin wrote:

Hi.

Happy world ipv6 launch day to everybody. :P

squid/3.1.12, FreeBSD 8.x|9.x

 a) why, when using an ipv6 address like fd00::316 or, no matter, 
 [fd00::316], does squid detect that the configuration file is correct, 
 but then I immediately get TCP connection to fd00::316/3128 
 failed, while, at the same time, I CAN telnet to the fd00::316 port 
 3128 ?
3128 ?


More to the point *what* is failing to connect to your Squid and from 
where?

Assume I get one parent squid A, and one child squid, B.
B's IP (v6) is in the parent's config file, like

acl children src fd00::d02/128

B's config has the line like

cache_peer fd00::316 3128 3130 default

or

cache_peer [fd00::316] 3128 3130 default

and some lines like

always_direct deny some-domain-based-acl
always_direct allow all
never_direct allow some-domain-based-acl
never_direct deny all

In this situation I'm starting to get these lines on B:

TCP connection to fd00::316/3128 failed

In the same time I can connect from B's console (telnet fd00::316 3128).




b) more long story. I think I'm just not getting something. From the 
2.6 version I always had to use no-query option for parents. That is 
because the cold 

ZOMG. Did I write this ? I mean 'children'.
squid is always detecting the parent proxy as dead, but the weird 
thing is, it's still capable of communicating to them. I mean I see 
udp/3130 packets in tcpdump going both ways, and UDP_HIT/000 
UDP_MISS/000 ICP queries from children on parent proxies.

Why is that ?


bugs?
That's too easy an explanation. I think the situation where I'm the first 
in like 10 years to discover and report this bug is quite impossible.


Eugene.


[squid-users] 2 questions about proxy hierarchy

2012-06-07 Thread Eugene M. Zheganin

Hi.

Happy world ipv6 launch day to everybody. :P

squid/3.1.12, FreeBSD 8.x|9.x

a) why, when using an ipv6 address like fd00::316 or, no matter, 
[fd00::316], does squid detect that the configuration file is correct, but 
then I immediately get TCP connection to fd00::316/3128 failed, while, 
at the same time, I CAN telnet to the fd00::316 port 3128 ?


b) more long story. I think I'm just not getting something. From the 2.6 
version I always had to use no-query option for parents. That is because 
the cold squid is always detecting the parent proxy as dead, but the 
weird thing is, it's still capable of communicating to them. I mean I 
see udp/3130 packets in tcpdump going both ways, and UDP_HIT/000 
UDP_MISS/000 ICP queries from children on parent proxies.

Why is that ?

Thanks.
Eugene.


[squid-users] squid 3.1 and HTTPS (and probably ipv6)

2012-03-13 Thread Eugene M. Zheganin

Hi.

I'm using squid 3.1.x on FreeBSD. Squid is built from ports.

Recently I was hit by a weird issue: my users cannot open HTTPS pages. 
It's not constant - if they hit the F5 button in the browser, the 
pages load, sometimes after showing a message like 'Unable to 
connect. Firefox can't establish a connection to the server at 
access.ripe.net.' (for example; most of them are using FF). At the same 
time plain HTTP pages work fine.


I did some investigation, and it appears that squid really thinks it 
cannot connect to the HTTPS-enabled web server:


===Cut===
2012/03/13 14:08:39.661| ACL::ChecklistMatches: result for 'all' is 1
2012/03/13 14:08:39.661| ACLList::matches: result is true
2012/03/13 14:08:39.661| aclmatchAclList: 0x285e4810 returning true (AND 
list satisfied)
2012/03/13 14:08:39.661| ACLChecklist::markFinished: 0x285e4810 
checklist processing finished
2012/03/13 14:08:39.661| ACLChecklist::check: 0x285e4810 match found, 
calling back with 1

2012/03/13 14:08:39.661| ACLChecklist::checkCallback: 0x285e4810 answer=1
2012/03/13 14:08:39.661| peerCheckAlwaysDirectDone: 1
2012/03/13 14:08:39.661| peerSelectFoo: 'CONNECT access.ripe.net'
2012/03/13 14:08:39.661| peerSelectFoo: direct = DIRECT_YES
2012/03/13 14:08:39.661| The AsyncCall SomeCommConnectHandler 
constructed, this=0x286e6740 [call1916]
2012/03/13 14:08:39.661| commConnectStart: FD 14, cb 0x286e6740*1, 
access.ripe.net:443
2012/03/13 14:08:39.661| The AsyncCall SomeCloseHandler constructed, 
this=0x2956c2c0 [call1917]

2012/03/13 14:08:39.661| ipcache_nbgethostbyname: Name 'access.ripe.net'.
2012/03/13 14:08:39.661| ipcache_nbgethostbyname: HIT for 'access.ripe.net'
2012/03/13 14:08:39.662| ipcacheMarkBadAddr: access.ripe.net 
[2001:67c:2e8:22::c100:685]:443
2012/03/13 14:08:39.662| ipcacheCycleAddr: access.ripe.net now at 
193.0.6.133 (2 of 2)

2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14-16 : family=28
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14-16 : family=28
2012/03/13 14:08:39.662| FilledChecklist.cc(168) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0x285e4810

2012/03/13 14:08:39.662| ACLChecklist::~ACLChecklist: destroyed 0x285e4810
2012/03/13 14:08:39.662| FilledChecklist.cc(168) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0x285e4910

2012/03/13 14:08:39.662| ACLChecklist::~ACLChecklist: destroyed 0x285e4910
2012/03/13 14:08:39.662| The AsyncCall SomeCommReadHandler constructed, 
this=0x28ce9100 [call1918]
2012/03/13 14:08:39.662| leaving SomeCommReadHandler(FD 150, 
data=0x286b6710, size=4, buf=0x28d1e000)

2012/03/13 14:08:39.662| ipcache_nbgethostbyname: Name 'access.ripe.net'.
2012/03/13 14:08:39.662| ipcache_nbgethostbyname: HIT for 'access.ripe.net'
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14-16 : family=28
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14-16 : family=28
2012/03/13 14:08:39.662| ipcacheCycleAddr: access.ripe.net now at 
193.0.6.133 (2 of 2)

2012/03/13 14:08:39.662| ipcache_nbgethostbyname: Name 'access.ripe.net'.
2012/03/13 14:08:39.662| ipcache_nbgethostbyname: HIT for 'access.ripe.net'
2012/03/13 14:08:39.662| ipcacheMarkAllGood: Changing ALL 
access.ripe.net addrs to OK (1/2 bad)

2012/03/13 14:08:39.662| commConnectCallback: FD 14
2012/03/13 14:08:39.662| comm.cc(1195) commSetTimeout: FD 14 timeout -1
2012/03/13 14:08:39.662| comm.cc(1206) commSetTimeout: FD 14 timeout -1
2012/03/13 14:08:39.662| comm.cc(934) will call 
SomeCommConnectHandler(FD 14, errno=22, flag=-8, data=0x28f6bdd0, ) 
[call1916]

2012/03/13 14:08:39.662| commConnectFree: FD 14
2012/03/13 14:08:39.662| entering SomeCommConnectHandler(FD 14, 
errno=22, flag=-8, data=0x28f6bdd0, )
2012/03/13 14:08:39.662| AsyncCall.cc(32) make: make call 
SomeCommConnectHandler [call1916]

2012/03/13 14:08:39.662| errorSend: FD 12, err=0x28f995d0
2012/03/13 14:08:39.662| errorpage.cc(1051) BuildContent: No existing 
error page language negotiated for ERR_CONNECT_FAIL. Using default error 
file.

===Cut==

But why ? I did some telnetting from this server to 
access.ripe.net:443, and it succeeded 10 out of 10 times (squid's error 
rate is far higher). The only thing that bothers me is that telnet also 
tries IPv6 first, but then falls back to IPv4 and connects.


Now a suggestion (probably a shot in the dark). This only happens on 
IPv6-enabled machines without actual IPv6 connectivity (no IPv6 
default route, or no public IPv6 address; for example, I have unique-local 
addresses for testing purposes). At the same time the issue can be 
easily solved by restoring IPv6 connectivity to the outer world. So, 
can it be some dual-stack behaviour bug ? Or is it 'by design' ? Do I 
need to report it ?
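
If it is dual-stack fallback behaviour, one hedged mitigation is to make 
squid prefer IPv4 while the host has no usable IPv6 route. Note these are 
assumptions: dns_v4_first appeared in later releases (3.2+), not in 3.1, 
and the address below is a placeholder:

```
# Squid 3.2+ only: try A records before AAAA
dns_v4_first on

# On 3.1, pinning an IPv4 outgoing address has a similar effect
tcp_outgoing_address 192.0.2.10
```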


Thanks.
Eugene.


[squid-users] squid_ldap_group false negatives

2011-12-07 Thread Eugene M. Zheganin

Hi.

I'm using the squid_ldap_group external ACL to control AD users access 
to the Internet.
Recently I got a problem: on some machines squid_ldap_group gives false 
negative result.


Consider that user emz is a member of 'Internet Users - Crystal' (and of 
course he was never removed from it).


It looks like:

===Cut===
2011/12/06 13:49:30.255| ACLChecklist::preCheck: 0x802797a18 checking 
'http_access allow ad-internet-users'

2011/12/06 13:49:30.255| ACLList::matches: checking ad-internet-users
2011/12/06 13:49:30.255| ACL::checklistMatches: checking 'ad-internet-users'
2011/12/06 13:49:30.255| aclMatchExternal: ldap_group(emz 
Internet%20Users%20-%20Crystal) = lookup needed
2011/12/06 13:49:30.255| aclMatchExternal: emz 
Internet%20Users%20-%20Crystal: entry=@0, age=0
2011/12/06 13:49:30.255| aclMatchExternal: emz 
Internet%20Users%20-%20Crystal: queueing a call.
2011/12/06 13:49:30.255| aclMatchExternal: emz 
Internet%20Users%20-%20Crystal: return -1.
2011/12/06 13:49:30.255| ACL::ChecklistMatches: result for 
'ad-internet-users' is -1

2011/12/06 13:49:30.255| ACLList::matches: result is false
2011/12/06 13:49:30.255| aclmatchAclList: 0x802797a18 returning false 
(AND list entry failed to match)

===Cut===

This happens maybe once in 30-50 requests, so it's not that serious; but 
it's still a problem.


However, when running squid_ldap_group from a shell script, separately from 
squid, I cannot reproduce this bug. Can it be because squid caches the 
results from helpers ?
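
For testing outside squid, an external ACL helper is just a line-oriented 
filter: squid writes one request per line (here the %LOGIN and the 
URL-escaped group) and the helper answers OK or ERR. A sketch with a 
stand-in helper (a shell loop instead of the real squid_ldap_group 
binary) shows the protocol:

```shell
# Stand-in helper: answers OK to every request line, mimicking the
# external_acl_type helper protocol (a real helper answers OK or ERR
# after doing the LDAP lookup).
fake_helper() {
  while read -r line; do
    echo OK
  done
}

# One request in the same form squid sends: "<user> <url-escaped group>"
printf 'emz Internet%%20Users%%20-%%20Crystal\n' | fake_helper
```

Replacing fake_helper with the real helper command line reproduces what 
squid does, minus its result caching.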


I can also tell that this is happening only on squids  3.1.12, because 
I have a couple of machines with 3.1.12 and 3.1.11 and I don't have this 
issue with them.


Is there any way to further localize this issue before filing a bug 
report ?


Thanks.

Eugene.


Re: [squid-users] squid_ldap_group false negatives

2011-12-07 Thread Eugene M. Zheganin

Hi.

On 07.12.2011 17:44, Amos Jeffries wrote:
Minor bug, the bracketed () message is wrong about the state. It is 
actually still waiting for the lookup to complete.


What you should find is that some unknown time later (helper response 
delay, maybe up to 50-100 milliseconds?) you get another mention of 
testing checklist 0x802797a18. That will have a second allow/skip 
action response to this test followed by any continuing steps the ACL 
lookups may have done.



Okay. You were of course right; I did find that the ACL finally matched.
So maybe this post isn't about the subject at all. I remember that in 
2.6 there was an explicit message about the reason for allowing or 
denying a request; it sounded like 'The request of Foo/Bar was 
allowed|denied because it matched the ACL name'. It looks like in 
3.x there's no such explicit message, is there ? Maybe there's a similar 
message; can you please point me to it, so I can debug further ?
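
While waiting for an answer, raising the verbosity of the ACL code in 
squid.conf makes cache.log show each access-control decision. The section 
numbers below are assumptions from memory (28 is the ACL code, 33 the 
client side), so check them against your release's debug sections list:

```
# Keep everything else at level 1, raise ACL and client-side detail
debug_options ALL,1 28,5 33,2
```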


Thanks.

Eugene.


[squid-users] Select loop Error. Retry 1

2011-05-14 Thread Eugene M. Zheganin

Hi.

I'm using squid 3.1.12 on FreeBSD 8.x.

What does the message in the subject mean ?
I receive this message when I add authentication helpers and/or external 
acl helpers.
After starting, squid works fine. But on the first reconfigure it writes 
this message and then freezes. Internet access for the clients is cut 
off, but the program itself still works - it can be reconfigured or 
stopped.


Here's an example of a piece of my config from one of my production proxies.
If I enable any of the commented out pieces - I get this issue.

===Cut===
#auth_param negotiate program /usr/local/libexec/squid/squid_kerb_auth
#auth_param negotiate children 15
#auth_param negotiate keep_alive on

auth_param ntlm program /usr/local/bin/ntlm_auth -d 0 
--helper-protocol=squid-2.5-ntlmssp

auth_param ntlm children 35

auth_param basic program /usr/local/libexec/squid/pam_auth
auth_param basic children 35
auth_param basic realm Squid
auth_param basic credentialsttl 1 minute
auth_param basic casesensitive off

authenticate_ttl 1 minute
authenticate_cache_garbage_interval 1 minute

acl bsd_machines src /usr/local/etc/squid/bsd_machines.acl
acl win_machines src /usr/local/etc/squid/windows_machines.acl


external_acl_type ldap_group ttl=60 negative_ttl=60 children=40 %LOGIN \
        /usr/local/libexec/squid/squid_ldap_group \
        -b cn=Users,dc=norma,dc=com \
        -f (&(cn=%g)(member=%u)(objectClass=group)) \
        -F sAMAccountname=%s \
        -D cn=dca,cn=Users,dc=norma,dc=com \
        -W /usr/local/etc/squid/ad.passwd -h hq-gc.norma.com -v 3 -p 389


#external_acl_type squid_kerb_ldap ttl=3600 negative_ttl=60 children=40 %LOGIN \
#       /usr/local/libexec/squid/squid_kerb_ldap \
#       -b cn=Users,dc=norma,dc=com \
#       -g Internet Users - Proxy1@ -N SOFTLAB@NORMA.COM

===Cut===

Thanks.
Eugene.


[squid-users] squid_kerb_auth and famous 'BH received ,type 1 NTLM token`

2011-05-13 Thread Eugene M. Zheganin

Hi.

I wanted to ask whether there is any progress or a solution/workaround 
for this problem ?


Once per 3-4 months I try to deploy a negotiate authentication 
scheme; the majority of clients work just fine, but some of the clients 
(and each time these are some important ones) start sending NTLM 
tokens instead of negotiate ones. About a year ago Markus said he was 
working on a squid_nego_auth helper, but, as far as I understand, 
there were some serious problems.


Can I offer some help ? My skills in C are low, and my knowledge of 
NTLM/Kerberos is even lower, so I can only provide testing/debugging 
help, but I can do so in the harsh environment of hundreds of clients. :P


Eugene.


Re: [squid-users] 'squid -k reconfigure' and connectivity breaking

2011-04-22 Thread Eugene M. Zheganin

Hi.

On 20.04.2011 13:48, Helmut Hullen wrote:

# stat swap.state
95 4012314 -rw-r- 1 squid squid 16326000 10203960 Apr 19
14:02:21 2011 Apr 20 10:53:45 2011 Apr 20 10:53:45 2011 Apr 19
14:02:21 2011 16384 19968 0 swap.state

What about deleting the old cache and restarting with

 squid -z
Yeah. Thanks. Actually, I should have thought about this myself; it was 
really stupid not to recreate the cache after an upgrade between major 
versions.

It helped against LFS v1.

But reconfigure still takes 4 seconds between the first 'Closing HTTP 
connection' and the final 'Ready to serve requests'.

I really hope this soft reconfiguration will be implemented in the near future.

Thanks.
Eugene.


Re: [squid-users] 'squid -k reconfigure' and connectivity breaking

2011-04-19 Thread Eugene M. Zheganin

Hi.

On 18.04.2011 13:47, Amos Jeffries wrote:

They behave identically in this regard. I suspect something is causing
3.1 to resume service much slower than 2.7 did. Which particular 3.1
release is doing this?

That's a 3.1.12 right now.


Anyway, is there a way to do a 'soft reconfiguration' ? Without closing
HTTP/ICP/SNMP connections (or at least not breaking client


Sadly not yet. We are working towards it for future releases.

Good news.


If that is true, then I suspect you are using one of the early 3.1
releases with broken LFS support. Or something is breaking/corrupting
the swap.state journal during a reconfigure.
 Does your cache.log contain a warning about version 1 LFS detected
or mention a DIRTY load during reconfigure? (may need ALL,1 debug level).

Actually it says this all the time:

Version 1 of swap file with LFS support detected...

# grep "Version 1 of swap file with LFS support detected" cache.log | wc -l
 100

# stat swap.state
95 4012314 -rw-r- 1 squid squid 16326000 10203960 Apr 19 14:02:21 
2011 Apr 20 10:53:45 2011 Apr 20 10:53:45 2011 Apr 19 14:02:21 
2011 16384 19968 0 swap.state


cache_dir:

cache_dir ufs /usr/local/squid/cache 1100 16 256

And the solution is... ?

Thanks.
Eugene.


[squid-users] 'squid -k reconfigure' and connectivity breaking

2011-04-18 Thread Eugene M. Zheganin

Hi.

Around 6 months ago I switched from 2.7 to 3.1 for its IPv6.
I may be wrong, but after that I noticed that 'squid -k reconfigure' (I 
use my own custom quota manager, whose web interface issues a reconfigure 
request when quotas are changed) now breaks existing connections and 
reopens the listening sockets (and it says so in cache.log). During 
this socket reopening a packet can be received from a browser, and if 
there is no listening socket on the server, the client receives an RST 
from the operating system's network stack and its browser then shows 
'The browser is configured with a proxy which is refusing connections'. 
And this is sad: users start to think that this is a crash and start 
ticketing my support staff.


Is this a 3.x-only behaviour or was 2.7 behaving identically ?
Anyway, is there a way to do a 'soft reconfiguration' ? Without closing 
HTTP/ICP/SNMP connections (or at least not breaking client 
connectivity), like 'apachectl graceful' does ? (I know that apache 
project actually has no relation to squid, but I like the idea and the 
implementation of this 'graceful' restart).
At this time it looks like '-k reconfigure' is quite similar to a 
fast '-k kill' and restart.


Thanks.
Eugene.


Re: [squid-users] pam_auth pam_end()

2011-04-09 Thread Eugene M. Zheganin

Hi.

On 15.03.2011 16:54, Amos Jeffries wrote:


Start with the -d option.
 Then add/update debug() lines to any place that looks useful. I'm 
interested in making the debug helpful so patches for that are welcome 
upstream.
 debug() operates identical to printf() but sends the result to the 
helper channel for Squid cache.log.


FWIW, I think adding pam_strerror() results into both of the WARNING: 
messages with that text should be enough to point at the actual problem.


Well... I did all of that (and it didn't help). By the way, debug seems 
to be a macro, rather than a squid channel logging function (could that 
even be possible ? the main part of squid 3.x is written in C++ and the 
helper part in C). Anyway, maybe it's time to describe my problem, rather 
than to describe the solution as I see it. :)


Okay, the problem description: as I said, I have a proxy. It is the 
company's main proxy, and the wpad for a network of more than 2K 
machines points at it. So, during weekdays I have loads of requests 
from all sorts of clients; most of them remain blocked, but all of the 
basic authentication requests are handled by pam_auth. I have 35 
simultaneously running pam_auth processes. During load peaks I usually 
have 3-5 (sometimes even more) pam_auth processes that together eat 100% 
of both CPUs. I used to think that those were processes that squid had 
failed to release. But when I kill some of them to release the 
CPUs from unnecessary load, squid complains in its log like this:


WARNING: basicauthenticator #8 (FD 93) exited

It's obvious that I was wrong: this isn't a helper that squid cannot 
release, but an actually running helper. So the questions are:


- why are only a small portion of the basic helpers affected by such load ?
- why does such load even exist ? When I kill the affected processes, 
squid continues to run without affecting its clients for some time. Then 
the load appears again.

- and, of course, what can be done to solve this.

I had a look at the code of the helper; it seems very 
straightforward and simple, so I don't see how such simple code can 
eat CPU.


The basic helper config is:

auth_param basic program /usr/local/libexec/squid/pam_auth
auth_param basic children 35
auth_param basic realm Squid[Kamtelecom]
auth_param basic credentialsttl 1 minute
auth_param basic casesensitive off

and the pam config for the squid service name is:

auth    sufficient      pam_unix.so                     no_warn
auth    sufficient      /usr/local/lib/pam_winbind.so   try_first_pass
auth    sufficient      pam_krb5.so                     no_warn try_first_pass


auth    required        pam_deny.so                     no_warn


(yup, I use the AD authentication scheme).


Thanks.
Eugene.


Re: [squid-users] pam_auth pam_end()

2011-04-09 Thread Eugene M. Zheganin

On 09.04.2011 19:50, Amos Jeffries wrote:

- why such load even exists ? when I kill affected processes squid
continues to run without influencing its clients for some time. Then the
load appears again.


That is unclear. It could be anything from that being the actual 
request load, to a config design problem causing unnecessary calls to 
the auth helpers, to a problem in PAM dong a lot of extra work for 
nothing.
Well, you said earlier that under heavy load the first few helpers 
receive the majority of the work. Let's assume I have 5 helpers that eat 
CPU, as really happens sometimes. The next moment I kill them (I do this 
rather often). Given the assumption that the CPU load is caused by 
actual needs, such as repeated authentication, not some 'sticking' in 
the PAM framework or helper code, and given the low probability of that 
load ending at the exact moment I kill the helpers, it should continue, 
and the next bunch of helpers should receive the load and start to eat 
CPU. In reality that doesn't happen; the CPU becomes idle.




The basic helper config is:

auth_param basic program /usr/local/libexec/squid/pam_auth
auth_param basic children 35
auth_param basic realm Squid[Kamtelecom]
auth_param basic credentialsttl 1 minute


60 seconds between checks with the PAM helper will raise load. On 
small networks with few clients this is not a problem, but larger ones 
it could be.



auth_param basic casesensitive off

and the pam config for the squid service name is:

auth sufficient pam_unix.so no_warn
auth sufficient /usr/local/lib/pam_winbind.so try_first_pass
auth sufficient pam_krb5.so no_warn try_first_pass

auth required pam_deny.so no_warn



I don't believe pam_winbind or pam_krb5 will work with this config 
using Basic auth. They are for NTLM and Negotiate auth respectively.
So the pam_unix.so part should work. But I don't have 2K AD users on 
any of these FreeBSD boxes; I have around 30 local users. Actually I'm 
not that sure about pam_winbind.so, but pam_krb5.so can definitely 
process plaintext passwords, just as kinit does. I suppose pam_winbind.so 
is also able to handle plaintext passwords, simply because wbinfo can.


Thanks.
Eugene.


[squid-users] pam_auth pam_end()

2011-03-15 Thread Eugene M. Zheganin

 Hi.

I'm running various versions of squid on my FreeBSD boxes (8.x, 
i386/amd64).
I'm also using pam_auth to authenticate users against local (pam_unix) 
and kerberos security databases.


Regardless of the arch and version, I have a couple of boxes that 
periodically fail to release pam_auth.
For example, I had this situation on 2.7 and I'm currently having it on 
a 3.1 box. Version 2.7 complains about it in its log, 
saying 'WARNING: failed to release PAM authenticator'. 3.1 no longer 
does this, but the problem persists. How can I debug/solve this problem ? 
The only possibility I see is adding a pam_strerror() call after 
pam_end() to see what is really happening, but maybe I'm reinventing the 
wheel and the solution is already known.


Thanks.
Eugene.


Re: [squid-users] Re: Re: Re: squid_ldap_group against nested groups/Ous

2010-11-13 Thread Eugene M. Zheganin

 Hi.

On 05.11.2010 21:01, Markus Moeller wrote:

Hi

 I get the same successful results on 64 bit FreeBSD 8.0.

$ uname -a
FreeBSD freebsd-80-64.freebsd.home 8.0-RELEASE FreeBSD 8.0-RELEASE #0: 
Sat Nov 21 15:02:08 UTC 2009 
r...@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64


$ ldd squid_kerb_ldap
squid_kerb_ldap:
   libgssapi.so.10 => /usr/lib/libgssapi.so.10 (0x800652000)
   libheimntlm.so.10 => /usr/lib/libheimntlm.so.10 (0x80075b000)
   libkrb5.so.10 => /usr/lib/libkrb5.so.10 (0x80086)
   libhx509.so.10 => /usr/lib/libhx509.so.10 (0x8009cd000)
   libcom_err.so.5 => /usr/lib/libcom_err.so.5 (0x800b0c000)
   libcrypto.so.6 => /lib/libcrypto.so.6 (0x800c0e000)
   libasn1.so.10 => /usr/lib/libasn1.so.10 (0x800ea6000)
   libroken.so.10 => /usr/lib/libroken.so.10 (0x801025000)
   libcrypt.so.5 => /lib/libcrypt.so.5 (0x801136000)
   libldap-2.4.so.7 => /usr/local/lib/libldap-2.4.so.7 (0x80124f000)
   liblber-2.4.so.7 => /usr/local/lib/liblber-2.4.so.7 (0x80139)
   libc.so.7 => /lib/libc.so.7 (0x80149d000)
   libsasl2.so.2 => /usr/local/lib/libsasl2.so.2 (0x8016d7000)
   libssl.so.6 => /usr/lib/libssl.so.6 (0x8017ef000)

Is it possible that you have another kerberos package installed ? How 
does your ldd look ? I installed a standard FreeBSD 8.0 64-bit plus 
ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/8.0-RELEASE/packages/net/openldap-sasl-client-2.4.18.tbz 
for ldap with sasl support.


First of all, sorry for the delayed answer; I'm not the kind of 
person who asks for help and never reads the answers. I had a couple of 
harsh weeks with crashes and late working hours. :)


Yes, I have multiple krb5 installations on the machines where the build 
didn't succeed due to incompatible types; you were right. Also, I have 
updated the production proxy that was on FreeBSD 7.2 to 8.1 (and had a 
harsh week due to the wonderful em(4) issue, fixed in -STABLE), but now 
the build on this machine is fine, except for one warning that can be 
easily fixed by removing -Werror (once again, why -Werror ?).


If you're interested the warning is about:

[...]
gcc -DHAVE_CONFIG_H -I.-I/usr/include  -I/usr/local/include  -g -O2 
-Wall -Wno-unknown-pragmas -Wextra -Wcomment -Wpointer-arith 
-Wcast-align -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes 
-Wmissing-declarations -Wdeclaration-after-statement -Wshadow -MT 
support_group.o -MD -MP -MF .deps/support_group.Tpo -c -o 
support_group.o support_group.c

support_group.c: In function 'utf8dup':
support_group.c:43: warning: declaration of 'dup' shadows a global 
declaration

/usr/include/unistd.h:330: warning: shadowed declaration is here
[...]

So, the build succeeds and the helper doesn't crash on startup, but now 
I have problems connecting to the LDAP servers.
I saw in your reply that you are running the KDC on SuSE Linux. I'm 
using a KDC on Windows 2003/2008, and it works just perfectly with 
squid_ldap_group (but I really miss nested groups :)).


Debug looks like:

===Cut===
# ./squid_kerb_group.sh
2010/11/13 14:26:21| squid_kerb_ldap: Starting version 1.2.1a
2010/11/13 14:26:21| squid_kerb_ldap: Group list 
Internet%20Users%20-%20Proxy1@
2010/11/13 14:26:21| squid_kerb_ldap: Group 
Internet%20Users%20-%20Proxy1  Domain

2010/11/13 14:26:21| squid_kerb_ldap: Netbios list soft...@norma.com
2010/11/13 14:26:21| squid_kerb_ldap: Netbios name SOFTLAB  Domain NORMA.COM
e...@norma.com
2010/11/13 14:26:25| squid_kerb_ldap: Got User: emz Domain: NORMA.COM
2010/11/13 14:26:25| squid_kerb_ldap: User domain loop: gr...@domain 
Internet%20Users%20-%20Proxy1@
2010/11/13 14:26:25| squid_kerb_ldap: Default domain loop: gr...@domain 
Internet%20Users%20-%20Proxy1@
2010/11/13 14:26:25| squid_kerb_ldap: Found gr...@domain 
Internet%20Users%20-%20Proxy1@

2010/11/13 14:26:25| squid_kerb_ldap: Setup Kerberos credential cache
2010/11/13 14:26:25| squid_kerb_ldap: Get default keytab file name
2010/11/13 14:26:25| squid_kerb_ldap: Got default keytab file name 
/usr/local/etc/squid/HTTP.keytab
2010/11/13 14:26:25| squid_kerb_ldap: Get principal name from keytab 
/usr/local/etc/squid/HTTP.keytab

2010/11/13 14:26:25| squid_kerb_ldap: Keytab entry has realm name: NORMA.COM
2010/11/13 14:26:25| squid_kerb_ldap: Found principal name: 
HTTP/proxy-wizard.norma.c...@norma.com
2010/11/13 14:26:25| squid_kerb_ldap: Set credential cache to 
MEMORY:squid_ldap_17129
2010/11/13 14:26:25| squid_kerb_ldap: Got principal name 
HTTP/proxy-wizard.norma.c...@norma.com

2010/11/13 14:26:26| squid_kerb_ldap: Stored credentials
2010/11/13 14:26:26| squid_kerb_ldap: Initialise ldap connection
2010/11/13 14:26:26| squid_kerb_ldap: Canonicalise ldap server name for 
domain NORMA.COM
2010/11/13 14:26:26| squid_kerb_ldap: Resolved SRV _ldap._tcp.NORMA.COM 
record to spb-dc.norma.com
2010/11/13 14:26:26| squid_kerb_ldap: Resolved SRV _ldap._tcp.NORMA.COM 
record to sad-srv.norma.com
2010/11/13 14:26:26| squid_kerb_ldap: Resolved SRV _ldap._tcp.NORMA.COM 
record to 

Re: [squid-users] Re: Re: squid_ldap_group against nested groups/Ous

2010-10-31 Thread Eugene M. Zheganin

 Hi.

On 30.10.2010 00:14, Markus Moeller wrote:

Hi,

 I now have a 64-bit FreeBSD box and cannot replicate the error. Also, 
the compile errors I got were only a symbol problem ('dup' in 
support_group) and the sasl prototype error.


Yeah, I agree, on fresh 8.1 installation it does compile (with -Werror 
commented out).

On non-fresh 8.0/7.x it doesn't.

8.0 has heimdal 1.1.0 and 7.x has 0.6.3; however the symptoms are the same.

Is there something I can do to narrow the scope, or is the expected 
resolution to upgrade everywhere to 8.1 ?


Thanks.
Eugene.



Re: [squid-users] Re: squid_ldap_group against nested groups/Ous

2010-10-25 Thread Eugene M. Zheganin

 Hi.

On 07.12.2008 18:09, Markus Moeller wrote:
I did implement recursive group search in squid_kerb_ldap at 
http://sourceforge.net/project/showfiles.php?group_id=196348.




Actually this is a very interesting helper, and I would like to use it 
on my production squids, because my engineers are tired of managing 
hundreds of users instead of a dozen groups.


I downloaded it, but I had a bunch of problems with it.

If this isn't the appropriate mailing list to discuss this helper, then 
just stop at this point, and I'm sorry for this post.



My target system is FreeBSD 8.0-RELEASE-p2/amd64. It has Heimdal 1.0.1 
Kerberos V in the base system.


a) First of all,  1.2.1a fails to build:

===Cut===
cc1: warnings being treated as errors
support_krb5.c: In function 'krb5_create_cache':
support_krb5.c:117: warning: format '%s' expects type 'char *', but 
argument 5 has type 'krb5_data'

support_krb5.c:122: error: incompatible type for argument 2 of 'strcasecmp'
support_krb5.c:251: error: incompatible type for argument 1 of 'strlen'
support_krb5.c:252: error: incompatible type for argument 1 of 'strlen'
support_krb5.c:252: warning: format '%s' expects type 'char *', but 
argument 5 has type 'krb5_data'
support_krb5.c:252: warning: format '%s' expects type 'char *', but 
argument 5 has type 'krb5_data'

*** Error code 1

Stop in /usr/home/emz/squid_kerb_ldap/1/squid_kerb_ldap-1.2.1a.
*** Error code 1

Stop in /usr/home/emz/squid_kerb_ldap/1/squid_kerb_ldap-1.2.1a.
*** Error code 1

Stop in /usr/home/emz/squid_kerb_ldap/1/squid_kerb_ldap-1.2.1a.
===Cut===

This can be fixed: all of these errors are caused by the fact that 
entry.principal->realm is a structure while the code expects it to be 
char *, so it's pretty obvious that a char * has to go here, and 
krb5_data.data is the only member that appears to be char *; so I changed 
entry.principal->realm to entry.principal->realm.data. I had one more 
problem, with the -Werror switch:


===Cut===
cc1: warnings being treated as errors
In file included from support_sasl.c:30:
/usr/local/include/sasl/sasl.h:349: warning: function declaration isn't 
a prototype

===Cut===

Since my C skills are considerably low, I simply removed the -Werror 
switch and the build succeeded.


b) then it fails to run, crashing at keytab parsing. So maybe things 
aren't that obvious and I failed to do the fixing properly:


===Cut===
%./squid_kerb_ldap -b cn=Users,dc=norma,dc=com -g "Internal Users - 
Crystal@" -u dca -p sabbracadabra -N soft...@norma.com -d -i

2010/10/26 10:50:05| squid_kerb_ldap: Starting version 1.2.1a
2010/10/26 10:50:05| squid_kerb_ldap: Group list Internal Users - Crystal@
2010/10/26 10:50:05| squid_kerb_ldap: Group Internal Users - Crystal  
Domain

2010/10/26 10:50:05| squid_kerb_ldap: Netbios list soft...@norma.com
2010/10/26 10:50:05| squid_kerb_ldap: Netbios name SOFTLAB  Domain NORMA.COM
e...@norma.com
2010/10/26 10:50:10| squid_kerb_ldap: Got User: emz Domain: NORMA.COM
2010/10/26 10:50:10| squid_kerb_ldap: User domain loop: gr...@domain 
Internal Users - Crystal@
2010/10/26 10:50:10| squid_kerb_ldap: Default domain loop: gr...@domain 
Internal Users - Crystal@
2010/10/26 10:50:10| squid_kerb_ldap: Found gr...@domain Internal Users 
- Crystal@

2010/10/26 10:50:10| squid_kerb_ldap: Setup Kerberos credential cache
2010/10/26 10:50:10| squid_kerb_ldap: Get default keytab file name
2010/10/26 10:50:10| squid_kerb_ldap: Got default keytab file name 
/usr/local/etc/squid/squid.keytab
2010/10/26 10:50:10| squid_kerb_ldap: Get principal name from keytab 
/usr/local/etc/squid/squid.keytab

Bus error (core dumped)
===Cut===

Stacktrace:

===Cut===
# gdb squid_kerb_ldap squid_kerb_ldap.core
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain 
conditions.

Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as amd64-marcel-freebsd...
Core was generated by `squid_kerb_ldap'.
Program terminated with signal 10, Bus error.
Reading symbols from /usr/lib/libgssapi.so.10...done.
Loaded symbols for /usr/lib/libgssapi.so.10
Reading symbols from /usr/lib/libheimntlm.so.10...done.
Loaded symbols for /usr/lib/libheimntlm.so.10
Reading symbols from /usr/lib/libkrb5.so.10...done.
Loaded symbols for /usr/lib/libkrb5.so.10
Reading symbols from /usr/lib/libhx509.so.10...done.
Loaded symbols for /usr/lib/libhx509.so.10
Reading symbols from /usr/lib/libcom_err.so.5...done.
Loaded symbols for /usr/lib/libcom_err.so.5
Reading symbols from /lib/libcrypto.so.6...done.
Loaded symbols for /lib/libcrypto.so.6
Reading symbols from /usr/lib/libasn1.so.10...done.
Loaded symbols for /usr/lib/libasn1.so.10
Reading symbols from /usr/lib/libroken.so.10...done.
Loaded symbols for /usr/lib/libroken.so.10
Reading symbols from /lib/libcrypt.so.5...done.
Loaded symbols 

Re: [squid-users] Active/Backup Squid cluster

2010-06-22 Thread Eugene M. Zheganin

Hi.

On 22.06.2010 15:01, Henrik Nordström wrote:

So your CARP setup forgot to monitor the status of Squid when
determining which node should be master.
   
So, in the general case, life is more complicated than the one box, one 
service model.
Actually I have 2 boxes with N services, and their state can pretty 
easily be N running on A and M running on B, out of P total services.

Eugene.


[squid-users] swap.state eating the entire slice

2010-06-21 Thread Eugene M. Zheganin

Hi.

I have been using squid caches for a long time now; I have production 
caches running 2.7.x, 3.0.x and 3.1.x.
About a year or a year and a half ago I started to encounter a problem 
where squid eats the entire slice with its swap.state file.
I still cannot localize this problem; the only thing I know is that it 
is somehow connected to squid restarts. For example, on servers with long 
uptime this doesn't happen at all. But I have a bunch of branch servers 
that are shut down overnight. I have this problem 2-3 times per month. 
The cache_dir size on those servers is around 1-2 gigs (I use squid 
mostly because of its powerful authorization capabilities), but 
swap.state under some bad conditions can eat the entire slice, no matter 
how big it is (40 gigs, 80 gigs; on one server I saw a swap.state of 
120 gigs). After it eats all the space, squid crashes.
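
What I would try as a workaround (hedged: the cache path is the one from 
the config quoted elsewhere in this thread, and squid must be fully 
stopped first) is discarding the runaway index and letting squid rebuild 
it from the cache_dir contents:

```
squid -k shutdown
rm /usr/local/squid/cache/swap.state
squid -z        # recreate any missing cache structures
squid           # on start, the index is rebuilt (a DIRTY rebuild)
```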


Does anyone see a similar issue ? Is this a configuration problem or a 
squid issue ?


Thanks.
Eugene.


Re: [squid-users] Active/Backup Squid cluster

2010-06-21 Thread Eugene M. Zheganin

Hi.

21.06.2010 18:08, Nick Cairncross wrote:

Using the config tool of the proxies, you set the priority of each 'home' VIP 
as 100 and the other site as 50. This means they act on each site, servicing 
requests etc. However, should one proxy fail I can raise the priority of the 
other so that it also hosts the VIP of the broken proxy and takes over.

All this is a long way round to saying I can flip my users to whatever proxy I
want, take one out of commission etc., and it works nicely. I'd like to use
something similar in Squid. The added complication is that I use Kerberos
authentication, which is dependent on host name. I can't quite see a way to
achieve what I want yet.

   
I run a couple of squid installations where I use the FreeBSD carp(4)
interface to back up the squid proxy in case of hardware or network failure.
However, this doesn't cover a squid service outage, which I have to handle
manually, for example by raising the priority on the backup node.
Linux has a CARP implementation too, in case this method of failover is
acceptable... :)
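Roughly, the rc.conf on the master node looks like this (addresses, vhid and
password are examples, not a real setup):

```
# /etc/rc.conf on the master node (example addresses/vhid/password)
cloned_interfaces="carp0"
ifconfig_carp0="vhid 1 pass s3cret advskew 0 192.168.0.10/24"

# On the backup node: same vhid and pass, but a higher advskew, e.g.
# ifconfig_carp0="vhid 1 pass s3cret advskew 100 192.168.0.10/24"
```

squid's http_port then listens on the shared 192.168.0.10 address; raising
advskew on the master hands the address over to the backup.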


I use NTLM/Basic authentication in these installations.

Eugene.


Re: [squid-users] Active/Backup Squid cluster

2010-06-21 Thread Eugene M. Zheganin

Hi.

On 21.06.2010 23:12, Henrik Nordström wrote:

However, this doesn't solve the service outage, which I have to handle
manually, for example raising the priority on the backup node.

For what kind of service failure do you need manual action?

In this case, a squid crash.

heartbeat is using a CARP-like mechanism for communicating which node is
the primary.

Last time I looked at heartbeat, it was using scripts to set/unset IP
aliases on a node interface. This is kinda... weird.


Eugene.


[squid-users] squid 3.1 and error_directory

2010-02-08 Thread Eugene M. Zheganin

Hi.

Recently I decided to try the 3.1 branch on my test proxy. Everything
seems to work fine, but I'm stuck on a problem with the error messages.
Whatever I do with the error_directory/error_default_language settings
(leaving them commented out, or setting them to something), in my browser I
see corrupted symbols. These are neither Latin nor Cyrillic; they look
like UTF-8 treated as Cp1251, for example. Changing the encoding of the
page in the browser doesn't help.

And the charset in the meta tag of such a page is always us-ascii (why?).

How can I make the pages be displayed at least in English? I thought this
could be achieved by setting error_default_language to en, but I was
wrong again.
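For reference, the directives I'm toggling look like this (the template
path is the usual default for my build and may differ on other installs):

```
# squid.conf -- one of the two approaches
error_default_language en
# or pin an explicit template directory:
# error_directory /usr/local/share/squid/errors/en
```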


I thought I was familiar with the squid error directory and with creating my
own templates for the 2.x/3.0 branches, but clearly I'm not with 3.1.


Thanks.


