[squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-28 Thread Makson
Amos Jeffries wrote
 2) explicit hostname serverb.domain:9443. I find it highly unlikely
 that you will be finding server A being requested for URLs at that
 hostname.

We now have the public URL for app.domain set to servera.domain.


Amos Jeffries wrote
 1) https:// on the URLs. Squid is not supposed to be sending these over
 un-encrypted peer connections. I don't recall any explicit prevention of
 that, but there might be.

A little progress finally. We have two types of clients for our app server:
one is a web browser, the other is Eclipse. For the same request, server B
will try to query server A ONLY if the request is sent by the web browser. I
looked into the log file on server A and there is no difference between the
URLs of the requests sent by these two types of clients, which is strange.

# record for request sent by web browser in server B
1406539824.298  3 172.17.210.5 TCP_MISS/200 3736 GET
https://servera.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_J-m1gK4-EeOvOJ84krOqLg/_fOPWkv3TEeOaa7Y2RPnTQg/FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4
- SIBLING_HIT/172.17.192.33 application/octet-stream

# record for request sent by eclipse in server B
1406540067.167  409 172.17.210.5 TCP_MISS/200 3670 GET
https://servera.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_J-m1gK4-EeOvOJ84krOqLg/_fOPWkv3TEeOaa7Y2RPnTQg/FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4
- FIRSTUP_PARENT/172.17.96.148 application/octet-stream




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Sibling-cache-peer-for-a-HTTPS-reverse-proxy-tp4667011p4667076.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-28 Thread Amos Jeffries
On 28/07/2014 9:37 p.m., Makson wrote:
 Amos Jeffries wrote
 2) explicit hostname serverb.domain:9443. I find it highly unlikely
 that you will be finding server A being requested for URLs at that
 hostname.
 
 We now have the public URL for app.domain set to servera.domain.
 
 
 Amos Jeffries wrote
 1) https:// on the URLs. Squid is not supposed to be sending these over
 un-encrypted peer connections. I don't recall any explicit prevention of
 that, but there might be.
 
 A little progress finally. We have two types of clients for our app server:
 one is a web browser, the other is Eclipse. For the same request, server B
 will try to query server A ONLY if the request is sent by the web browser. I
 looked into the log file on server A and there is no difference between the
 URLs of the requests sent by these two types of clients, which is strange.
 
 # record for request sent by web browser in server B
 1406539824.298  3 172.17.210.5 TCP_MISS/200 3736 GET
 https://servera.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_J-m1gK4-EeOvOJ84krOqLg/_fOPWkv3TEeOaa7Y2RPnTQg/FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4
 - SIBLING_HIT/172.17.192.33 application/octet-stream
 
 # record for request sent by eclipse in server B
 1406540067.167  409 172.17.210.5 TCP_MISS/200 3670 GET
 https://servera.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_J-m1gK4-EeOvOJ84krOqLg/_fOPWkv3TEeOaa7Y2RPnTQg/FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4
 - FIRSTUP_PARENT/172.17.96.148 application/octet-stream
 

Excellent.

Would you be able to show the HTTP request coming from each of those
clients, and the HTTP reply coming from the origin parent server?
 debug_options 11,2 will log the necessary details in the current Squid
releases. Older Squid require tcpdump -s0 to capture them all.
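
For reference, a minimal sketch of that debugging setup (the interface name and
proxy port below are assumptions, not values taken from this thread):

  # squid.conf: keep normal logging, raise section 11 (HTTP traffic) to level 2
  debug_options ALL,1 11,2

  # on older Squid, capture the raw peer traffic instead
  tcpdump -s0 -i eth0 -w peer-traffic.pcap port 3128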


Amos


[squid-users] https url filter issue

2014-07-28 Thread Sucheta Joshi



Hi,

Our client is using a Squid proxy. We need to do the following configuration in
Squid, using the SquidGuard UI:

Block the Facebook and LinkedIn main sites, but allow access to some
Facebook and LinkedIn URLs based on certain keywords. With these settings,
url_regex worked for HTTP access, but when we tested the same for
HTTPS it gave "webpage not found".

We need input on this.

Thanks & Regards,
Sucheta Joshi
Technical Lead | RippleHire
+ 91 9960618324   | sucheta.jo...@ripplehire.com




[squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-28 Thread Makson
# request sent by web browser
GET
/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_J-m1gK4-EeOvOJ84krOqLg/_fOPWkv3TEeOaa7Y2RPnTQg/FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4
HTTP/1.1^M
Host: servera.domain:9443^M
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:30.0) Gecko/20100101
Firefox/30.0^M
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8^M
Accept-Language: en-US,en;q=0.5^M
Accept-Encoding: gzip, deflate^M
Cookie: JazzFormAuth=Form;
LtpaToken2=Qs4sUFYCnMUEuCqSz3B0I+4LsGmHSjfqLfKJMlEUiuR8GTzSXXfgORRoqCCejdRxs53LZSEUsR/d7vqeALUsftR4yRZo6WfjzmCrvlc33hgVOMRoefiWzl1aPwbqLV71aLJFjzyuKkm/Niq37vC9M6Q+gjXPXbHxC3hv7SKIdloDJ0qPC6MNAKXbD+r4jAhQFp8STGXPSGy4QL2WMb6Q4536jo7Hzx8EtXsenxunMfeLCR+Y9HsXZuIjdQpNxoVyFfyQhnDQWUtaXYO5uo1iIDCWf/wPdIBwFxWcTMvYx2x4O2Y9uush0Twv3UBbTRT9mespX+RTJUDALjZpHWzZBAcHuU20V2EISX/LvPMwxN9OtcE3dI1B0xG7YRUZnCaVYUJysoTu4sNXZDsoaIxECeCxISfUwrJE1U6+h3lAZaImkX/RtE8rmOvV9PWrCiU2rHgn1qMsDYAU+vKTIf88R4VKxytRdpTVvYlJ1zCtRilxqrrUb+rpvB/5p78pDftRNK7TSaUoQrIedxK+sRIfyiYTzl+Kp1vlDT/D6yOgtmWVvPz2rTfmerP/+azlt2fbVDbKjfhzBkDu5Cr/V5ElWiE3nNS9q0kPig9sg+8EnuXgTeChEc7Kq29GODCKcYCwplwimSskpQfDPGwqsDMaSVrp4db7ySe/Vyn1YXYDCdsBxxY4EqpZeMf1oYSTsUdN;
JSESSIONID=DQE4xgyrKu2kZO9uoMLhcbH:-1^M
Connection: keep-alive

# reply
2014/07/28 18:12:26.786 kid1| http.cc(749) processReplyHeader: HTTP Server
local=172.17.192.145:45830 remote=172.17.192.33:3128 FD 16 flags=1
2014/07/28 18:12:26.786 kid1| http.cc(750) processReplyHeader: HTTP Server
REPLY:
-
HTTP/1.1 200 OK^M
X-Powered-By: Servlet/3.0^M
Content-Disposition: attachment^M
Last-Modified: Thu, 01 Jan 1970 00:00:00 GMT^M
Date: Mon, 28 Jul 2014 06:36:46 GMT^M
Accept-Ranges: bytes^M
Expires: Thu, 23 Jul 2015 06:36:46 GMT^M
Cache-Control: public, s-maxage=31104000^M
ETag: immutable^M
Content-Type: application/octet-stream^M
Content-Length: 3214^M
Content-Language: en-US^M
Age: 12941^M
X-Cache: HIT from hz-rtc2^M
Via: 1.1 hz-rtc2 (squid/3.4.5)^M
Connection: keep-alive^M

# request sent by eclipse
GET
/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_J-m1gK4-EeOvOJ84krOqLg/_fOPWkv3TEeOaa7Y2RPnTQg/FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4
HTTP/1.1^M
http.useragent:
com.ibm.team.filesystem.client.internal.content.FileContentManager^M
Accept: text/json^M
Accept: */*^M
Accept-Charset: UTF-8^M
Accept-Language: en-US^M
X-com-ibm-team-userid: scm^M
Authorization: jauth user_token=bc5dd3c38d2649aebec661960a8ce1f7^M
X-com-ibm-team-configuration-versions:
com.ibm.team.jazz.foundation=4.0.6,com.ibm.team.rtc=4.0.6^M
User-Agent: Jakarta Commons-HttpClient/3.1^M
Host: servera.domain:9443^M
Cookie: JazzFormAuth=Form^M
Cookie: WASReqURL=^M
Cookie:
LtpaToken2=Qs4sUFYCnMUEuCqSz3B0I+4LsGmHSjfqLfKJMlEUiuR8GTzSXXfgORRoqCCejdRxs53LZSEUsR/d7vqeALUsftR4yRZo6WfjzmCrvlc33hgVOMRoefiWzl1aPwbqLV71aLJFjzyuKkm/Niq37vC9M6Q+gjXPXbHxC3hv7SKIdloDJ0qPC6MNAKXbD+r4jAhQFp8STGXPSGy4QL2WMb6Q4536jo7Hzx8EtXsenxunMfeLCR+Y9HsXZuIjdQpNxoVyFfyQhnDQWUtaXYO5uo1iIDCWf/wPdIBwFxWcTMvYx2x4O2Y9uush0Twv3UBbTRT9mespX+RTJUDALjZpHWzZBAcHuU20V2EISX/LvPMwxN9OtcE3dI1B0xG7YRUZnCaVYUJysoTu4sNXZDsoaIxECeCxISfUwrJE1U6+h3lAZaImkX/RtE8rmOvV9PWrCiU2rHgn1qMsDYAU+vKTIf88R4VKxytRdpTVvYlJ1zCtRilxqrrUb+rpvB/5p78pDftRNK7TSaUoQrIedxK+sRIfyiYTzl+Kp1vlDT/D6yOgtmWVvPz2rTfmerP/+azlt2fbVDbKjfhzBkDu5Cr/V5ElWiE3nNS9q0kPig9sg+8EnuXgTeChEc7Kq29GODCKcYCwplwimSskpQfDPGwqsDMaSVrp4db7ySe/Vyn1YXYDCdsBxxY4EqpZeMf1oYSTsUdN^M
Cookie: JSESSIONID=oaEG-yS1jUba1OZcOixIntE:-1^M

# reply
2014/07/28 18:16:37.052 kid1| http.cc(749) processReplyHeader: HTTP Server
local=172.17.192.145:33970 remote=172.17.96.148:9443 FD 16 flags=1
2014/07/28 18:16:37.052 kid1| http.cc(750) processReplyHeader: HTTP Server
REPLY:
-
HTTP/1.1 200 OK^M
X-Powered-By: Servlet/3.0^M
Content-Disposition: attachment^M
Last-Modified: Thu, 01 Jan 1970 00:00:00 GMT^M
Date: Mon, 28 Jul 2014 10:16:37 GMT^M
Accept-Ranges: bytes^M
Expires: Thu, 23 Jul 2015 10:16:37 GMT^M
Cache-Control: public, s-maxage=31104000^M
ETag: immutable^M
Content-Type: application/octet-stream^M
Content-Length: 3214^M
Content-Language: en-US^M




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Sibling-cache-peer-for-a-HTTPS-reverse-proxy-tp4667011p4667079.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-28 Thread Amos Jeffries
I was looking for Vary headers from the origin server, but none are visible.

Instead I see

1) broken cacheability headers.
The Expires: header says (Date: + 360 days), and s-maxage says 360 days,
BUT ... Last-Modified says 1970. So Last-Modified + s-maxage is already
expired.
  NP: this is not breaking Squid, which still (incorrectly) uses the Expires
header in preference to s-maxage. But when we fix that bug this server
will start to MISS constantly.
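
To make the mismatch concrete, here is the arithmetic implied by the headers
quoted earlier in the thread (rounded to whole days):

  s-maxage                 = 31104000 s = 360 days
  Expires - Date           = 2015-07-23 - 2014-07-28 = 360 days  (consistent)
  Last-Modified + s-maxage = 1970-01-01 + 360 days = 1970-12-27  (long expired)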


2) Authorization: header from eclipse.
 Server-authenticated requests can receive cached content but require
revalidation to the server to confirm that the content is legit for this
user. The server is responding with a whole new response object (200)
where I would expect a 304.
 Does the matching HTTP Server REQUEST to the parent peer for the
eclipse transaction contain an If-Modified-Since and/or If-Match header?
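
For illustration, a revalidation exchange of the kind described above would look
roughly like this (path abbreviated, values taken from the replies quoted in the
thread; this is a sketch, not a capture from this setup):

  GET /ccm/service/.../FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4 HTTP/1.1
  Host: servera.domain:9443
  If-None-Match: immutable
  If-Modified-Since: Thu, 01 Jan 1970 00:00:00 GMT

  HTTP/1.1 304 Not Modified
  ETag: immutable
  Date: Mon, 28 Jul 2014 10:16:37 GMT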

Amos


AW: [squid-users] Squid 3.4 very high cpu usage

2014-07-28 Thread Rietzler, Markus (RZF, SG 324 / RIETZLER_SOFTWARE)
Did you use external auth helpers?
We still have the problem that with Squid 3.4.x, Squid will go up to 99% CPU
usage.
When we deactivate the external auth helpers, Squid stays around 20%.
We had to switch back to Squid 3.2.11, which works without problems.

Today we did a test with the latest release, 3.4.6.

Markus

-Original Message-
From: Igor Novgorodov [mailto:i...@novg.net] 
Sent: Tuesday, 15 July 2014 18:35
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3.4 very high cpu usage

delay_pools are not used at all (delay_access 1 deny all). I'll try to
remove them completely,
but as 3.3 works fine I doubt that'll help.

On 15.07.2014 19:01, FredB wrote:
 Try without delay_pool or at least without CONNECT method and delay_pool

 Fred



Re: [squid-users] https url filter issue

2014-07-28 Thread Amos Jeffries
On 28/07/2014 10:15 p.m., Sucheta Joshi wrote:
 
 
 
 Hi,
 
 Our client is using a Squid proxy. We need to do the following configuration in
 Squid, using the SquidGuard UI:
 
 Block the Facebook and LinkedIn main sites, but allow access to some
 Facebook and LinkedIn URLs based on certain keywords. With these settings,
 url_regex worked for HTTP access, but when we tested the same for
 HTTPS it gave "webpage not found".
 
 We need input on this.

Look in your Squid access.log.

Notice how the HTTPS traffic shows up as CONNECT requests containing *only*
a hostname/IP, a colon, and the port number.

Like so:
 CONNECT static-a.cdn.facebook.com:443 1.1

This static-a.cdn.facebook.com:443 part is the entire URL available to Squid
(and passed on to the squidguard URL helper). If you are going to use
regex patterns to match on the URL, that is all you have available for the
pattern to work on.

PS. you would be better off using dstdom_regex or dstdomain ACL types in
squid.conf when expecting to match CONNECT requests by URL.
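
A hedged sketch of that advice (the ACL names are made up; note that the
keyword-based allow list from the original question cannot be applied to CONNECT
requests, since only host:port is visible to Squid):

  # squid.conf sketch: act on the destination domain of CONNECT requests
  acl CONNECT method CONNECT
  acl blocked_social dstdomain .facebook.com .linkedin.com
  http_access deny CONNECT blocked_social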

Amos



AW: [squid-users] squid 3.4. uses 100% cpu with ntlm_auth

2014-07-28 Thread Rietzler, Markus (RZF, SG 324 / RIETZLER_SOFTWARE)
I want to bring this issue back up.

- We are running Squid on Linux.
- We are using Squid with winbind for user auth against a Windows DC.
- Our clients are Windows 7 and IE10.

The problem is:

When we use Squid 3.4.x, Squid will use 100% of the CPU after a few minutes. With
the old version 3.2.11 everything works perfectly; Squid uses about 25% of the CPU.
We tested it today with the latest version 3.4.6 in our production
environment, and even though it is the summer holidays, CPU usage rose to 100%.
When we disable external user auth entirely, there is no problem. So:

- with Squid 3.2.11, external user auth is working
- with Squid 3.4.6, external user auth is working - BUT Squid will use 100% CPU
- with Squid 3.4.6 and no user auth, it is working.

Thanks for any hints and help.

Markus
 
-Original Message-
From: Rietzler, Markus (RZF, SG 324 / RIETZLER_SOFTWARE) 
Sent: Tuesday, 7 January 2014 10:22
To: Amos Jeffries; squid-users@squid-cache.org
Subject: AW: [squid-users] squid 3.4. uses 100% cpu with ntlm_auth

Thanks.

Our assumption is that it is related to helper management. With 3.4 there is
a new helper protocol, right?
Our environment worked with 3.2 without problems; now, with the jump to 3.4, it
does not work anymore. So the number of requests matters somehow, but it
worked in the past...

If we go without ntlm_auth we don't see any high CPU load, so the first thought
(ACL and e.g. regex problems) can be
discarded. Maybe there are some cross-influences, but we think it lies
somewhere in the helpers/auth area.

We switched to 3.4 for these reasons:

1) We have a Squid hierarchy setup where a user-facing proxy talks to 4 parent
proxies in a load-balancing way. In the past we could switch off one of the
parents and everything still worked. With 3.2, as soon as one of the four
parents was missing, internet access got slower and slower. With 3.4 it is
working.

2) 3.3 was not an option, as it had problems with ACLs and accessing internet
sites by their IP addresses. We have a couple of ACLs where we choose the right
route (intranet/extranet/internet). In the past we could do www.google.de and
http://173.194.35.184, but with 3.3 the IP address didn't work anymore.

3) So this was the reason to jump from 3.2/3.1 to 3.4.

 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Monday, 6 January 2014 22:02
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] squid 3.4. uses 100% cpu with ntlm_auth
 
 On 2014-01-07 01:52, Rietzler, Markus (RZF, SG 324 /
 RIETZLER_SOFTWARE) wrote:
  hi,
  we have switched from squid 3.2.x to 3.4.2. in our environment we are
  using squid with the ntlm_auth helper to do NTLM user auth against
  windows DC.
  after switching to squid 3.4.1 squid uses nearly 100% cpu after a few
  minutes. with squid 3.2.x everything worked well.
 
  auth_param ntlm program /usr/bin/ntlm_auth
  --helper-protocol=squid-2.5-ntlmssp
  auth_param ntlm children 96 startup=24 idle=12
  auth_param ntlm keep_alive on
 
  auth_param basic program /usr/bin/ntlm_auth
  --helper-protocol=squid-2.5-basic
  auth_param basic children 5 startup=2 idle=1
  auth_param basic realm Internet-Zugriff [Benutzername/Kennwort aus BK]
  Nutzung des Internets nur zum Dienstgebrauch!
  auth_param basic credentialsttl 2 hours
  auth_param basic casesensitive off
 
 
  we have compiled with smp-support but at the moment using squid only
  with one worker, Kerberos support is compiled in but not used in
  squid.conf
  no negotiate configs in squid. is this enough or should we try without
  negotiate support? could this influence and cause these troubles?
 
  Squid Cache: Version 3.4.2
  configure options:  '--enable-auth-basic=MSNT,SMB'
  '--enable-auth-basic' '--enable-auth-ntlm'
  '--enable-auth-negotiate=kerberos' '--enable-delay-pools'
  '--enable-follow-x-forwarded-for' '--enable-removal-policies=lru,heap'
  '--with-filedescriptors=4096' '--with-winbind' '--with-async-io'
  '--enable-storeio=ufs,aufs,diskd,rock' '--disable-ident-lookups'
  '--prefix=/rzf/produkte/www/squid' '--enable-underscores'
  '--with-large-files'
  'PKG_CONFIG_PATH=/opt/gnome/lib64/pkgconfig:/opt/gnome/share/pkgconfig'
  --enable-ltdl-convenience
 
  /usr/bin/ntlm_auth -V
  Version 3.6.3-0.39.1-3012-SUSE-CODE11-x86_64
 
 
 
  we do not use wbinfo_group we only need the username. all users are
  allowed to surf the internet, there are some groups but they are
  retrieved external as they also are used in ufdbguard to filter some
  categories. so only ntlm_auth for username is needed and used.
 
  we only briefly tested squid 3.3, because there we had the
  problem that internet access to sites by ip-address didn't work
  or was routed the wrong way (but that is another story, not related to
  this one).
 
  so the problem is, that with squid 3.4.2 the cpu usage rises to 100%.
  after squid -k reconfigure the cpu-usage drops but then after a few
  minutes rises again to 100%.
 


Re: [squid-users] Re: kerberos authentication with load balancers

2014-07-28 Thread Giorgi Tepnadze
Hello Markus

Thank you very much, everything works now. Only two questions are left:
1) Is it necessary to run the commands specified below every 30 days?

msktutil --auto-update --verbose --computer-name proxy1-k
msktutil --auto-update --verbose --computer-name proxy2-k
msktutil --auto-update --verbose --computer-name proxy-k

As I understand it, I should run them on proxy1 and then copy the updated
keytab file to proxy2 every month (see the sketch below).
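
A hedged sketch of what that monthly refresh might look like, assuming it runs on
proxy1, that the keytab is the one created at /root/keytab/PROXY.keytab, and that
proxy1 can scp to proxy2 - none of which is confirmed in this thread:

  #!/bin/sh
  # refresh the machine-account keys listed in the question above,
  # then push the updated keytab to the second proxy
  msktutil --auto-update --verbose --computer-name proxy1-k
  msktutil --auto-update --verbose --computer-name proxy2-k
  msktutil --auto-update --verbose --computer-name proxy-k
  scp /root/keytab/PROXY.keytab proxy2.domain.com:/root/keytab/PROXY.keytab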

2) Can I use Kerberos somehow to authenticate Skype? All internet
browsers work, but Skype doesn't; it only works by specifying user/pass in
its configuration, and I think it then uses basic LDAP auth.
When there was NTLM auth it worked, but now I have removed all NTLM from
Squid; only Kerberos negotiate and basic are left.

George

On 26/07/14 15:55, Markus Moeller wrote:
 Hi Giorgi,

   It would be

 msktutil -c -b CN=COMPUTERS -s HTTP/proxy1.domain.com -h
 proxy1.domain.com -k /root/keytab/PROXY.keytab --computer-name PROXY1-K
 --upn HTTP/proxy1.domain.com--server addc03.domain.com --verbose
 --enctypes 28

 msktutil -c -b CN=COMPUTERS -s HTTP/proxy2.domain.com -h
 proxy2.domain.com -k /root/keytab/PROXY.keytab --computer-name PROXY2-K
 --upn HTTP/proxy2.domain.com --server addc03.domain.com --verbose
 --enctypes 28

 and one for DNS RR record

 msktutil -c -b CN=COMPUTERS -s HTTP/proxy.mia.gov.ge -h
 proxy1.domain.com -k /root/keytab/PROXY.keytab --computer-name PROXY-K
 --upn HTTP/proxy.mia.gov.ge --server addc03.domain.com --verbose
 --enctypes 28

 The -h value is not really used.  So for the DNS RR you can use either
 name.

 Regards
 Markus


 Giorgi Tepnadze  wrote in message news:53d219ea.1010...@mia.gov.ge...

 Hi Markus

 Excuse me for posting in old list, but I have a small question:

 So I have 2 squid servers (proxy1.domain.com and proxy2.domain.com) and
 one DNS RR record (proxy.mia.gov.ge). Regarding your recommendation, how
 should I create the keytab file?

 msktutil -c -b CN=COMPUTERS -s HTTP/proxy1.domain.com -h
 proxy1.domain.com -k /root/keytab/PROXY.keytab --computer-name PROXY1-K
 --upn HTTP/proxy1.mia.gov.ge --server addc03.domain.com --verbose
 --enctypes 28
 msktutil -c -b CN=COMPUTERS -s HTTP/proxy2.domain.com -h
 proxy2.domain.com -k /root/keytab/PROXY.keytab --computer-name PROXY2-K
 --upn HTTP/proxy2.mia.gov.ge --server addc03.domain.com --verbose
 --enctypes 28

 and one for DNS RR record

 msktutil -c -b CN=COMPUTERS -s HTTP/proxy.domain.com -h
 proxy1.domain.com -k /root/keytab/PROXY.keytab --computer-name PROXY2-K
 --upn HTTP/proxy.mia.gov.ge --server addc03.domain.com --verbose
 --enctypes 28

 But there is a problem with the last one: which server name should I put in
 -s, -h, --upn and --computer-name?

 Many Thanks

 George



 On 07/02/14 01:26, Markus Moeller wrote:
 Hi Joseph,

   it is all possible :-)

   Firstly, I suggest not using the Samba tools to create the Squid keytab,
 but msktutil instead (see
 http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos).
 Then create a keytab for the loadbalancer name (that is the one
 configured in IE or Firefox). Use this keytab on both proxy servers
 and use negotiate_kerberos_auth with -s GSS_C_NO_NAME.

  When you say multiple realms, do you have trust between the AD
 domains or are they separate? If the domains do not have trust, do
 you intend to use the same loadbalancer name for the users of both
 domains?

 Markus



 Joseph Spadavecchia  wrote in message
 news:2b43c569f8254a4e82c948ce4c247ed5158...@blx-ex01.alba.local...

 Hi there,

 What is the recommended way to configure Kerberos authentication
 behind two load balancers?

 AFAIK, based on the mailing lists, I should

 1) Create a user account KrbUser on the AD server and add an SPN
 HTTP/loadbalancer.example.com for the load balancer
 2) Join the domain with Kerberos and kinit
 3) net ads keytab add HTTP/loadbalancer.example.com@REALM -U KrbUser
 4) update squid.conf with an auth helper like negotiate_kerberos_auth
 -s HTTP/loadbalancer.example.com@REALM

 Unfortunately, when I try this it fails.

 The only way I could get it to work at all was by removing the SPN
 from the KrbUser and associating the SPN with the machine trust
 account (of the proxy behind the loadbalancer). However, this is not a
 viable solution, since there are two machines behind the load balancer
 and AD only allows you to associate an SPN with one account.

 Furthermore, given that I needed step (4) above, is it possible to
 have load balanced Kerberos authentication working with multiple
 realms?  If so, then how?

 Many thanks.






Re: [squid-users] RE: YouTube Resolution Locker

2014-07-28 Thread csn233
On Sat, Jul 26, 2014 at 7:52 PM, babajaga augustus_me...@yahoo.de wrote:
 Not correct. It is possible to cache YouTube's content using StoreID.

Is it still possible to cache after YT changed/randomized their id=...
parameter for the same videos?


[squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-28 Thread Makson
Amos Jeffries wrote
 1) broken cacheability headers.
 The Expires: header says (Date: + 360 days), and s-maxage says 360days
 BUT ... Last-Modified says 1970. So Last-Modified + s-maxage is already
 expired.
   NP: this is not breaking Squid which still (incorrectly) uses Expires
 header in preference to s-maxage. But when we fix that bug this server
 will start to MISS constantly.

So this is caused by the application? It is made by IBM. If you fix this
bug, I guess we will need to keep using the older version of Squid.


Amos Jeffries wrote
  Does the matching HTTP Server REQUEST to the parent peer for the
 eclipse transaction contain an If-Modified-Since and/or If-Match header?

Sorry, I didn't get that. Would you please explain in more detail?




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Sibling-cache-peer-for-a-HTTPS-reverse-proxy-tp4667011p4667086.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-28 Thread Amos Jeffries
On 29/07/2014 3:39 a.m., Makson wrote:
 Amos Jeffries wrote
 1) broken cacheability headers.
 The Expires: header says (Date: + 360 days), and s-maxage says 360days
 BUT ... Last-Modified says 1970. So Last-Modified + s-maxage is already
 expired.
   NP: this is not breaking Squid which still (incorrectly) uses Expires
 header in preference to s-maxage. But when we fix that bug this server
 will start to MISS constantly.
 
 So this is caused by the application? It is made by IBM. If you fix this
 bug, I guess we will need to keep using the older version of Squid.
 
 
 Amos Jeffries wrote
  Does the matching HTTP Server REQUEST to the parent peer for the
 eclipse transaction contain an If-Modified-Since and/or If-Match header?
 
 Sorry, I didn't get that. Would you please explain in more detail?

There is an HTTP request to the parent server leading to the reply whose
headers you posted. What are the request headers?

Amos


[squid-users] External ACL tags

2014-07-28 Thread Steve Hill


I'm trying to build ACLs based on the tags returned by an external ACL, 
but I can't get it to work.


These are the relevant bits of my config:

external_acl_type preauth children-max=1 concurrency=100 ttl=0 
negative_ttl=0 %SRC %{User-Agent} %URI %METHOD /usr/sbin/squid-preauth

acl preauth external preauth
acl need_http_auth tag http_auth
http_access allow !tproxy !tproxy_ssl !https preauth
http_access allow !preauth_done preauth_tproxy
http_access allow proxy_auth postauth



I can see the external ACL is being called and setting various tags:

2014/07/28 17:29:40.634 kid1| external_acl.cc(1503) Start: 
externalAclLookup: looking up for '2a00:1a90:5::14 
Wget/1.12%20(linux-gnu) http://nexusuk.org/%7Esteve/empty GET' in 'preauth'.
2014/07/28 17:29:40.634 kid1| external_acl.cc(1513) Start: 
externalAclLookup: will wait for the result of '2a00:1a90:5::14 
Wget/1.12%20(linux-gnu) http://nexusuk.org/%7Esteve/empty GET' in 
'preauth' (ch=0x7f1409a399f8).
2014/07/28 17:29:40.634 kid1| external_acl.cc(871) aclMatchExternal: 
2a00:1a90:5::14 Wget/1.12%20(linux-gnu) 
http://nexusuk.org/%7Esteve/empty GET: return -1.
2014/07/28 17:29:40.634 kid1| Acl.cc(177) matches: checked: preauth = -1 
async
2014/07/28 17:29:40.634 kid1| Acl.cc(177) matches: checked: 
http_access#7 = -1 async
2014/07/28 17:29:40.634 kid1| Acl.cc(177) matches: checked: http_access 
= -1 async
2014/07/28 17:29:40.635 kid1| external_acl.cc(1371) 
externalAclHandleReply: reply={result=ERR, notes={message: 
53d67a74$2a00:1a90:5::14$baa34e80d2d5fb2549621f36616dce9000767e93b6f86b5dc8732a8c46e676ff; 
tag: http_auth; tag: cp_auth; tag: preauth_ok; tag: preauth_done; }}
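
For reference, that parsed reply corresponds to a raw helper response roughly like
the following (the leading channel ID is a placeholder, since the helper runs with
concurrency=100):

0 ERR message=53d67a74$2a00:1a90:5::14$baa34e80d2d5fb2549621f36616dce9000767e93b6f86b5dc8732a8c46e676ff tag=http_auth tag=cp_auth tag=preauth_ok tag=preauth_done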



But then when I test one of the tags, it seems that it isn't set:

2014/07/28 17:29:40.636 kid1| Acl.cc(157) matches: checking !preauth_done
2014/07/28 17:29:40.636 kid1| Acl.cc(157) matches: checking preauth_done
2014/07/28 17:29:40.636 kid1| StringData.cc(81) match: 
aclMatchStringList: checking 'http_auth'
2014/07/28 17:29:40.636 kid1| StringData.cc(85) match: 
aclMatchStringList: 'http_auth' NOT found

2014/07/28 17:29:40.636 kid1| Acl.cc(177) matches: checked: preauth_done = 0
2014/07/28 17:29:40.636 kid1| Acl.cc(177) matches: checked: 
!preauth_done = 1



It looks to me like it's probably only looking at the first tag that the
ACL returned - is this a known bug? I couldn't spot anything in Bugzilla.


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


[squid-users] RE: YouTube Resolution Locker

2014-07-28 Thread Stakres
Hi csn233,
If you keep the same resolution, yes it'll be cached.

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/YouTube-Resolution-Locker-tp4667042p4667089.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] SSL issues

2014-07-28 Thread Ikna Nou
Hello List,
I've finally got squid3 (squid3.4-4, compiled from sources on Debian) with an
SSL interception solution working quite decently.

Now, trying to make it work better, I found some entries in the cache.log
file, like these:

2014/07/28 16:07:15 kid1| fwdNegotiateSSL: Error negotiating SSL connection on 
FD 683: error:14092105:SSL routines:SSL3_GET_SERVER_HELLO:wrong cipher returned 
(1/-1/0) 

2014/07/28 16:07:15 kid1| fwdNegotiateSSL: Error negotiating SSL connection on 
FD 160: error:14092105:SSL routines:SSL3_GET_SERVER_HELLO:wrong cipher returned 
(1/-1/0) 

2014/07/28 16:07:37 kid1| clientNegotiateSSL: Error negotiating SSL connection 
on FD 117: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca 
(1/0) 

2014/07/28 16:07:40 kid1| UPGRADE WARNING: URL rewriter reponded with garbage ' 
10.10.25.74/- - GET'. Future Squid will treat this as part of the URL. 

2014/07/28 16:07:52 kid1| clientNegotiateSSL: Error negotiating SSL connection 
on FD 922: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca 
(1/0) 

2014/07/28 16:08:55 kid1| UPGRADE WARNING: URL rewriter reponded with garbage ' 
10.10.25.75/- - GET'. Future Squid will treat this as part of the URL. 


I've been looking for solutions to this with no luck.

So, these are my questions:
1) Is it possible to check or view an FD's content in order to troubleshoot this?
2) Could you please shed some light on how to solve this?
3) How do I apply a patch to upgrade my current Squid installation?

Thank you!
Ikna


The SSL part of squid.conf:

http_port 3129
http_port 3128 intercept
https_port 3127 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=512MB cert=/etc/squid3/certs/ssl/public2.pem 
key=/etc/squid3/certs/ssl/private.pem options=NO_SSLv2,NO_SSLv3 
capath=/etc/ssl/certs

acl SSL_whitelist dstdomain /etc/squid3/acl/ssl_whitelist.acl
acl SSL_whitelist_ip dst /etc/squid3/acl/ssl_whitelist_ip.acl

ssl_bump none localhost
ssl_bump none SSL_whitelist
ssl_bump none SSL_whitelist_ip

ssl_bump server-first all
sslproxy_capath /etc/ssl/certs
sslproxy_options NO_SSLv2,NO_SSLv3
sslproxy_cert_error allow all

sslcrtd_program /usr/lib/squid3/ssl_crtd -s /usr/lib/ssl_db -M 200MB
sslcrtd_children 40



  

Re: AW: [squid-users] Squid 3.4 very high cpu usage

2014-07-28 Thread Eliezer Croitoru
I wanted to ask whether you have tried testing it with a fake auth
helper, by any chance?

The issue is either inside Squid or in the helper.
We need to narrow things down and find the source of the issue.

Markus, can you contact me off-list?

Eliezer

On 07/28/2014 02:38 PM, Rietzler, Markus (RZF, SG 324 / 
RIETZLER_SOFTWARE) wrote:

Did you use external auth helpers?
We still have the problem that with Squid 3.4.x, Squid will go up to 99% CPU
usage.
When we deactivate the external auth helpers, Squid stays around 20%.
We had to switch back to Squid 3.2.11, which works without problems.

Today we did a test with the latest release, 3.4.6.

Markus




Re: [squid-users] RE: YouTube Resolution Locker

2014-07-28 Thread Eliezer Croitoru

No, it will not.
Since the ID has changed from a static ID per video to one generated by a
hashing algorithm with added salt, there is no simple way to
use the same ID for the same video.


Eliezer

On 07/28/2014 08:21 PM, Stakres wrote:

Hi csn233,
If you keep the same resolution, yes it'll be cached.

Bye Fred





[squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-28 Thread Makson
# request & reply for web browser client

2014/07/29 10:52:32.813 kid1| client_side.cc(2407) parseHttpRequest: HTTP
Client local=172.17.192.145:9443 remote=172.17.210.5:49639 FD 12 flags=1
2014/07/29 10:52:32.813 kid1| client_side.cc(2408) parseHttpRequest: HTTP
Client REQUEST:
-
GET
/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_J-m1gK4-EeOvOJ84krOqLg/_fOPWkv3TEeOaa7Y2RPnTQg/FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4
HTTP/1.1
Host: servera.domain:9443
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:30.0) Gecko/20100101
Firefox/30.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: JazzFormAuth=Form;
LtpaToken2=Qs4sUFYCnMUEuCqSz3B0I+4LsGmHSjfqLfKJMlEUiuR8GTzSXXfgORRoqCCejdRxs53LZSEUsR/d7vqeALUsftR4yRZo6WfjzmCrvlc33hgVOMRoefiWzl1aPwbqLV71aLJFjzyuKkm/Niq37vC9M6Q+gjXPXbHxC3hv7SKIdloDJ0qPC6MNAKXbD+r4jAhQFp8STGXPSGy4QL2WMb6Q4536jo7Hzx8EtXsenxunMfeLCR+Y9HsXZuIjdQpNxoVyFfyQhnDQWUtaXYO5uo1iIDCWf/wPdIBwFxWcTMvYx2x4O2Y9uush0Twv3UBbTRT9mespX+RTJUDALjZpHWzZBAcHuU20V2EISX/LvPMwxN9OtcE3dI1B0xG7YRUZnCaVYUJysoTu4sNXZDsoaIxECeCxISfUwrJE1U6+h3lAZaImkX/RtE8rmOvV9PWrCiU2rHgn1qMsDYAU+vKTIf88R4VKxytRdpTVvYlJ1zCtRilxqrrUb+rpvB/5p78pDftRNK7TSaUoQrIedxK+sRIfyiYTzl+Kp1vlDT/D6yOgtmWVvPz2rTfmerP/+azlt2fbVDbKjfhzBkDu5Cr/V5ElWiE3nNS9q0kPig9sg+8EnuXgTeChEc7Kq29GODCKcYCwplwimSskpQfDPGwqsDMaSVrp4db7ySe/Vyn1YXYDCdsBxxY4EqpZeMf1oYSTsUdN;
JSESSIONID=DQE4xgyrKu2kZO9uoMLhcbH:-1
Connection: keep-alive


--
2014/07/29 10:52:32.815 kid1| http.cc(2219) sendRequest: HTTP Server
local=172.17.192.145:46722 remote=172.17.192.33:3128 FD 16 flags=1
2014/07/29 10:52:32.815 kid1| http.cc(2220) sendRequest: HTTP Server
REQUEST:
-
GET
https://servera.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_J-m1gK4-EeOvOJ84krOqLg/_fOPWkv3TEeOaa7Y2RPnTQg/FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4
HTTP/1.1
Host: servera.domain:9443
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:30.0) Gecko/20100101
Firefox/30.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: JazzFormAuth=Form;
LtpaToken2=Qs4sUFYCnMUEuCqSz3B0I+4LsGmHSjfqLfKJMlEUiuR8GTzSXXfgORRoqCCejdRxs53LZSEUsR/d7vqeALUsftR4yRZo6WfjzmCrvlc33hgVOMRoefiWzl1aPwbqLV71aLJFjzyuKkm/Niq37vC9M6Q+gjXPXbHxC3hv7SKIdloDJ0qPC6MNAKXbD+r4jAhQFp8STGXPSGy4QL2WMb6Q4536jo7Hzx8EtXsenxunMfeLCR+Y9HsXZuIjdQpNxoVyFfyQhnDQWUtaXYO5uo1iIDCWf/wPdIBwFxWcTMvYx2x4O2Y9uush0Twv3UBbTRT9mespX+RTJUDALjZpHWzZBAcHuU20V2EISX/LvPMwxN9OtcE3dI1B0xG7YRUZnCaVYUJysoTu4sNXZDsoaIxECeCxISfUwrJE1U6+h3lAZaImkX/RtE8rmOvV9PWrCiU2rHgn1qMsDYAU+vKTIf88R4VKxytRdpTVvYlJ1zCtRilxqrrUb+rpvB/5p78pDftRNK7TSaUoQrIedxK+sRIfyiYTzl+Kp1vlDT/D6yOgtmWVvPz2rTfmerP/+azlt2fbVDbKjfhzBkDu5Cr/V5ElWiE3nNS9q0kPig9sg+8EnuXgTeChEc7Kq29GODCKcYCwplwimSskpQfDPGwqsDMaSVrp4db7ySe/Vyn1YXYDCdsBxxY4EqpZeMf1oYSTsUdN;
JSESSIONID=DQE4xgyrKu2kZO9uoMLhcbH:-1
Via: 1.1 hz-rtc3 (squid/3.4.5)
Surrogate-Capability: hz-rtc3=Surrogate/1.0
X-Forwarded-For: 172.17.210.5
Cache-Control: max-age=259200, only-if-cached
Connection: keep-alive


--
2014/07/29 10:52:32.816 kid1| ctx: enter level  0:
'https://servera.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_J-m1gK4-EeOvOJ84krOqLg/_fOPWkv3TEeOaa7Y2RPnTQg/FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4'
2014/07/29 10:52:32.816 kid1| http.cc(749) processReplyHeader: HTTP Server
local=172.17.192.145:46722 remote=172.17.192.33:3128 FD 16 flags=1
2014/07/29 10:52:32.816 kid1| http.cc(750) processReplyHeader: HTTP Server
REPLY:
-
HTTP/1.1 200 OK
X-Powered-By: Servlet/3.0
Content-Disposition: attachment
Last-Modified: Thu, 01 Jan 1970 00:00:00 GMT
Date: Mon, 28 Jul 2014 06:36:46 GMT
Accept-Ranges: bytes
Expires: Thu, 23 Jul 2015 06:36:46 GMT
Cache-Control: public, s-maxage=31104000
ETag: immutable
Content-Type: application/octet-stream
Content-Length: 3214
Content-Language: en-US
Age: 72948
X-Cache: HIT from hz-rtc2
Via: 1.1 hz-rtc2 (squid/3.4.5)
Connection: keep-alive

# request & reply for eclipse client

2014/07/29 10:54:21.468 kid1| client_side.cc(2407) parseHttpRequest: HTTP
Client local=172.17.192.145:9443 remote=172.17.210.5:49645 FD 12 flags=1
2014/07/29 10:54:21.468 kid1| client_side.cc(2408) parseHttpRequest: HTTP
Client REQUEST:
-
POST /ccm/service/com.ibm.team.scm.common.IScmService HTTP/1.1
http.useragent: com.ibm.team.repository.transport.client.RemoteTeamService
Accept-Language: en-US
Accept: text/xml
Accept-Charset: UTF-8
X-com-ibm-team-userid: scm
X-com-ibm-team-marshaller-version: 0.2
X-com-ibm-team-service-version: 12
Accept-Encoding: gzip
Authorization: jauth user_token=75a2f6b2ba2e41008b6d68c9deb78a84
X-com-ibm-team-configuration-versions:
com.ibm.team.jazz.foundation=4.0.6,com.ibm.team.rtc=4.0.6
User-Agent: Jakarta 

[squid-users] Re: Sibling cache peer for a HTTPS reverse proxy

2014-07-28 Thread Makson
# request & reply for web browser client

2014/07/29 11:10:04.351 kid1| client_side.cc(2407) parseHttpRequest: HTTP
Client local=172.17.192.145:9443 remote=172.17.210.5:49651 FD 12 flags=1
2014/07/29 11:10:04.628 kid1| client_side.cc(2408) parseHttpRequest: HTTP
Client REQUEST:
-
GET
/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_J-m1gK4-EeOvOJ84krOqLg/_fOPWkv3TEeOaa7Y2RPnTQg/FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4
HTTP/1.1
Host: servera.domain:9443
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:30.0) Gecko/20100101
Firefox/30.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: JazzFormAuth=Form;
LtpaToken2=Qs4sUFYCnMUEuCqSz3B0I+4LsGmHSjfqLfKJMlEUiuR8GTzSXXfgORRoqCCejdRxs53LZSEUsR/d7vqeALUsftR4yRZo6WfjzmCrvlc33hgVOMRoefiWzl1aPwbqLV71aLJFjzyuKkm/Niq37vC9M6Q+gjXPXbHxC3hv7SKIdloDJ0qPC6MNAKXbD+r4jAhQFp8STGXPSGy4QL2WMb6Q4536jo7Hzx8EtXsenxunMfeLCR+Y9HsXZuIjdQpNxoVyFfyQhnDQWUtaXYO5uo1iIDCWf/wPdIBwFxWcTMvYx2x4O2Y9uush0Twv3UBbTRT9mespX+RTJUDALjZpHWzZBAcHuU20V2EISX/LvPMwxN9OtcE3dI1B0xG7YRUZnCaVYUJysoTu4sNXZDsoaIxECeCxISfUwrJE1U6+h3lAZaImkX/RtE8rmOvV9PWrCiU2rHgn1qMsDYAU+vKTIf88R4VKxytRdpTVvYlJ1zCtRilxqrrUb+rpvB/5p78pDftRNK7TSaUoQrIedxK+sRIfyiYTzl+Kp1vlDT/D6yOgtmWVvPz2rTfmerP/+azlt2fbVDbKjfhzBkDu5Cr/V5ElWiE3nNS9q0kPig9sg+8EnuXgTeChEc7Kq29GODCKcYCwplwimSskpQfDPGwqsDMaSVrp4db7ySe/Vyn1YXYDCdsBxxY4EqpZeMf1oYSTsUdN;
JSESSIONID=DQE4xgyrKu2kZO9uoMLhcbH:-1
Connection: keep-alive


--
2014/07/29 11:10:04.631 kid1| http.cc(2219) sendRequest: HTTP Server
local=172.17.192.145:46736 remote=172.17.192.33:3128 FD 16 flags=1
2014/07/29 11:10:04.631 kid1| http.cc(2220) sendRequest: HTTP Server
REQUEST:
-
GET
https://servera.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_J-m1gK4-EeOvOJ84krOqLg/_fOPWkv3TEeOaa7Y2RPnTQg/FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4
HTTP/1.1
Host: servera.domain:9443
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:30.0) Gecko/20100101
Firefox/30.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: JazzFormAuth=Form;
LtpaToken2=Qs4sUFYCnMUEuCqSz3B0I+4LsGmHSjfqLfKJMlEUiuR8GTzSXXfgORRoqCCejdRxs53LZSEUsR/d7vqeALUsftR4yRZo6WfjzmCrvlc33hgVOMRoefiWzl1aPwbqLV71aLJFjzyuKkm/Niq37vC9M6Q+gjXPXbHxC3hv7SKIdloDJ0qPC6MNAKXbD+r4jAhQFp8STGXPSGy4QL2WMb6Q4536jo7Hzx8EtXsenxunMfeLCR+Y9HsXZuIjdQpNxoVyFfyQhnDQWUtaXYO5uo1iIDCWf/wPdIBwFxWcTMvYx2x4O2Y9uush0Twv3UBbTRT9mespX+RTJUDALjZpHWzZBAcHuU20V2EISX/LvPMwxN9OtcE3dI1B0xG7YRUZnCaVYUJysoTu4sNXZDsoaIxECeCxISfUwrJE1U6+h3lAZaImkX/RtE8rmOvV9PWrCiU2rHgn1qMsDYAU+vKTIf88R4VKxytRdpTVvYlJ1zCtRilxqrrUb+rpvB/5p78pDftRNK7TSaUoQrIedxK+sRIfyiYTzl+Kp1vlDT/D6yOgtmWVvPz2rTfmerP/+azlt2fbVDbKjfhzBkDu5Cr/V5ElWiE3nNS9q0kPig9sg+8EnuXgTeChEc7Kq29GODCKcYCwplwimSskpQfDPGwqsDMaSVrp4db7ySe/Vyn1YXYDCdsBxxY4EqpZeMf1oYSTsUdN;
JSESSIONID=DQE4xgyrKu2kZO9uoMLhcbH:-1
Via: 1.1 hz-rtc3 (squid/3.4.5)
Surrogate-Capability: hz-rtc3=Surrogate/1.0
X-Forwarded-For: 172.17.210.5
Cache-Control: max-age=259200, only-if-cached
Connection: keep-alive


--
2014/07/29 11:10:04.632 kid1| ctx: enter level  0:
'https://servera.domain:9443/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_J-m1gK4-EeOvOJ84krOqLg/_fOPWkv3TEeOaa7Y2RPnTQg/FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4'
2014/07/29 11:10:04.632 kid1| http.cc(749) processReplyHeader: HTTP Server
local=172.17.192.145:46736 remote=172.17.192.33:3128 FD 16 flags=1
2014/07/29 11:10:04.632 kid1| http.cc(750) processReplyHeader: HTTP Server
REPLY:
-
HTTP/1.1 200 OK
X-Powered-By: Servlet/3.0
Content-Disposition: attachment
Last-Modified: Thu, 01 Jan 1970 00:00:00 GMT
Date: Mon, 28 Jul 2014 06:36:46 GMT
Accept-Ranges: bytes
Expires: Thu, 23 Jul 2015 06:36:46 GMT
Cache-Control: public, s-maxage=31104000
ETag: immutable
Content-Type: application/octet-stream
Content-Length: 3214
Content-Language: en-US
Age: 73999
X-Cache: HIT from hz-rtc2
Via: 1.1 hz-rtc2 (squid/3.4.5)
Connection: keep-alive

# request & reply for eclipse client

2014/07/29 11:27:06.144 kid1| client_side.cc(2407) parseHttpRequest: HTTP
Client local=172.17.192.145:9443 remote=172.17.210.5:49664 FD 12 flags=1
2014/07/29 11:27:06.144 kid1| client_side.cc(2408) parseHttpRequest: HTTP
Client REQUEST:
-
GET
/ccm/service/com.ibm.team.scm.common.IVersionedContentService/content/com.ibm.team.filesystem/FileItem/_J-m1gK4-EeOvOJ84krOqLg/_fOPWkv3TEeOaa7Y2RPnTQg/FHFMF8a7A01tlvpKekGYG9gxlVc3bigGpRMSA11YKZ4
HTTP/1.1
http.useragent:
com.ibm.team.filesystem.client.internal.content.FileContentManager
Accept: text/json
Accept: */*
Accept-Charset: UTF-8
Accept-Language: en-US
X-com-ibm-team-userid: scm
Authorization: jauth user_token=334e081faf6044508e56e14632215d16
X-com-ibm-team-configuration-versions:

Re: [squid-users] External ACL tags

2014-07-28 Thread Amos Jeffries
On 29/07/2014 4:42 a.m., Steve Hill wrote:
 
 I'm trying to build ACLs based on the tags returned by an external ACL,
 but I can't get it to work.
 
 These are the relevant bits of my config:
 
 external_acl_type preauth children-max=1 concurrency=100 ttl=0
 negative_ttl=0 %SRC %{User-Agent} %URI %METHOD /usr/sbin/squid-preauth
 acl preauth external preauth
 acl need_http_auth tag http_auth
 http_access allow !tproxy !tproxy_ssl !https preauth
 http_access allow !preauth_done preauth_tproxy
 http_access allow proxy_auth postauth
 
 
 
 I can see the external ACL is being called and setting various tags:
 
 2014/07/28 17:29:40.634 kid1| external_acl.cc(1503) Start:
 externalAclLookup: looking up for '2a00:1a90:5::14
 Wget/1.12%20(linux-gnu) http://nexusuk.org/%7Esteve/empty GET' in
 'preauth'.
 2014/07/28 17:29:40.634 kid1| external_acl.cc(1513) Start:
 externalAclLookup: will wait for the result of '2a00:1a90:5::14
 Wget/1.12%20(linux-gnu) http://nexusuk.org/%7Esteve/empty GET' in
 'preauth' (ch=0x7f1409a399f8).
 2014/07/28 17:29:40.634 kid1| external_acl.cc(871) aclMatchExternal:
 2a00:1a90:5::14 Wget/1.12%20(linux-gnu)
 http://nexusuk.org/%7Esteve/empty GET: return -1.
 2014/07/28 17:29:40.634 kid1| Acl.cc(177) matches: checked: preauth = -1
 async
 2014/07/28 17:29:40.634 kid1| Acl.cc(177) matches: checked:
 http_access#7 = -1 async
 2014/07/28 17:29:40.634 kid1| Acl.cc(177) matches: checked: http_access
 = -1 async
 2014/07/28 17:29:40.635 kid1| external_acl.cc(1371)
 externalAclHandleReply: reply={result=ERR, notes={message:
 53d67a74$2a00:1a90:5::14$baa34e80d2d5fb2549621f36616dce9000767e93b6f86b5dc8732a8c46e676ff;
 tag: http_auth; tag: cp_auth; tag: preauth_ok; tag: preauth_done; }}

Hi Steve,
 This is how tag= keys were originally designed to work: only one tag can
be assigned to any HTTP transaction. The tag ACL type and
%EXT_TAG configurations still operate that way.

The note ACL type should match against values in the tag key name the same
as any other annotation. If that does not work, try a different key name
than tag=.
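
A minimal sketch of that note-ACL approach (the ACL name is made up; the key name
tag matches what the helper reply quoted above carries):

  acl preauth_done_note note tag preauth_done
  http_access allow !preauth_done_note preauth_tproxy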

Amos