[squid-users] squid sslbump server-first local loops?

2014-04-11 Thread Amm

Hello,

I accidentally came across this. I was trying to test what TLS version 
my squid reports.


So I ran this command:
openssl s_client -connect 192.168.1.2:8081

where 8081 is https_port on which squid runs. (with sslbump)

And BOOM, squid went into an infinite loop and started running out of 
file descriptors!


It continued the loop even after I ctrl-c'ed the openssl.

I suppose this happens due to server-first in sslbump, where squid keeps 
trying to connect to itself in an infinite loop.


Port 8081 is NOT listed in Safe_ports. So shouldn't squid be blocking it 
before trying server-first?


Or shouldn't squid check something like this?

If (destIP == selfIP and destPort == selfPort) then break?
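
For what it's worth, a defensive ACL along these lines in squid.conf would be
one way to stop such requests before squid ever opens the server-first
connection (the address below simply matches the example above and is otherwise
an assumption):

# deny requests whose destination is the proxy's own listening address
acl to_self dst 192.168.1.2/32
http_access deny to_self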

I am also not sure if this can be used to DoS. So just reporting,

Amm.


[squid-users] fallback to TLS1.0 if server closes TLS1.2?

2014-04-11 Thread Amm

Hello,

I recently upgraded OpenSSL from 1.0.0 to 1.0.1 (which supports TLS1.2)

I also recompiled squid against new OpenSSL.

Now there is this (BROKEN) bank site:

https://www.mahaconnect.in

This site closes the connection if you try TLS1.2 or TLS1.1.

When squid tries to connect, it says:

Failed to establish a secure connection to 125.16.24.200

The system returned: (71) Protocol error (TLS code: 
SQUID_ERR_SSL_HANDSHAKE) Handshake with SSL server failed: 
error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake 
failure
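
For reference, the server's behaviour can be reproduced outside squid with
openssl s_client, forcing each protocol version in turn (the -tls1_2 option
needs the s_client shipped with OpenSSL 1.0.1):

openssl s_client -connect www.mahaconnect.in:443 -tls1_2
openssl s_client -connect www.mahaconnect.in:443 -tls1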


The site works if I specify:
sslproxy_options NO_TLSv1_1


But then it stops using TLS1.2 for sites supporting it.

When I try in Chrome or Firefox without proxy settings, they auto-detect 
this and fall back to TLS1.0/SSLv3.


So my question is: shouldn't squid fall back to TLS1.0 when TLS1.2/1.1 
fails, just like Chrome/Firefox do?


(PS: I cannot tell the bank to upgrade.)

Amm.


[squid-users] Re: Squid not sending request to web

2014-04-11 Thread fordjohn
Hi Amos,
Below is the router script I have pasted into the firewall section of my
Tomato router. It does not seem to forward packets to the proxy. Any ideas
what I am doing wrong? I am a newbie who is trying to learn.
Thanks for your help.

# IPv4 address of proxy
PROXYIP4=192.168.1.16
# interface facing clients
CLIENTIFACE=eth0
# arbitrary mark used to route packets by the firewall. May be anything from 1 to 64.
FWMARK=2
# permit Squid box out to the Internet
iptables -t mangle -A PREROUTING -p tcp --dport 80 -s $PROXYIP4 -j ACCEPT
# mark everything else on port 80 to be routed to the Squid box
iptables -t mangle -A PREROUTING -i $CLIENTIFACE -p tcp --dport 80 -j MARK --set-mark $FWMARK
iptables -t mangle -A PREROUTING -m mark --mark $FWMARK -j ACCEPT
# NP: Ensure that traffic from inside the network is allowed to loop back inside again.
iptables -t filter -A FORWARD -i $CLIENTIFACE -o $CLIENTIFACE -p tcp --dport 80 -j ACCEPT
ip rule add fwmark $FWMARK table proxy
ip route add default via $PROXYIP4 table proxy
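
For what it's worth, the router-side script above also needs matching rules on
the Squid box itself; a rough sketch, assuming the Squid box receives the routed
traffic on eth0 and Squid has an intercept port (both are assumptions):

# on the Squid box: send the routed port-80 traffic into Squid's intercept port
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3129
# and in squid.conf:
# http_port 3129 intercept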

Below is a listing of my router's iptables:

root@Router:/tmp/home/root# iptables -t nat -vL; iptables -t filter -vL
Chain PREROUTING (policy ACCEPT 106 packets, 13596 bytes)
 pkts bytes target         prot opt in     out    source           destination
    0     0 ACCEPT         udp  --  any    any    anywhere         anywhere        udp dpt:1194
    0     0 WANPREROUTING  all  --  any    any    anywhere         wan-ip.Router
    0     0 DROP           all  --  ppp0   any    anywhere         192.168.1.0/24
    0     0 upnp           all  --  any    any    anywhere         wan-ip.Router

Chain POSTROUTING (policy ACCEPT 22 packets, 1867 bytes)
 pkts bytes target         prot opt in     out    source           destination
   48  3298 MASQUERADE     all  --  any    tun11  192.168.1.0/24   anywhere
    0     0 MASQUERADE     all  --  any    ppp0   anywhere         anywhere
    6  2412 SNAT           all  --  any    br0    192.168.1.0/24   192.168.1.0/24  to:192.168.1.1

Chain OUTPUT (policy ACCEPT 28 packets, 4279 bytes)
 pkts bytes target         prot opt in     out    source           destination

Chain WANPREROUTING (1 references)
 pkts bytes target         prot opt in     out    source           destination
    0     0 DNAT           icmp --  any    any    anywhere         anywhere        to:192.168.1.1
    0     0 DNAT           tcp  --  any    any    192.168.1.0/24   anywhere        tcp dpt:www to:192.168.1.16:3128
    0     0 DNAT           udp  --  any    any    192.168.1.0/24   anywhere        udp dpt:www to:192.168.1.16:3128
    0     0 DNAT           tcp  --  any    any    anywhere         anywhere        tcp dpt:63893 to:192.168.1.16
    0     0 DNAT           udp  --  any    any    anywhere         anywhere        udp dpt:63893 to:192.168.1.16

Chain upnp (1 references)
 pkts bytes target         prot opt in     out    source           destination

Chain INPUT (policy DROP 0 packets, 0 bytes)
 pkts bytes target         prot opt in     out    source           destination
    0     0 ACCEPT         all  --  tun21  any    anywhere         anywhere
    0     0 ACCEPT         udp  --  any    any    anywhere         anywhere        udp dpt:1194
   25  2970 ACCEPT         all  --  tun11  any    anywhere         anywhere
    0     0 DROP           all  --  any    any    anywhere         anywhere        state INVALID
 5813 7936K ACCEPT         all  --  any    any    anywhere         anywhere        state RELATED,ESTABLISHED
    0     0 shlimit        tcp  --  any    any    anywhere         anywhere        tcp dpt:ssh state NEW
    8   564 ACCEPT         all  --  lo     any    anywhere         anywhere
  119 14722 ACCEPT         all  --  br0    any    anywhere         anywhere

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target         prot opt in     out    source           destination
    0     0 DROP           all  --  br0    vlan1  anywhere         anywhere
    0     0 DROP           all  --  br0    ppp0   anywhere         anywhere
    0     0 DROP           all  --  br0    vlan2  anywhere         anywhere
    0     0 ACCEPT         all  --  tun21  any    anywhere         anywhere
 5554 7375K ACCEPT         all  --  tun11  any    anywhere         anywhere
 3638  539K                all  --  any    any    anywhere         anywhere        account: network/netmask: 192.168.1.0/255.255.255.0 name: lan
    0     0 ACCEPT         all  --  br0    br0    anywhere         anywhere
  280 DROP                 all  --  any    any    anywhere         anywhere        state INVALID
   82  5024 TCPMSS         tcp  --  any    any    anywhere         an

Re: [squid-users] Re: Cache Windows Updates ONLY

2014-04-11 Thread Nick Hill
Hi Eliezer

I have re-compiled squid 3.4.3 along with the storeid_file_rewrite helper.
(Maybe largefile should be a default config directive!)

I added the following to squid.conf
store_id_program /usr/local/squid/libexec/storeid_file_rewrite /etc/squid3/storeid_rewrite
store_id_children 40 startup=10 idle=5 concurrency=0
store_id_access allow windowsupdate
store_id_access deny all

My /etc/squid3/storeid_rewrite
^http:\/\/.+?\.ws\.microsoft\.com\/.+?_([0-9a-z]{40})\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip|psf|appx)    http://wupdate.squid.local/$1
^http:\/\/.+?\.windowsupdate\.com\/.+?_([0-9a-z]{40})\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip|psf|appx)    http://wupdate.squid.local/$1

echo "http://download.windowsupdate.com/msdownload/update/common/2014/04/11935736_1ad4d6ce4701a9a52715213f48c337a1b4121dff.cab" \
  | /usr/local/squid/libexec/storeid_file_rewrite /etc/squid3/storeid_rewrite
OK store-id=http://wupdate.squid.local/1ad4d6ce4701a9a52715213f48c337a1b4121dff

Still need to check it works OK.
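
For completeness, the store_id_access lines above reference a "windowsupdate"
ACL that is not shown here; a minimal sketch of such an ACL (the domain list is
an assumption, chosen to match the patterns in the rewrite file):

acl windowsupdate dstdomain .windowsupdate.com .ws.microsoft.com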

On 10 April 2014 20:07, Eliezer Croitoru  wrote:
> Hey Nick,
>
> In case you do know the tokens' meaning and it is working properly, you
> can try to use StoreID in 3.4.X:
> http://wiki.squid-cache.org/Features/StoreID
>
> It is designed to let you handle this specific issue, if you are sure what it is.
>
> About the 4GB or 1GB updates, it's pretty simple.
> Microsoft releases an update which contains "everything", even though the
> update for your machine is only part of the file.
> That is what I found the last time I verified the issue.
>
> There is also another side: OSes become more and more complex, and an
> update can be really big, almost replacing half of the OS components.
>
> Whichever of the options works for you is fine; still, I have not seen
> Microsoft's cache solution.
> What is it called?
>
> Eliezer
>
>
> On 04/10/2014 08:50 PM, Nick Hill wrote:
>>
>> Is there a convenient way to configure Squid to do this?
>>
>> Thanks.
>
>


Re: [squid-users] request_header_add question

2014-04-11 Thread Amos Jeffries
On 11/04/2014 10:18 p.m., Kein Name wrote:
> 
> 
> Amos Jeffries wrote:
>>> Config:
>>> cache_peer 10.1.2.3 parent 8000 0 no-query originserver login=PASS
>>>
>>
>> This is a origin server peer. The header delivered to it is
>> WWW-Authenticate. Proxy-Authenticate is invalid on connections to origin
>> servers.
>>
>> Is your proxy a reverse-proxy or a forward-proxy?
>>
> 
> 
> It is a reverse proxy.
> 

Okay. In that case Squid is operating as an origin itself: it works solely
on the WWW-Authenticate header, so you can switch all my earlier mentions of
the Proxy-Auth header to WWW-Auth.


> 
>> Which of the servers (your proxy or the origin) is validating the
>> authentication?
>>
>>
> 
> The origin server.
> 

That might explain why %ul is not working. %ul is the username from a login
authenticated / validated by the proxy itself.

If the proxy is not validating the authentication itself, then you only
have access to the username via an external ACL helper partially
decoding the auth header token and returning the username found to Squid.
 That value appears in either %un "any user name/label known" or %ue
"user=X kv-pair from external ACL helper"

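A minimal sketch of that approach, assuming an external helper that decodes the
Authorization header value and prints "OK user=..." back to Squid (the helper
path, the ACL name and the helper itself are assumptions, not existing squid
components):

external_acl_type auth_user ttl=60 %>{Authorization} /usr/local/bin/decode_auth_user
acl got_user external auth_user
request_header_add X-Authenticated-User "%ue" got_user

The external ACL would also need to be referenced from an http_access rule so
that the helper actually runs and its user= value is cached before the header
is added.
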
> 
>>> The config seems to work; squid shows me the login dialog of the
>>> cache_peer. For several reasons I have to feed the username back as a
>>> header value.
>>> I also tried login=PASSTHRU for testing, but without any difference.
>>
>> FWIW:
>> * "PASSTHRU" sends the received Proxy-Authenticate header (if any)
>> through to the peer untouched. Leaving no header if none provided by the
>> client.

NTLM on a reverse-proxy, with the origin performing the authentication, is
exactly why PASSTHRU was created.

You *may* also need the connection-auth=on option to be set on both the
http_port directive and the cache_peer directive, since authentication is
not enabled in the proxy. The end-to-end connection pinning required by NTLM
should be enabled automatically when WWW-Authenticate: NTLM is sighted, but
these ensure that it is working anyway.
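
For example, a sketch based on the cache_peer line quoted earlier (the http_port
line and its defaultsite value are assumptions about your listening setup):

http_port 80 accel defaultsite=backend.example.local connection-auth=on
cache_peer 10.1.2.3 parent 8000 0 no-query originserver login=PASSTHRU connection-auth=on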


>>
>> * "PASS" tries to convert credentials to Basic auth and deliver to the
>> peer in Proxy-Authenticate. Will try to generate a header from any
>> available other sources of credentials if none are provided by the client.
>>
>> In both of the above the peer being an origin treats them as not having
>> www-Authenticate header (naturally) and responds with a challenge to get
>> some.
>>
>>
> 
> The origin peer sends the "WWW-Authenticate: NTLM" challenge, upon which
> the rev proxy shows the user/password popup.
> The rev proxy then replies with an "Authorization: NTLM
> TlRMTVNTUAADGAAYAGYAAADuAO4A [...]" header.
> So I think PASS is OK, as nothing seems to be converted from NTLM...
> Or am I wrong?

By "proxy" I hope you mean these details are coming from the client and
being passed on as-is by the proxy?


Amos



Re: [squid-users] sslbump - firefox sec_error_inadequate_key_usage

2014-04-11 Thread Amos Jeffries
On 12/04/2014 1:19 a.m., Amm wrote:
> On Friday, 11 April 2014 6:29 PM, Amos wrote:
> 
> 
>> It seems something in Firefox was buggy and they have a workaround
>> coming out in version 29.0; whether that will fix the warning display or
>> just allow people to ignore/bypass it like other cert issues, I'm not
>> certain.
> 
>> Amos
> 
> Ok, but then how come Firefox did not show this warning just yesterday?
> 

Unknown. You got unlucky?
People have been reporting it on and off since late last year.

> Squid version and configuration were exactly the same yesterday.
> 
> Unfortunately I cannot switch OpenSSL back to the older version, else I
> would have checked whether squid "mimicked" key_usage in that version
> as well or not.


Amos


Re: [squid-users] sslbump - firefox sec_error_inadequate_key_usage

2014-04-11 Thread Amm
On Friday, 11 April 2014 6:29 PM, Amos wrote:


> It seems something in Firefox was buggy and they have a workaround
> coming out in version 29.0; whether that will fix the warning display or
> just allow people to ignore/bypass it like other cert issues, I'm not
> certain.

> Amos

Ok, but then how come Firefox did not show this warning just yesterday?

Squid version and configuration were exactly the same yesterday.

Unfortunately I cannot switch OpenSSL back to the older version, else I
would have checked whether squid "mimicked" key_usage in that version
as well or not.

Amm



Re: [squid-users] sslbump - firefox sec_error_inadequate_key_usage

2014-04-11 Thread Amos Jeffries
On 11/04/2014 11:55 p.m., Amm wrote:
> On Friday, 11 April 2014 5:19 PM
> 
> 
>> I also use this patch and would like to know if it is possible to somehow
>> get by without it.
>>
>> Could it be due to the fact that squid caches the generated SSL certificates
>> in the ssl_crtd store?
>> So do we need to clear the store when the root CA certificate for SSL bump
>> is regenerated?

Yes to both of those questions.

They are not related to the warning in Firefox though.

> 
>> Raf
> 
> I had cleared the ssl cert store but the issue still occurred (without the patch).
> 
> So finally I gave up trying different things and used the patch.
> 
> Here is the exact same issue discussed earlier on the mailing list:
> http://www.squid-cache.org/mail-archive/squid-users/201311/0310.html
> 
> Amm
> 

It seems something in Firefox was buggy and they have a workaround
coming out in version 29.0; whether that will fix the warning display or
just allow people to ignore/bypass it like other cert issues, I'm not
certain.

Amos



Re: [squid-users] sslbump - firefox sec_error_inadequate_key_usage

2014-04-11 Thread Amm
On Friday, 11 April 2014 5:19 PM


> I also use this patch and would like to know if it is possible to somehow
> get by without it.
> 
> Could it be due to the fact that squid caches the generated SSL certificates
> in the ssl_crtd store?
> So do we need to clear the store when the root CA certificate for SSL bump
> is regenerated?

> Raf

I had cleared the ssl cert store but the issue still occurred (without the patch).

So finally I gave up trying different things and used the patch.

Here is the exact same issue discussed earlier on the mailing list:
http://www.squid-cache.org/mail-archive/squid-users/201311/0310.html

Amm



RE: [squid-users] sslbump - firefox sec_error_inadequate_key_usage

2014-04-11 Thread Rafael Akchurin
I also use this patch and would like to know if it is possible to somehow get by
without it.

Could it be due to the fact that squid caches the generated SSL certificates in the
ssl_crtd store? So do we need to clear the store when the root CA certificate for
SSL bump is regenerated?
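
For reference, clearing and recreating the generated-certificate store usually
looks something like the following; the paths, the store size and the helper
location are assumptions and depend on your build and your sslcrtd_program line:

squid -k shutdown
rm -rf /var/lib/ssl_db
/usr/lib/squid/ssl_crtd -c -s /var/lib/ssl_db -M 4MB
# fix ownership for the user squid runs as, then start squid again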

Raf

From: Amm 
Sent: Friday, April 11, 2014 1:38 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] sslbump - firefox sec_error_inadequate_key_usage

On Friday, 11 April 2014 4:46 PM, Amos wrote:


> On 11/04/2014 10:16 p.m., Amm wrote:
>> After this upgrade, i.e. from 1.0.0 to 1.0.1, Firefox started giving a
>> certificate error stating "sec_error_inadequate_key_usage".
>>
>> This does not happen for all domains, but it looks like it happens ONLY
>> for google servers, i.e. youtube, news.google.com.
>>
>> Certificate is issued for *.google.com with lots of alternate names.
>>
>> Is it Firefox bug or squid bug?



> Hard to say.

> "key_usage" is an explicit restriction on what circumstances and
> actions the certificate can be used for.

> The message you are seeing indicates one of two things:
> Either, the website owner has placed some limitations on how their
> website certificate can be used and your SSL-bumping is violating those
> restrictions.


As I said, these are google domains. You can check
https://news.google.com OR https://www.youtube.com

Both have the same certificate. *.google.com is the primary name and
youtube.com is one of the many alternate names.

It worked before I upgraded to OpenSSL 1.0.1.

The sslbump configuration was working till yesterday. Today it still
works for all other domains (Yahoo, Hotmail etc.)

In fact https://www.google.com also works, because it has a specific
certificate and not the same *.google.com certificate.


> Or, the creator of the certificate you are using to sign the generated
> SSL-bump certificates has restricted your signing certificate
> capabilities. (ie the main Trusted Authorities prohibit using certs they
> sign as secondary CA to generate fake certs like SSL-bump does).

> Either case is just as likely.

Did OpenSSL 1.0.0 not support key_usage? And hence squid did not
use it either?

I wonder why other Firefox+sslbump users are not complaining about this?

I see only a few people complaining, and that too was in November 2013.

I used the patch here:
http://www.squid-cache.org/mail-archive/squid-users/201311/att-0310/squid-3.3.9-remove-key-usage.patch

And it fixes the issue.

But I would prefer to do it without the patch.

If I am the only one facing this, then what could be wrong?

Amm.


Re: [squid-users] sslbump - firefox sec_error_inadequate_key_usage

2014-04-11 Thread Amm
On Friday, 11 April 2014 4:46 PM, Amos wrote:


> On 11/04/2014 10:16 p.m., Amm wrote:
>> After this upgrade, i.e. from 1.0.0 to 1.0.1, Firefox started giving a
>> certificate error stating "sec_error_inadequate_key_usage".
>> 
>> This does not happen for all domains, but it looks like it happens ONLY
>> for google servers, i.e. youtube, news.google.com.
>> 
>> Certificate is issued for *.google.com with lots of alternate names.
>> 
>> Is it Firefox bug or squid bug?



> Hard to say.

> "key_usage" is an explicit restriction on what circumstances and
> actions the certificate can be used for.

> The message you are seeing indicates one of two things:
> Either, the website owner has placed some limitations on how their
> website certificate can be used and your SSL-bumping is violating those
> restrictions.


As I said, these are google domains. You can check
https://news.google.com OR https://www.youtube.com

Both have the same certificate. *.google.com is the primary name and
youtube.com is one of the many alternate names.

It worked before I upgraded to OpenSSL 1.0.1.

The sslbump configuration was working till yesterday. Today it still
works for all other domains (Yahoo, Hotmail etc.)

In fact https://www.google.com also works, because it has a specific
certificate and not the same *.google.com certificate.


> Or, the creator of the certificate you are using to sign the generated
> SSL-bump certificates has restricted your signing certificate
> capabilities. (ie the main Trusted Authorities prohibit using certs they
> sign as secondary CA to generate fake certs like SSL-bump does).

> Either case is just as likely.

Did OpenSSL 1.0.0 not support key_usage? And hence squid did not
use it either?

I wonder why other Firefox+sslbump users are not complaining about this?

I see only a few people complaining, and that too was in November 2013.

I used the patch here:
http://www.squid-cache.org/mail-archive/squid-users/201311/att-0310/squid-3.3.9-remove-key-usage.patch

And it fixes the issue.

But I would prefer to do it without the patch.

If I am the only one facing this, then what could be wrong?

Amm.


Re: [squid-users] sslbump - firefox sec_error_inadequate_key_usage

2014-04-11 Thread Amos Jeffries
On 11/04/2014 10:16 p.m., Amm wrote:
> Hello,
> 
> Yesterday I upgraded my OpenSSL version. (Although I was using OpenSSL 1.0.0 -
> not affected by Heartbleed - I upgraded nonetheless.)
> 
> 
> I am using sslbump (squid 3.4.4). Using Firefox 28.0 (latest 64bit tar.bz2)
> 
> After this upgrade, i.e. from 1.0.0 to 1.0.1, Firefox started giving a
> certificate error stating "sec_error_inadequate_key_usage".
> 
> This does not happen for all domains, but it looks like it happens ONLY for
> google servers, i.e. youtube, news.google.com.
> 
> Certificate is issued for *.google.com with lots of alternate names.
> 
> I also recompiled squid (with new OpenSSL) just to be sure.
> 
> I also cleared certificate store.
> 
> But error still occurs.
> 
> 
> A Google search gave me a patch for this for 3.3.9. But I just wanted to make
> sure whether there is any other way to resolve this issue (like some squid
> configuration directive).
> 
> So please let me know if the patch is the only way OR whether this has been resolved.
> 
> Is it Firefox bug or squid bug?
> 

Hard to say.
Is software correctly verifying and rejecting invalid SSL certificates a
bug?

"key_usage" is an explicit restriction on what circumstances and actions
the certificate can be used for.

The message you are seeing indicates one of two things:
Either, the website owner has placed some limitations on how their
website certificate can be used and your SSL-bumping is violating those
restrictions.

Or, the creator of the certificate you are using to sign the generated
SSL-bump certificates has restricted your signing certificate
capabilities (i.e. the main Trusted Authorities prohibit using certs they
sign as a secondary CA to generate fake certs like SSL-bump does).

Either case is just as likely.
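
One way to narrow it down is to inspect the Key Usage extension of the
certificates involved, e.g. (the CA file name below is a placeholder for
whatever signing certificate you configured for SSL-bump):

openssl x509 -noout -text -in myCA.pem | grep -A1 'X509v3 Key Usage'
openssl s_client -connect news.google.com:443 < /dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'X509v3 Key Usage'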

Amos


Re: [squid-users] request_header_add question

2014-04-11 Thread Kein Name


Amos Jeffries wrote:
>> Config:
>> cache_peer 10.1.2.3 parent 8000 0 no-query originserver login=PASS
>>
> 
> This is a origin server peer. The header delivered to it is
> WWW-Authenticate. Proxy-Authenticate is invalid on connections to origin
> servers.
> 
> Is your proxy a reverse-proxy or a forward-proxy?
> 


It is a reverse proxy.


> Which of the servers (your proxy or the origin) is validating the
> authentication?
> 
> 

The origin server.


>> The config seems to work; squid shows me the login dialog of the
>> cache_peer. For several reasons I have to feed the username back as a
>> header value.
>> I also tried login=PASSTHRU for testing, but without any difference.
> 
> FWIW:
> * "PASSTHRU" sends the received Proxy-Authenticate header (if any)
> through to the peer untouched. Leaving no header if none provided by the
> client.
> 
> * "PASS" tries to convert credentials to Basic auth and deliver to the
> peer in Proxy-Authenticate. Will try to generate a header from any
> available other sources of credentials if none are provided by the client.
> 
> In both of the above the peer being an origin treats them as not having
> www-Authenticate header (naturally) and responds with a challenge to get
> some.
> 
> 

The origin peer sends the "WWW-Authenticate: NTLM" challenge, upon which
the rev proxy shows the user/password popup.
The rev proxy then replies with an "Authorization: NTLM
TlRMTVNTUAADGAAYAGYAAADuAO4A [...]" header.
So I think PASS is OK, as nothing seems to be converted from NTLM...
Or am I wrong?


Bye
Stefan



[squid-users] sslbump - firefox sec_error_inadequate_key_usage

2014-04-11 Thread Amm
Hello,

Yesterday I upgraded my OpenSSL version. (Although I was using OpenSSL 1.0.0 - not 
affected by Heartbleed - I upgraded nonetheless.)


I am using sslbump (squid 3.4.4). Using Firefox 28.0 (latest 64bit tar.bz2)

After this upgrade, i.e. from 1.0.0 to 1.0.1, Firefox started giving a certificate 
error stating "sec_error_inadequate_key_usage".

This does not happen for all domains, but it looks like it happens ONLY for google 
servers, i.e. youtube, news.google.com.

Certificate is issued for *.google.com with lots of alternate names.

I also recompiled squid (with new OpenSSL) just to be sure.

I also cleared certificate store.

But error still occurs.


A Google search gave me a patch for this for 3.3.9. But I just wanted to make sure 
whether there is any other way to resolve this issue (like some squid configuration 
directive).

So please let me know if the patch is the only way OR whether this has been resolved.

Is it Firefox bug or squid bug?


Thanks in advance,


Amm.



Re: [squid-users] Re: Cache Windows Updates ONLY

2014-04-11 Thread Eliezer Croitoru

On 04/11/2014 08:37 AM, Nick Hill wrote:

I performed a SHA1 checksum on the downloaded file. The checksum was
6fda48f8c83be2a15f49b83b10fc3dc8c1d15774

The file was downloaded using wget, with the tokens. This matches the
part of the file name between the underscore and period.

The only thing we need for Squid to match is the part of the URL
between the underscore and period. If the checksum matches, we know
the content we are serving up is correct.
Do you by any chance have URLs that show this pattern, in the form of
logs?


Eliezer


Re: [squid-users] Re: Cache Windows Updates ONLY

2014-04-11 Thread Nick Hill
Dear Eliezer

Thank you for this. It appears the way forward would be to check that
the URL matches a pattern, and if it does, compute the store_id from
the checksum embedded in the URL. The same pattern might be used
across a large range of Windows Update objects, thereby avoiding cache
misses even when the same object is fetched with a significantly
different URL, for example across different Windows Update versions,
update methods and product versions.

A checksum match is a guarantee the object is identical.

I understand issues could arise from differing header information. I
suppose it is a matter of trying it and seeing.







On 10 April 2014 20:07, Eliezer Croitoru  wrote:
> Hey Nick,
>
> In case you do know the tokens' meaning and it is working properly, you
> can try to use StoreID in 3.4.X:
> http://wiki.squid-cache.org/Features/StoreID
>
> It is designed to let you handle this specific issue, if you are sure what it is.
>
> About the 4GB or 1GB updates, it's pretty simple.
> Microsoft releases an update which contains "everything", even though the
> update for your machine is only part of the file.
> That is what I found the last time I verified the issue.
>
> There is also another side: OSes become more and more complex, and an
> update can be really big, almost replacing half of the OS components.
>
> Whichever of the options works for you is fine; still, I have not seen
> Microsoft's cache solution.
> What is it called?
>
> Eliezer
>
>
> On 04/10/2014 08:50 PM, Nick Hill wrote:
>>
>> Is there a convenient way to configure Squid to do this?
>>
>> Thanks.
>
>


Re: [squid-users] request_header_add question

2014-04-11 Thread Kein Name


Amos Jeffries wrote:
> On 11/04/2014 7:26 p.m., Kein Name wrote:
>> Hello List,
>>
>> at the moment I need to use the request_header_add directive to supply
>> information to a cache_peer backend.
>> I intended to use:
>> request_header_add X-Authenticated-User "%ul"
>> but the "%ul" is expanded to a dash (-) and I wonder why, and how I can
>> submit the authenticated user to my backend.
>> Can someone give me a hint?
>>
>> Thanks!
>> Regards
>> Stefan König
>>
> 
> Squid version?
>  Authentication method being used by your proxy?
>  Does the cache_peer option " login=*: " (exact syntax) work?
> 
> Amos
> 
> 

Hello Amos,

thanks for your answer!

Version:
Squid Cache: Version 3.3.8

Auth Method: NTLM (according to the header)

Config:
cache_peer 10.1.2.3 parent 8000 0 no-query originserver login=PASS

The config seems to work; squid shows me the login dialog of the
cache_peer. For several reasons I have to feed the username back as a
header value.
I also tried login=PASSTHRU for testing, but without any difference.

regards
Stefan



Re: [squid-users] request_header_add question

2014-04-11 Thread Amos Jeffries
On 11/04/2014 7:26 p.m., Kein Name wrote:
> Hello List,
> 
> at the moment I need to use the request_header_add directive to supply
> information to a cache_peer backend.
> I intended to use:
> request_header_add X-Authenticated-User "%ul"
> but the "%ul" is expanded to a dash (-) and I wonder why, and how I can
> submit the authenticated user to my backend.
> Can someone give me a hint?
> 
> Thanks!
> Regards
> Stefan König
> 

Squid version?
 Authentication method being used by your proxy?
 Does the cache_peer option " login=*: " (exact syntax) work?
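
For reference, that option takes the form login=*:password and sends the
client's username to the peer with a fixed password as Basic credentials; a
sketch against the cache_peer line quoted above (the password value is a
placeholder):

cache_peer 10.1.2.3 parent 8000 0 no-query originserver login=*:dummypassword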

Amos



Re: [squid-users] Re: Cache Windows Updates ONLY

2014-04-11 Thread Stephen Borrill
On 10/04/2014 20:07, Eliezer Croitoru wrote:
> Hey Nick,
> 
> In case you do know the tokens' meaning and it is working properly, you
> can try to use StoreID in 3.4.X:
> http://wiki.squid-cache.org/Features/StoreID
> 
> It is designed to let you handle this specific issue, if you are sure what it is.
> 
> About the 4GB or 1GB updates, it's pretty simple.
> Microsoft releases an update which contains "everything", even though the
> update for your machine is only part of the file.
> That is what I found the last time I verified the issue.
> 
> There is also another side: OSes become more and more complex, and an
> update can be really big, almost replacing half of the OS components.
> 
> Whichever of the options works for you is fine; still, I have not seen
> Microsoft's cache solution.
> What is it called?

He's probably referring to WSUS:
http://en.wikipedia.org/wiki/Windows_Server_Update_Services

This isn't an HTTP cache solution; it downloads Windows updates and then
effectively acts as your own local Windows Update service - you point
your clients at it to get updates rather than the real ones.

-- 
Stephen


[squid-users] request_header_add question

2014-04-11 Thread Kein Name
Hello List,

at the moment I need to use the request_header_add directive to supply
information to a cache_peer backend.
I intended to use:
request_header_add X-Authenticated-User "%ul"
but the "%ul" is expanded to a dash (-) and I wonder why, and how I can
submit the authenticated user to my backend.
Can someone give me a hint?

Thanks!
Regards
Stefan König