Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-23 Thread garryd

On 2016-10-23 18:31, Amos Jeffries wrote:

On 23/10/2016 2:32 a.m., garryd wrote:

Since I started using Squid, its configuration has always been RFC
compliant by default, _but_ there were always knobs for users to make it
violate HTTP. It was in the hands of users to decide how to handle a web
resource. Now that is not always possible, and this topic is evidence of
it. For example, in terms of this topic, users can't violate this RFC
statement [1]:

   A Vary field value of "*" signals that anything about the request
   might play a role in selecting the response representation, possibly
   including elements outside the message syntax (e.g., the client's
   network address).  A recipient will not be able to determine whether
   this response is appropriate for a later request without forwarding
   the request to the origin server.  A proxy MUST NOT generate a Vary
   field with a "*" value.

[1] https://tools.ietf.org/html/rfc7231#section-7.1.4



Please name the option in any version of Squid which allowed Squid to
cache those "Vary: *" responses.

No such option ever existed. For the 20+ years Vary has existed Squid
has behaved in the same way it does today. For all that time you did
not notice these responses.


You are absolutely right, but there was no such abuse vector in the
past (at least in my practice). There used to be tools provided by the
developers so that admins could protect against trending abuse cases. So
the question arises: what has changed in Squid development policy? Why is
there no configuration option like 'ignore_vary [acl]', so highly
demanded by many users on the list? Personally, I'm not affected by the
Vary abuse, but I suppose there will be an increasing number of abuse
cases in the future. One of your answers confirmed my assumption
regarding the question:



 - there is a very high risk of copy-and-paste sysadmin spreading the
problems without realising what they are doing. Particularly since
those proposing it are so vocal about how great it *seems* for them.


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching http google deb files

2016-10-23 Thread garryd

On 2016-10-22 23:18, Heiler Bemerguy wrote:

I've never used ICAP, and I think hacking the code is way faster than
creating/using a separate service for that. And I'm not sure, but I
don't think I can manage to get this done with current squid's
options.


Hi,

For this case I also suggest using content adaptation, specifically
eCAP, for the following reasons:


* ACLs can be used to steer only the abusing replies to an eCAP service
that mangles the Vary field (see the sketch below the reference link)

* There is no need to apply a local patch to every new Squid version
* There is no need to build Squid from sources
* There is no need to run a separate daemon for content adaptation
* There is a sample adapter 'ecap_adapter_modifying' [1], prepared by
The Measurement Factory (many thanks!), which successfully modifies an
HTTP message's body. It can be modified to mangle HTTP headers instead.


[1] http://e-cap.org/Documentation
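
As a rough illustration (my sketch, not taken from the adapter's own
documentation), the squid.conf wiring for such an adapter could look like
this. The module path, service name, service URI, ACL name and regex are
assumptions for this example; use the values reported by the adapter you
actually build:

# load the eCAP adapter (path is a placeholder)
loadable_modules /usr/local/lib/ecap_adapter_modifying.so
ecap_enable on

# hypothetical service URI; take the real one from the adapter's docs
ecap_service varyFix respmod_precache ecap://e-cap.org/ecap/services/sample/modifying

# steer only the abusing replies (e.g. the Chrome MSI) to the adapter
acl chrome_msi url_regex googlechromestandaloneenterprise64\.msi
adaptation_access varyFix allow chrome_msi
adaptation_access varyFix deny all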

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-22 Thread garryd

On 2016-10-22 17:56, Antony Stone wrote:

Disclaimer: I am not a Squid developer.

On Saturday 22 October 2016 at 14:43:55, gar...@comnet.uz wrote:


IMO:

The only reason I believe [explains] why core developers of Squid tend
to move HTTP violating settings away from average users is to prevent
possible abuse/misuse.


I believe the reason is that one of Squid's goals is to be RFC
compliant, therefore it does not contain features which violate HTTP.


Nevertheless, I believe that core developers should publish an
_official_ explanation regarding this tendency, as it often becomes the
"center of gravity" of many topics.


Which "tendency"?

What are you asking for an official explanation of?


Antony.


Since I started using Squid, its configuration has always been RFC
compliant by default, _but_ there were always knobs for users to make it
violate HTTP. It was in the hands of users to decide how to handle a web
resource. Now that is not always possible, and this topic is evidence of
it. For example, in terms of this topic, users can't violate this RFC
statement [1]:


   A Vary field value of "*" signals that anything about the request
   might play a role in selecting the response representation, possibly
   including elements outside the message syntax (e.g., the client's
   network address).  A recipient will not be able to determine whether
   this response is appropriate for a later request without forwarding
   the request to the origin server.  A proxy MUST NOT generate a Vary
   field with a "*" value.

[1] https://tools.ietf.org/html/rfc7231#section-7.1.4
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-22 Thread garryd

On 2016-10-22 16:05, Yuri Voinov wrote:

Good explanations do not always help to get a good solution. A person
needs not an explanation but a solution.

So far I've seen a lot of excellent reasons why Squid cannot do
so-and-so in the normal configuration. However, such an explanation does
not help in solving problems.

Nothing personal, just observation.


IMO:

The only reason I believe explains why core developers of Squid tend to
move HTTP violating settings away from average users is to prevent
possible abuse/misuse. Options like 'refresh_pattern ... ignore-vary'
can severely affect the browsing experience if used by people without
enough knowledge of the HTTP protocol(s). The abuse can easily
compromise the reputation of the Squid software.


Fortunately, the license of Squid permits modification of the software.
There are many ways to get desired, not yet implemented features into
Squid:


* A group of enthusiasts can easily fork the project, name it
"Humboldt", for example, and implement options like 'refresh_pattern ...
ignore-vary' or 'host_forgery_verification off'. For example, some time
ago there was the Lusca project, which implemented address spoofing
(like TProxy) for BSD systems (among other features). The feature was
highly demanded, and the Squid project later implemented it for BSD
systems as well. Now Lusca is not so popular.


* Commercial organizations, such as ISPs or any other enterprise, can
hire a developer to implement the options.


* Many system administrators with programming skills can successfully
modify the Squid sources to reach the goal. The squid-users list and
Bugzilla remember those success stories.



Nevertheless, I believe that core developers should publish an
_official_ explanation regarding this tendency, as it often becomes the
"center of gravity" of many topics.


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-22 Thread garryd

On 2016-10-22 13:53, Rui Lopes wrote:

Hello,

I'm trying to receive a cached version of
googlechromestandaloneenterprise64.msi with:

refresh_pattern googlechromestandaloneenterprise64\.msi 4320 100% 4320
override-expire override-lastmod reload-into-ims ignore-reload
ignore-no-store ignore-private

and trying it with the following httpie command:

https_proxy=http://10.10.10.222:3128 http --verify=no -o chrome.msi
'https://dl.google.com/tag/s/appguid=%7B----%7D=%7B----%7D=en=4=0=Google%20Chrome=true/dl/chrome/install/googlechromestandaloneenterprise64.msi'

but squid never caches the response. it always shows:

1477125665.643   4040 10.10.10.1 TCP_MISS/200 50323942 GET
https://dl.google.com/tag/s/appguid=%7B----%7D=%7B----%7D=en=4=0=Google%20Chrome=true/dl/chrome/install/googlechromestandaloneenterprise64.msi
- HIER_DIRECT/216.58.210.174 [2] application/octet-stream

how can I make it cache?

-- RGL

PS I'm using squid 3.5.12-1ubuntu7.2 and my full squid.conf is:

acl localnet src 10.0.0.0/8 [1]
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access deny all

http_port \
3128 \
ssl-bump \
generate-host-certificates=on \
dynamic_cert_mem_cache_size=16MB \
key=/etc/squid/ssl_cert/ca.key \
cert=/etc/squid/ssl_cert/ca.pem

ssl_bump bump all

sslcrtd_program \
/usr/lib/squid/ssl_crtd \
-s /var/lib/ssl_db \
-M 16MB \
-b 4096 \
sslcrtd_children 5

# a ~15 GiB cache (only caches files that have a length of 2 GiB or less).
maximum_object_size 2 GB
cache_dir ufs /var/spool/squid 15000 16 256

cache_store_log daemon:/var/log/squid/store.log

shutdown_lifetime 2 seconds

coredump_dir /var/spool/squid

refresh_pattern googlechromestandaloneenterprise64\.msi 4320 100% 4320
override-expire override-lastmod reload-into-ims ignore-reload
ignore-no-store ignore-private



Links:
--
[1] http://10.0.0.0/8
[2] http://216.58.210.174


Hi,

This has already been well explained by Amos this month:

http://lists.squid-cache.org/pipermail/squid-users/2016-October/012869.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 4.x and Peek and Splice - Host Header Forgery

2016-10-18 Thread garryd

On 2016-10-18 22:42, John Wright wrote:

Hi

Replying to the list

Yes, I get that error on many different sites, the same exact error
about host headers. Also, if you watch the TTL on the amazonaws URL I
provided, it changes from 3 to 5 to 10 seconds to 60 to 10 and back and
forth. If you go to an online DNS lookup site like kloth, I see via
kloth a 5 second TTL.

I get a different TTL value at different times; it appears they don't
have a set TTL, but they change it often and it varies.
Right now it appears to be a TTL of 60 seconds, as you found, but
earlier and over the weekend it has shown 5 seconds, and even AWS
support verified it can vary as low as 5 seconds.
That being said, when it is changing every 3-5 seconds, which comes
and goes, Squid gives the header forgery errors as shown before.


The time interval between the client's and Squid's name lookups is
measured in milliseconds. So, in most cases, there would not be false
positives in environments where the same caching DNS server is used.


What specific issue do you encounter, apart from the alert messages and
Squid's inability to cache HTTP responses for "forged" HTTP requests?

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 4.x and Peek and Splice - Host Header Forgery

2016-10-18 Thread garryd

On 2016-10-18 18:32, John Wright wrote:

Hi,

I have a constant problem with Host header forgery detection on squid
doing peek and splice.

I see this most commonly with CDNs, Amazon and Microsoft, due to the
fact their TTL is only 5 seconds on certain DNS entries I'm connecting
to. So when my client connects through my Squid, I get host header
issues due to the constant DNS changes at these destinations.

I have read many things online, but how do I get around this? I
basically want to allow certain domains or IP subnets to not hit the
host header error (as things break at this point for me).

Any ideas ?

One example is

sls.update.microsoft.com [1]

Yes, my client and Squid use the same DNS server. I have even set up my
Squid host as a BIND server and tried that, just for fun, same issue.
Fact is, the DNS at these places changes so fast (5 seconds) that the
DNS response keeps changing.

I just need these approved destinations to make it through



Links:
--
[1] http://sls.update.microsoft.com/


Hi,

Are you sure that Squid and all your clients use the same _caching_ DNS
server? For example, here are the results from my server for the name
sls.update.microsoft.com:


$ dig sls.update.microsoft.com
...
sls.update.microsoft.com.           3345  IN  CNAME  sls.update.microsoft.com.nsatc.net.
sls.update.microsoft.com.nsatc.net.  215  IN  A      157.56.77.141
...


Second request after 3 seconds:

$ dig sls.update.microsoft.com
...
sls.update.microsoft.com.           3342  IN  CNAME  sls.update.microsoft.com.nsatc.net.
sls.update.microsoft.com.nsatc.net.  212  IN  A      157.56.77.141
...


Here I see that the TTL for the target A record is 300 seconds (not 5
seconds), and a _caching_ DNS server will serve the same A record to all
clients for at least 5 minutes. That behaviour will not introduce false
positives in host forgery detection.




On the other hand, if the DNS server is not a _caching_ one, you could
get a different A record for each request. For example, below are the
results from an authoritative DNS server for the zone nsatc.net:



$ dig @e.ns.nsatc.net sls.update.microsoft.com.nsatc.net
...
sls.update.microsoft.com.nsatc.net.  300  IN  A  157.55.240.220
...


Second request after 5 seconds:

$ dig @e.ns.nsatc.net sls.update.microsoft.com.nsatc.net
...
sls.update.microsoft.com.nsatc.net.  300  IN  A  157.56.96.54
...


Here I see that the DNS server serves exactly one A record in
round-robin fashion. The same is true for Google's public DNS service.
That behavior could cause trouble for host forgery detection.
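
As a quick check (my sketch, not part of the original thread), you can
compare what the client and the Squid host actually resolve through the
resolver they share; the address 10.10.10.53 below is just a placeholder
for your caching DNS server:

# run on both the Squid host and a client, against the shared resolver
$ dig +noall +answer @10.10.10.53 sls.update.microsoft.com
# expect the same A record on both hosts, with a TTL that counts down
# rather than resetting to the authoritative value on every query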


HTH

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Are there any distros with SSL Bump compiled by default?

2016-05-14 Thread garryd

On 2016-05-14 14:36, Tim Bates wrote:

Are there any Linux distros with pre-compiled versions of Squid with
SSL Bump support compiled in?

Alternatively, does anyone reputable do a 3rd party repo for
Debian/Ubuntu that includes SSL Bump?


Squid's SSL Bump support is improving very quickly, so it is recommended
to always use the newest version. You can find packages for different
distros at http://wiki.squid-cache.org/SquidFaq/BinaryPackages. The most
advanced SSL Bump feature, Peek and Splice, requires the configure
options '--with-openssl' and '--enable-ssl-crtd'. For example, Eliezer's
newest package (Squid 3.5.19) for CentOS is compiled with these options.
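
If you do end up building from source, a minimal configure invocation
enabling these two options could look like the sketch below; the prefix
is just a placeholder, and your distro will usually need additional
options:

# minimal sketch, not a complete build recipe
$ ./configure --prefix=/usr/local/squid \
              --with-openssl \
              --enable-ssl-crtd
$ make && sudo make install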


HTH
Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Host header forgery policy in service provider environment

2016-01-01 Thread garryd

On 2015-12-31 13:31, Amos Jeffries wrote:

On 2015-12-31 00:01, Garri Djavadyan wrote:

Hello Squid members and developers!

First of all, I wish you a Happy New Year 2016!

The current Host header forgery policy effectively prevents cache
poisoning. But I also noticed that it deletes an earlier verified cached
object. Is it possible to implement a more careful algorithm as an
option? For example, if Squid did not delete an earlier successfully
verified and still valid cached object, and instead served the forged
request from the cache, that would be more effective and at the same
time secure behaviour.



This seems to be describing 



So far we don't have a solution. Patches very welcome.

Amos


Amos, thank you very much, that bug describes exactly the same problem
I encountered! I've tested the proposed patch and updated the bug
report.


Kind Regards,
Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users