Re: [squid-users] Squid Proxy not blocking websites

2020-05-05 Thread Amos Jeffries
On 6/05/20 4:47 am, Arjun K wrote:
> Hi Amos
> 
> Thanks for your response and suggestions and I will incorporate your
> inputs in the configuration.
> Please find the below contents of denylist as I am unable to attach as a
> document due to restrictions.
> 
> .hotmail.com

The entry above uses dstdomain wildcard syntax (a leading '.' matches the
domain and all of its subdomains).

The entries below are in dstdomain FQDN syntax; the '*' is not treated as a
wildcard.


> *.appex-rf.msn.com
> *.itunes.apple.com

The format is a series of domain segments/labels. They get exact-string
compared against the domain, starting at the TLD and working left.

A '.' at the start of the line means "all subdomain labels match".

See also the FAQ
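
For example, a denylist.txt that dstdomain can use would have one domain per
line, with a leading '.' wherever subdomains should also match and no '*'
characters - a sketch based on the entries above:

 .hotmail.com
 .appex-rf.msn.com
 .itunes.apple.com
 login.live.com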


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Encrypt CONNECT Header

2020-05-05 Thread Ryan Le
Hi All,
Thanks for providing the information.
The issue is not related to the server certificate SNI. It is about other
sensitive data points, such as the destination domain, which appears in clear
text in the CONNECT header regardless of TLS 1.3. Other sensitive headers also
sit outside the encrypted payload, including User-Agent and Proxy-Authorization.
Proxy-Authorization is the main concern here. Most modern browsers now support
PAC with an HTTPS directive rather than PROXY.

Proxy-Authorization can carry Basic (and NTLM) credentials, which is a concern
for us since all of our users are mobile.

We want to be proactive before this becomes a problem and causes unnecessary
exposure. Zoom had a lot of issues recently, and we would not want something
similar to affect Squid or Squid users.

On Tue, May 5, 2020 at 11:33 AM Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 5/5/20 10:18 AM, Ryan Le wrote:
> > Is there plans to support explicit forward proxy over HTTPS to the proxy
> > with ssl-bump?
>
> There have been a few requests for TLS-inside-TLS support, but I am not
> aware of any actual sponsors or features on the road map. It is a
> complicated project, even though each of its two components already
> works today.
>
>
> > We would like to use https_port ssl-bump without using the
> > intercept or tproxy option. Clients will use PAC with a HTTPS directive
> > rather than a PROXY directive. The goal is to also encrypted the CONNECT
> > header which exposes the domain in plain text while it traverses to the
> > proxy.
>
> Yes, it is a valid use case (that few people understand).
>
>
> > Felipe: you don't need to use ssl-bump with explicit https proxy.
>
> Popular browsers barely support HTTPS proxies and refuse to delegate TLS
> handling to them. Thus, a connection to a secure origin server will be
> encrypted by the browser and sent over an encrypted channel through the
> HTTPS proxy -- TLS-inside-TLS. If you want to look inside that browser
> connection, you have to remove both TLS layers. To remove the outer
> layer, you need an https_port in a forward proxy configuration. To
> remove the inner layer, you need SslBump. The combination is not yet
> supported.
>
>
> > Matus: people will still be able to see SNI SSL header.
>
> ... but not the origin server SNI. Only the proxy SNI is exposed in this
> use case, and that exposure is usually not a problem.
>
>
> Cheers,
>
> Alex.
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Proxy not blocking websites

2020-05-05 Thread Arjun K
Hi Amos

Thanks for your response and suggestions; I will incorporate your inputs in the configuration.
Please find below the contents of the denylist, as I am unable to attach it as a document due to restrictions.
 
.hotmail.com
*.appex-rf.msn.com
*.itunes.apple.com
auth.gfx.ms
broadcast.skype.com
c.bing.com
c.live.com
cl2.apple.com
client.hip.live.com
d.docs.live.net
directory.services.live.com
docs.live.net
en-us.appex-rf.msn.com
foodanddrink.services.appex.bing.com
login.live.com
mail.google.com
ms.tific.com
odcsm.officeapps.live.com
officeimg.vo.msecnd.net
outlook.uservoice.com
p100-sandbox.itunes.apple.com
partnerservices.getmicrosoftkey.com
protection.office.com
roaming.officeapps.live.com
sas.office.microsoft.com
sdk.hockeyapp.net
secure.meetup.com
signup.live.com
social.yahooapis.com
view.atdmt.com
watson.telemetry.microsoft.com
weather.tile.appex.bing.com
www.dropbox.com
www.googleapis.com
www.wunderlist.com
*.appex.bing.com
*.broadcast.skype.com
*.mail.protection.outlook.com
*.protection.office.com
*.protection.outlook.com
*.skype.com
*.skypeforbusiness.com
a.wunderlist.com
account.live.com
accounts.google.com
acompli.helpshift.com
api.diagnostics.office.com
api.dropboxapi.com
api.login.yahoo.com
api.meetup.com
app.adjust.com
app.box.com
bit.ly,
www.acompli.com
by.uservoice.com
data.flurry.com
play.google.com
rink.hockeyapp.net
www.evernote.com
www.google-analytics.com
www.youtube.com
*.facebook.com
*.yahoo.com
*.msn.com
clients4.google.com
www.reddit.com



Please find my responses and queries as well.

1. Instead of dstdomain, I tried url_regex as defined below, and it is still not blocking the sites through the proxy. Kindly let me know how to allow and block the sites.

 acl allowedurl url_regex /etc/squid/allowed_url.txt
 acl denylist url_regex /etc/squid/denylist.txt
2. I have defined only two ports, 80 and 443, and removed all other ports. Since you stated that "All custom rules should follow those.", may I know whether the order below must be used? Kindly let me know whether it is correct.

 http_access deny !Safe_ports
 http_access deny denylist
 http_access allow allowedurl
 http_access allow localhost manager
 http_access allow localhost
 http_access allow localnet
 http_access deny manager
 http_access deny all

Regards
Arjun K.
On Tuesday, 5 May, 2020, 07:02:46 pm IST, Amos Jeffries wrote:

 On 6/05/20 12:58 am, Arjun K wrote:
> Hi All
> 
> Can any one help on the below issue.
> I tried changing the order of deny and allow acl but it did not yield
> any result.
> 

What is the contents of the denylist.txt file?

This usually happens when things in there are not the right dstdomain
syntax.





> Regards
> Arjun K
> 
> 
> On Sunday, 3 May, 2020, 05:21:02 pm IST, Arjun K 
> wrote:
> 
> 
> Hi All
> 
> The below is the configuration defined in the proxy server.
> The issue is that the proxy is not blocking the websites mentioned in a
> file named denylist.txt.
> Kindly let me know what needs to be changed to block the websites.
> 
> 
> 
> IP Ranges allowed to use proxy
> acl localnet src 10.196.0.0/16
> acl localnet src 10.197.0.0/16
> acl localnet src 10.198.0.0/16
> acl localnet src 10.199.0.0/16
> acl localnet src 10.200.0.0/16

These can be simplified:

 acl localnet src 10.196.0.0-10.200.0.0/16


> 
> Allowed and Denied URLs
> acl allowedurl dstdomain /etc/squid/allowed_url.txt

dstdomain and URL are different things. The name of this ACL is deceptive.

> acl denylist dstdomain /etc/squid/denylist.txt
> 
...

You are missing the DoS protection checks:

 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports

All custom rules should follow those.


> http_access allow CONNECT wuCONNECT localnet
> http_access allow windowsupdate localnet
> 
> acl Safe_ports port 80 # http
> acl Safe_ports port 443 # https
> acl CONNECT method CONNECT
> 
> http_access allow allowedurl
> http_access deny denylist
> http_access allow localhost manager
> http_access allow localhost
> http_access allow localnet
> http_access deny manager
> http_access deny !Safe_ports

The manager and Safe_ports checks are useless down here. Their entire
purpose is to prevent unauthorized access to dangerous protocols and to the
security-sensitive proxy management API.


> http_access deny all
> 
...
> 
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
> refresh_pattern .               0       20%     4320

No refresh_pattern following this line will ever match. The "." pattern
matches every URL possible. Order is important.

> refresh_pattern -i
> windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320
> 80% 43200 reload-into-ims
> refresh_pattern -i
> microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
> 43200 reload-into-ims
> refresh_pattern -i
> windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
> 43200 reload-into-ims
> 


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

Re: [squid-users] Squid Proxy not blocking websites

2020-05-05 Thread Arjun K
Hi All

Can anyone help with the below issue? I tried changing the order of the deny and allow ACLs, but it did not yield any result.

Regards
Arjun K

On Sunday, 3 May, 2020, 05:21:02 pm IST, Arjun K wrote:

Hi All

The below is the configuration defined in the proxy server.
The issue is that the proxy is not blocking the websites mentioned in a file named denylist.txt.
Kindly let me know what needs to be changed to block the websites.


IP Ranges allowed to use proxy
acl localnet src 10.196.0.0/16
acl localnet src 10.197.0.0/16
acl localnet src 10.198.0.0/16
acl localnet src 10.199.0.0/16
acl localnet src 10.200.0.0/16

Allowed and Denied URLs
acl allowedurl dstdomain /etc/squid/allowed_url.txt
acl denylist dstdomain /etc/squid/denylist.txt

acl windowsupdate dstdomain windowsupdate.microsoft.com
acl windowsupdate dstdomain .update.microsoft.com
acl windowsupdate dstdomain download.windowsupdate.com
acl windowsupdate dstdomain redir.metaservices.microsoft.com
acl windowsupdate dstdomain images.metaservices.microsoft.com
acl windowsupdate dstdomain c.microsoft.com
acl windowsupdate dstdomain www.download.windowsupdate.com
acl windowsupdate dstdomain wustat.windows.com
acl windowsupdate dstdomain crl.microsoft.com
acl windowsupdate dstdomain sls.microsoft.com
acl windowsupdate dstdomain productactivation.one.microsoft.com
acl windowsupdate dstdomain ntservicepack.microsoft.com
acl windowsupdate dstdomain eu.vortex-win.data.microsoft.com
acl windowsupdate dstdomain eu-v20.events.data.microsoft.com
acl windowsupdate dstdomain usseu1northprod.blob.core.windows.net
acl windowsupdate dstdomain usseu1westprod.blob.core.windows.net
acl windowsupdate dstdomain winatp-gw-neu.microsoft.com
acl windowsupdate dstdomain winatp-gw-weu.microsoft.com
acl windowsupdate dstdomain wseu1northprod.blob.core.windows.net
acl windowsupdate dstdomain wseu1westprod.blob.core.windows.net
acl windowsupdate dstdomain automatedirstrprdweu.blob.core.windows.net
acl windowsupdate dstdomain automatedirstrprdneu.blob.core.windows.net
acl windowsupdate dstdomain play.google.com
acl windowsupdate dstdomain go.microsoft.com

acl CONNECT method CONNECT
acl wuCONNECT dstdomain www.update.microsoft.com
acl wuCONNECT dstdomain sls.microsoft.com
http_access allow CONNECT wuCONNECT localnet
http_access allow windowsupdate localnet

acl Safe_ports port 80 # http
acl Safe_ports port 443 # https
acl CONNECT method CONNECT

http_access allow allowedurl
http_access deny denylist
http_access allow localhost manager
http_access allow localhost
http_access allow localnet
http_access deny manager
http_access deny !Safe_ports
http_access deny all

http_port 8080

cache_dir ufs /var/spool/squid 1 16 256
coredump_dir /var/spool/squid

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims



Regards
Arjun K.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Encrypt CONNECT Header

2020-05-05 Thread Alex Rousskov
On 5/5/20 10:18 AM, Ryan Le wrote:
> Is there plans to support explicit forward proxy over HTTPS to the proxy
> with ssl-bump?

There have been a few requests for TLS-inside-TLS support, but I am not
aware of any actual sponsors or features on the road map. It is a
complicated project, even though each of its two components already
works today.


> We would like to use https_port ssl-bump without using the
> intercept or tproxy option. Clients will use PAC with a HTTPS directive
> rather than a PROXY directive. The goal is to also encrypted the CONNECT
> header which exposes the domain in plain text while it traverses to the
> proxy.

Yes, it is a valid use case (that few people understand).


> Felipe: you don't need to use ssl-bump with explicit https proxy.

Popular browsers barely support HTTPS proxies and refuse to delegate TLS
handling to them. Thus, a connection to a secure origin server will be
encrypted by the browser and sent over an encrypted channel through the
HTTPS proxy -- TLS-inside-TLS. If you want to look inside that browser
connection, you have to remove both TLS layers. To remove the outer
layer, you need an https_port in a forward proxy configuration. To
remove the inner layer, you need SslBump. The combination is not yet
supported.
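
For reference, the outer TLS layer on its own (the explicit listener that a
PAC "HTTPS" directive points the browser at) is configured with https_port in
forward-proxy mode. A minimal sketch, assuming Squid-4 option names and
hypothetical certificate paths; combining this with SslBump is the part that
is not yet supported:

 https_port 3129 tls-cert=/etc/squid/proxy.crt tls-key=/etc/squid/proxy.key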


> Matus: people will still be able to see SNI SSL header.

... but not the origin server SNI. Only the proxy SNI is exposed in this
use case, and that exposure is usually not a problem.


Cheers,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Passing XML through squid proxy

2020-05-05 Thread Alex Rousskov
On 5/5/20 10:22 AM, Cindy Yoho wrote:

> They are not actually passing a url to the squid server. The nginx config 
> allowed me to have a line as such:
> 
> proxy_pass https://calcconnect.vertexsmb.com/vertex-ws/services/CalculateTax

> The xml just got passed straight through to the url in the config file.
> Is there something comparable in squid I can set to tell it where to pass
> the code? I am working on getting the wireshark packets but the server is
> in a secure zone so there aren't any easy options for getting a file from it.

I am not intimate with nginx, but its proxy_pass configuration sounds
similar to Squid's cache_peer directive:
http://www.squid-cache.org/Doc/config/cache_peer/
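
As a rough analogue of that nginx line, a reverse-proxy (accelerator) setup
could look like the sketch below; the listening port, peer name and TLS
options are assumptions, not a tested configuration:

 http_port 8080 accel defaultsite=calcconnect.vertexsmb.com
 cache_peer calcconnect.vertexsmb.com parent 443 0 no-query originserver tls name=vertex
 cache_peer_access vertex allow all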

Beyond that, without packet traces (or Squid cache.logs with
debug_options set to "ALL,2" or higher), it would be difficult for me to
say anything specific.


Good luck,

Alex.



> -Original Message-
> From: Alex Rousskov  
> Sent: Friday, May 1, 2020 11:26 AM
> To: Cindy Yoho ; squid-users@lists.squid-cache.org
> Subject: [External] Re: [squid-users] Passing XML through squid proxy
> 
> On 5/1/20 10:56 AM, Cindy Yoho wrote:
> 
>> When the Order Entry server sends the XML code, we get an error 
>> returned to the server making the request
> 
> Perhaps your Order Entry server does not use HTTP when talking to Squid?
> 
> Squid does not really care about the request payload, but the request has to 
> use the HTTP transport protocol. So sending a SOAP/XML request payload over 
> HTTP is OK, but sending raw SOAP (or SOAP over something other than HTTP) is 
> not.
> 
> If you can post a packet capture of the Order Entry server talking to Squid 
> (not the text interpretation of Squid response but the actual packets going 
> from the Order Entry server to Squid; use libpcap format which is often the 
> default for Wireshark export), then we should be able to confirm whether your 
> Order Entry server is using the right protocol to talk to/through Squid.
> 
> The same packet capture can point to HTTP request problems if the Order Entry 
> server is using HTTP but sending some HTTP token that Squid does not like (or 
> not sending an HTTP token that Squid needs).
> 
> 
> HTH,
> 
> Alex.
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Encrypt CONNECT Header

2020-05-05 Thread Matus UHLAR - fantomas

On 05.05.20 10:24, Felipe Polanco wrote:

I may be mistaken but I believe you don't need to use ssl-bump with
explicit https proxy.

In your browser settings, use an HTTPS proxy instead of HTTP.


and squid needs https_port to accept https traffic.


On Tue, May 5, 2020 at 10:19 AM Ryan Le  wrote:

Is there plans to support explicit forward proxy over HTTPS to the proxy
with
ssl-bump? We would like to use https_port ssl-bump without using the
intercept or tproxy option. Clients will use PAC with a HTTPS directive
rather than a PROXY directive. The goal is to also encrypted the CONNECT
header which exposes the domain in plain text while it traverses to the
proxy.


people will still be able to see the SNI in the TLS handshake.

however, ssl-bump is a different feature.

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
WinError #9: Out of error messages.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid negotiation auth for Java webstart not working

2020-05-05 Thread Molecki, Christian (STL)
Hello,
 
we are using Squid 3.5.21 and trying to implement negotiate authentication,
based on Kerberos and NTLM.
Browsing the internet works fine, even with ACLs based on Active Directory
groups.
 
 
Unfortunately we can't launch Java Web Start applications:
java.io.IOException: Unable to tunnel through proxy. Proxy returns "HTTP/1.1 407 Proxy Authentication Required"

We are using Java 1.8.0_221 on the clients.
 
Squid.conf
auth_param negotiate program /usr/sbin/negotiate_wrapper_auth -d --ntlm 
/usr/bin/ntlm_auth --diagnostics --helper-protocol=squid-2.5-ntlmssp 
--domain=STL --kerberos /usr/sbin/negotiate_kerberos_auth -d -s GSS_C_NO_NAME
auth_param negotiate children 10
auth_param negotiate keep_alive off
 
acl grp-www external nt_group GRP_WWW
acl www-auth proxy_auth REQUIRED
 
http_access allow p-http  grp-www www-auth
http_access allow p-https grp-www www-auth
 
Without grp-www and www-auth the calls work fine, but then there is also no
authentication.
 
cache.log (last entry of kerberos debug)
negotiate_kerberos_auth.cc(801): pid=2876 :2020/05/05 16:12:02| 
negotiate_kerberos_auth: DEBUG: AF 
oYG3MIG0oAMKAQChCwYJKoZIgvcSAQICooGfBIGcYIGZBgkqhkiG9xIBAgICAG+BiTCBhqADAgEFoQMCAQ+iejB4oAMCARKicQRv5cOyDbJ0+OYmI5iv0/mdKKd3Ez6ewG43c2U2rzYvooNfdMUT4ap5vufPMNSw3fGLJvPKgupMawOvcduXlBkCHqa5pqkmczvXGAdJvC2yRSJagDSrpuvjC9/XXaZCJl906Pluwo2ovPaYcKCXDy9c
 
 
 The wiki says: AF - Success. Valid credentials. Deprecated by OK result from 
Squid-3.4 onwards.
 
Does anyone have a clue or a similar behavior?
 
 
 
Best Regards
Christian Molecki

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] allowing zip only for a specific url regex

2020-05-05 Thread robert k Wild
Thanks a lot Amos, as always you have been very helpful

Much appreciated mate

Rob

On Tue, 5 May 2020, 14:55 Amos Jeffries,  wrote:

> On 6/05/20 1:39 am, robert k Wild wrote:
> > Thanks Amos,
> >
> > so how would I allow these urls with a wild card then
> >
> > Http://domain.com/path/1/to/any/where
> >
> > Http://domain.com/path/2/to/any/where
> >
> > Would I do this
> >
> > Http://domain.com/path/*
> >
>
> No. As the url_regex ACL name says, these are regex patterns.
>
> You have to use special anchors (^ and $) to *prevent* them being
> wildcard matches.
>
> Simply do like this:
>
>   ^http://domain\.com/path/
>
>
>
> Cheers
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Encrypt CONNECT Header

2020-05-05 Thread Felipe Polanco
I may be mistaken but I believe you don't need to use ssl-bump with
explicit https proxy.

In your browser settings, use an HTTPS proxy instead of HTTP.

On Tue, May 5, 2020 at 10:19 AM Ryan Le  wrote:

> Is there plans to support explicit forward proxy over HTTPS to the proxy
> with
> ssl-bump? We would like to use https_port ssl-bump without using the
> intercept or tproxy option. Clients will use PAC with a HTTPS directive
> rather than a PROXY directive. The goal is to also encrypted the CONNECT
> header which exposes the domain in plain text while it traverses to the
> proxy.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [External] Re: Passing XML through squid proxy

2020-05-05 Thread Cindy Yoho
Alex, thank you for the quick reply.
They are not actually passing a URL to the squid server. The nginx config
allowed me to have a line such as:

proxy_pass https://calcconnect.vertexsmb.com/vertex-ws/services/CalculateTax

The XML just got passed straight through to the URL in the config file. Is
there something comparable in squid I can set to tell it where to pass the
code? I am working on getting the Wireshark packets, but the server is in a
secure zone so there aren't any easy options for getting a file from it.

Thanks~
Cindy

-Original Message-
From: Alex Rousskov  
Sent: Friday, May 1, 2020 11:26 AM
To: Cindy Yoho ; squid-users@lists.squid-cache.org
Subject: [External] Re: [squid-users] Passing XML through squid proxy

On 5/1/20 10:56 AM, Cindy Yoho wrote:

> When the Order Entry server sends the XML code, we get an error 
> returned to the server making the request

Perhaps your Order Entry server does not use HTTP when talking to Squid?

Squid does not really care about the request payload, but the request has to 
use the HTTP transport protocol. So sending a SOAP/XML request payload over 
HTTP is OK, but sending raw SOAP (or SOAP over something other than HTTP) is 
not.

If you can post a packet capture of the Order Entry server talking to Squid 
(not the text interpretation of Squid response but the actual packets going 
from the Order Entry server to Squid; use libpcap format which is often the 
default for Wireshark export), then we should be able to confirm whether your 
Order Entry server is using the right protocol to talk to/through Squid.

The same packet capture can point to HTTP request problems if the Order Entry 
server is using HTTP but sending some HTTP token that Squid does not like (or 
not sending an HTTP token that Squid needs).


HTH,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Encrypt CONNECT Header

2020-05-05 Thread Ryan Le
Are there plans to support an explicit forward proxy over HTTPS to the proxy
with ssl-bump? We would like to use https_port ssl-bump without using the
intercept or tproxy option. Clients will use PAC with an HTTPS directive
rather than a PROXY directive. The goal is also to encrypt the CONNECT
header, which exposes the domain in plain text while it traverses to the
proxy.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] allowing zip only for a specific url regex

2020-05-05 Thread Amos Jeffries
On 6/05/20 1:39 am, robert k Wild wrote:
> Thanks Amos,
> 
> so how would I allow these urls with a wild card then 
> 
> Http://domain.com/path/1/to/any/where
> 
> Http://domain.com/path/2/to/any/where
> 
> Would I do this
> 
> Http://domain.com/path/*
> 

No. As the url_regex ACL name says, these are regex patterns.

You have to use special anchors (^ and $) to *prevent* them being
wildcard matches.

Simply do like this:

  ^http://domain\.com/path/



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] allowing zip only for a specific url regex

2020-05-05 Thread robert k Wild
Thanks Amos,

so how would I allow these urls with a wild card then

Http://domain.com/path/1/to/any/where

Http://domain.com/path/2/to/any/where

Would I do this

Http://domain.com/path/*

Thanks,
Rob

On Tue, 5 May 2020, 14:04 Amos Jeffries,  wrote:

> On 6/05/20 12:42 am, robert k Wild wrote:
> > cool thanks Amos :)
> >
> > if your interested these are my lines in my config
> >
> > #allow special URL paths
> > acl special_url url_regex "/usr/local/squid/etc/urlspecial.txt"
> >
> > #deny MIME types
> > acl mimetype rep_mime_type "/usr/local/squid/etc/mimedeny.txt"
> > http_reply_access allow special_url
>
> The above is wrong. It is allowing by URL, regardless of the mime type.
>
> > http_reply_access deny mimetype
> >
>
> That is the opposite of your stated requirement. It will *prevent* the
> mime type check from identifying downloads in the special_url.
>
> A better way to write the above policy would be:
>
>   http_reply_access deny !special_url mimetype
>
>
> Also, be aware that http_reply_access denial only prevents the download
> reaching the client. It still has to be fully downloaded by Squid - lots
> of bandwidth and processing cycles wasted.
>  If you are blocking traffic by URL do that in http_access instead.
>
>
> > urlspecial.txt
> >
> > http://updater.maxon.net/server_test
> > http://updater.maxon.net/customer/R21.0/updates15
> > http://updater.maxon.net/customer/general/updates15
> > ^http://ccmdl.adobe.com/AdobeProducts/KCCC/1/win64/packages/.*
> > ^http://ccmdl.adobe.com/AdobeProducts/KCCC/1/osx10/packages/.*
> > ^http://www.eztitles.com/download.php?
> > ^https://attachments.office.net/owa/.*
> >
>
> Do not put .* on the end of regex patterns. That only forces the regex
> library to scan longer than necessary and waste memory.
>
> Also this pattern:
>
>  ^http://www.eztitles.com/download.php?
>
> actually means:
>
>  ^http://www.eztitles.com/download.ph
>
> ('?' is a regex special character. Like '*' it is deceptively harmful at
> the start or end of a pattern)
>
>
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Proxy not blocking websites

2020-05-05 Thread Amos Jeffries
On 6/05/20 12:58 am, Arjun K wrote:
> Hi All
> 
> Can any one help on the below issue.
> I tried changing the order of deny and allow acl but it did not yield
> any result.
> 

What is the contents of the denylist.txt file?

This usually happens when things in there are not the right dstdomain
syntax.





> Regards
> Arjun K
> 
> 
> On Sunday, 3 May, 2020, 05:21:02 pm IST, Arjun K 
> wrote:
> 
> 
> Hi All
> 
> The below is the configuration defined in the proxy server.
> The issue is that the proxy is not blocking the websites mentioned in a
> file named denylist.txt.
> Kindly let me know what needs to be changed to block the websites.
> 
> 
> 
> IP Ranges allowed to use proxy
> acl localnet src 10.196.0.0/16
> acl localnet src 10.197.0.0/16
> acl localnet src 10.198.0.0/16
> acl localnet src 10.199.0.0/16
> acl localnet src 10.200.0.0/16

These can be simplified:

 acl localnet src 10.196.0.0-10.200.0.0/16


> 
> Allowed and Denied URLs
> acl allowedurl dstdomain /etc/squid/allowed_url.txt

dstdomain and URL are different things. The name of this ACL is deceptive.

> acl denylist dstdomain /etc/squid/denylist.txt
> 
...

You are missing the DoS protection checks:

 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports

All custom rules should follow those.


> http_access allow CONNECT wuCONNECT localnet
> http_access allow windowsupdate localnet
> 
> acl Safe_ports port 80 # http
> acl Safe_ports port 443 # https
> acl CONNECT method CONNECT
> 
> http_access allow allowedurl
> http_access deny denylist
> http_access allow localhost manager
> http_access allow localhost
> http_access allow localnet
> http_access deny manager
> http_access deny !Safe_ports

The manager and Safe_ports checks are useless down here. Their entire
purpose is to prevent unauthorized access to dangerous protocols and to the
security-sensitive proxy management API.
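
One possible ordering, as a sketch only: protective checks first, custom
policy after them. It assumes an SSL_ports ACL for port 443, which your
configuration does not define yet.

 acl SSL_ports port 443

 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost manager
 http_access deny manager

 http_access deny denylist
 http_access allow allowedurl
 http_access allow CONNECT wuCONNECT localnet
 http_access allow windowsupdate localnet
 http_access allow localnet
 http_access allow localhost
 http_access deny all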


> http_access deny all
> 
...
> 
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
> refresh_pattern .               0       20%     4320

No refresh_pattern following this line will ever match. The "." pattern
matches every URL possible. Order is important.

> refresh_pattern -i
> windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320
> 80% 43200 reload-into-ims
> refresh_pattern -i
> microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
> 43200 reload-into-ims
> refresh_pattern -i
> windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80%
> 43200 reload-into-ims
> 
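
A corrected ordering would place the site-specific patterns before the
catch-all "." rule, roughly as sketched below. (Note also that inside [...]
the '|' characters are matched literally, so [iuf] is the usual way to write
that class.)

 refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[iuf]|[ap]sf|wm[va]|dat|zip) 4320 80% 43200 reload-into-ims
 refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[iuf]|[ap]sf|wm[va]|dat|zip) 4320 80% 43200 reload-into-ims
 refresh_pattern -i windows.com/.*\.(cab|exe|ms[iuf]|[ap]sf|wm[va]|dat|zip) 4320 80% 43200 reload-into-ims
 refresh_pattern ^ftp:             1440  20%  10080
 refresh_pattern ^gopher:          1440  0%   1440
 refresh_pattern -i (/cgi-bin/|\?) 0     0%   0
 refresh_pattern .                 0     20%  4320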


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] allowing zip only for a specific url regex

2020-05-05 Thread Amos Jeffries
On 6/05/20 12:42 am, robert k Wild wrote:
> cool thanks Amos :)
> 
> if your interested these are my lines in my config
> 
> #allow special URL paths
> acl special_url url_regex "/usr/local/squid/etc/urlspecial.txt"
> 
> #deny MIME types
> acl mimetype rep_mime_type "/usr/local/squid/etc/mimedeny.txt"
> http_reply_access allow special_url

The above is wrong. It is allowing by URL, regardless of the mime type.

> http_reply_access deny mimetype
> 

That is the opposite of your stated requirement. It will *prevent* the
mime type check from identifying downloads in the special_url.

A better way to write the above policy would be:

  http_reply_access deny !special_url mimetype


Also, be aware that http_reply_access denial only prevents the download
reaching the client. It still has to be fully downloaded by Squid - lots
of bandwidth and processing cycles wasted.
 If you are blocking traffic by URL do that in http_access instead.


> urlspecial.txt
> 
> http://updater.maxon.net/server_test
> http://updater.maxon.net/customer/R21.0/updates15
> http://updater.maxon.net/customer/general/updates15
> ^http://ccmdl.adobe.com/AdobeProducts/KCCC/1/win64/packages/.*
> ^http://ccmdl.adobe.com/AdobeProducts/KCCC/1/osx10/packages/.*
> ^http://www.eztitles.com/download.php?
> ^https://attachments.office.net/owa/.*
> 

Do not put .* on the end of regex patterns. That only forces the regex
library to scan longer than necessary and waste memory.

Also this pattern:

 ^http://www.eztitles.com/download.php?

actually means:

 ^http://www.eztitles.com/download.ph

('?' is a regex special character. Like '*' it is deceptively harmful at
the start or end of a pattern)
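
Putting those points together, a cleaned-up urlspecial.txt might look like the
sketch below (entries taken from your file, with the dots and the '?' escaped
and the trailing '.*' removed):

 ^http://updater\.maxon\.net/server_test
 ^http://updater\.maxon\.net/customer/R21\.0/updates15
 ^http://updater\.maxon\.net/customer/general/updates15
 ^http://ccmdl\.adobe\.com/AdobeProducts/KCCC/1/win64/packages/
 ^http://ccmdl\.adobe\.com/AdobeProducts/KCCC/1/osx10/packages/
 ^http://www\.eztitles\.com/download\.php\?
 ^https://attachments\.office\.net/owa/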


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Best way to prevent squid from bumping CONNECTs

2020-05-05 Thread Alex Rousskov
On 5/5/20 5:38 AM, Amos Jeffries wrote:
> On 5/05/20 4:31 am, Alex Rousskov wrote:
>> On 5/3/20 10:41 PM, Scott wrote:
>>> https://wiki.squid-cache.org/Features/SslPeekAndSplice says "At no point 
>>> during ssl_bump processing will dstdomain ACL work".

>> I have not tested this, but I would expect the dstdomain ACL to work
>> during SslBump steps using the destination address from the (real or
>> fake) CONNECT request URI.

> We do not save the CONNECT tunnel message objects in the TLS handshake
> state objects. As such the state needed by dstdomain is not available
> during ssl_bump ACL processing.

I do not know what you mean by "CONNECT tunnel message objects" and "TLS
handshake state objects" exactly but HttpRequest with the (real or fake)
CONNECT request should exist and be available to ssl_bump and
http_access ACLs during SslBump steps. The dstdomain ACL uses
HttpRequest AFAICT.

Most deployed http_access configurations allow those CONNECT requests
while peeking at TLS; and many broken configurations deny them (too
soon), triggering support queries on this mailing list.


> Only state from the TCP connection and the underway TLS handshake are
> guaranteed to be available to the ssl_bump ACLs. Anything else is
> best-effort.

For intercepted connections, the fake CONNECT request carries
information extracted from the TCP connection and the TLS handshake.

For other cases, there is a real CONNECT request to carry that
information (and more). It is adjusted with SNI info if possible.

At least that is the way SslBump should work in modern Squids. I agree
that many SslBump bugs have been fixed since the quoted wiki paragraph
was written, but the presence of the CONNECT HttpRequest has been rather
fundamental since the beginning of the Peek and Splice approach, because
http_access rules are difficult to write without it, especially since
we did not want to make "step" ACLs officially available for the
http_access rules.
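
In principle, then, a splice list like Scott's could be keyed on the CONNECT
destination domain rather than ssl::server_name. A sketch only, untested as
noted above, with a hypothetical file path:

 acl step1 at_step SslBump1
 acl nobump_domains dstdomain "/usr/local/etc/squid/acls/nobump_domains"
 ssl_bump peek step1
 ssl_bump splice nobump_domains
 ssl_bump bump guest_net_src
 ssl_bump splice all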


HTH,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] allowing zip only for a specific url regex

2020-05-05 Thread robert k Wild
cool thanks Amos :)

if you're interested, these are my lines in my config

#allow special URL paths
acl special_url url_regex "/usr/local/squid/etc/urlspecial.txt"

#deny MIME types
acl mimetype rep_mime_type "/usr/local/squid/etc/mimedeny.txt"
http_reply_access allow special_url
http_reply_access deny mimetype

urlspecial.txt

http://updater.maxon.net/server_test
http://updater.maxon.net/customer/R21.0/updates15
http://updater.maxon.net/customer/general/updates15
^http://ccmdl.adobe.com/AdobeProducts/KCCC/1/win64/packages/.*
^http://ccmdl.adobe.com/AdobeProducts/KCCC/1/osx10/packages/.*
^http://www.eztitles.com/download.php?
^https://attachments.office.net/owa/.*

mimedeny.txt

application/octet-stream
application/x-msi
application/zip
application/vnd.ms-cab-compressed

is this the best way of doing it?

thanks,
rob


On Tue, 5 May 2020 at 13:27, Amos Jeffries  wrote:

> On 5/05/20 11:38 pm, robert k Wild wrote:
> > hi all,
> >
> > i wanto to allow only zip files via a specific url regex
> >
> > atm im allowing all attachments
> >
> > ^https://attachments.office.net/owa/.*
> >
> > could i do this to lock it down to only zips
> >
> > ^https://attachments.office.net/owa/.zip
> >
>
> That regex will only match a small set of URLs which are unlikely ever
> to exist.
>
> What you want is:
>
>  acl downloads url_regex https://attachments.office.net/owa/
>  acl dotZip urlpath_regex \.zip(\?)?.*$
>  http_access allow downloads !dotZip
>
>  acl zipCt rep_header Content-Type application/zip
>  http_reply_access deny zipCt
>
>
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>


-- 
Regards,

Robert K Wild.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] allowing zip only for a specific url regex

2020-05-05 Thread Amos Jeffries
On 5/05/20 11:38 pm, robert k Wild wrote:
> hi all,
> 
> i wanto to allow only zip files via a specific url regex
> 
> atm im allowing all attachments
> 
> ^https://attachments.office.net/owa/.*
> 
> could i do this to lock it down to only zips
> 
> ^https://attachments.office.net/owa/.zip
> 

That regex will only match a small set of URLs which are unlikely ever
to exist.

What you want is:

 acl downloads url_regex https://attachments.office.net/owa/
 acl dotZip urlpath_regex \.zip(\?)?.*$
 http_access allow downloads !dotZip

 acl zipCt rep_header Content-Type application/zip
 http_reply_access deny zipCt


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] allowing zip only for a specific url regex

2020-05-05 Thread robert k Wild
hi all,

i want to allow only zip files via a specific url regex

atm im allowing all attachments

^https://attachments.office.net/owa/.*

could i do this to lock it down to only zips

^https://attachments.office.net/owa/.zip

thanks,
rob

-- 
Regards,

Robert K Wild.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Let Squid use SSL certificate for a parent cache peer

2020-05-05 Thread Amos Jeffries
On 5/05/20 10:21 pm, mariolatif741 wrote:
> The purpose of proxy A is that its the proxy that will be given to my
> clients. The purpose of all what I am doing is to let my clients use proxy B
> indirectly through proxy A (so they can use proxy B without installing the
> CA certificate)
> 

It sounds to me like you only need one proxy. Squid can listen on
multiple ports and treat traffic differently per-port.

If you do not want to (or cannot) install a custom CA on clients that is
fine. It just prevents you from using the SSL-Bump 'bump' action on the
TLS traffic from those clients. More than one proxy will not help with
that restriction.
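
As an illustration of the per-port idea (a sketch only, with hypothetical port
numbers and certificate path; bumping is only possible for clients that trust
the signing CA):

 http_port 3128
 http_port 3129 ssl-bump tls-cert=/etc/squid/bumpCA.pem generate-host-certificates=on

 acl step1 at_step SslBump1
 acl bump_clients localport 3129
 ssl_bump peek step1
 ssl_bump bump bump_clients
 ssl_bump splice all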


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Let Squid use SSL certificate for a parent cache peer

2020-05-05 Thread Antony Stone
On Tuesday 05 May 2020 at 12:21:19, mariolatif741 wrote:

> The purpose of proxy A is that its the proxy that will be given to my
> clients. The purpose of all what I am doing is to let my clients use proxy
> B indirectly through proxy A (so they can use proxy B without installing
> the CA certificate)

Won't work.

If you are doing HTTPS / SSL / TLS interception *at any point* in the chain 
between the client and the real server, then the machine doing the 
interception is going to have to generate a fake certificate for what it sends 
back to the client (no matter whether that passes through an intermediate 
proxy or not), therefore the client needs to have the fake CA certificate 
installed in order to trust what it receives.

There is no way for the client to get the "real" certificate from the "real" 
server if a machine in between intercepts and decrypts the communication.  If 
there were, TLS security would be meaningless.

Regards,


Antony.

-- 
"Measuring average network latency is about as useful as measuring the mean 
temperature of patients in a hospital."

 - Stéphane Bortzmeyer

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Let Squid use SSL certificate for a parent cache peer

2020-05-05 Thread mariolatif741
The purpose of proxy A is that its the proxy that will be given to my
clients. The purpose of all what I am doing is to let my clients use proxy B
indirectly through proxy A (so they can use proxy B without installing the
CA certificate)


Antony Stone wrote
> On Tuesday 05 May 2020 at 11:48:12, mariolatif741 wrote:
> 
>> Since you said "If the client is participating in the TLS handshake it
>> *always* requires the CA to be installed.", then I guess what I want to
>> do
>> is not possible.
>> 
>> Can I make Squid send the requests received from the client to the cache
>> peer? (so the cache peer would see the requests coming from the Squid
>> server and not from the client), I think if this is possible then it'd
>> help in my case.
> 
> What are you trying to achieve?
> 
> It sounds as though you want the client to talk to proxy A, which talks to 
> proxy B, which sends requests to the Internet, and you want to do content 
> inspection / filtering on proxy B.
> 
> What is the purpose of proxy A?
> 
> Regards,
> 
> 
> Antony.
> 
> -- 
> "Remember: the S in IoT stands for Security."
> 
>  - Jan-Piet Mens
> 
> Please reply to the list;
> please *don't* CC me.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users





--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Let Squid use SSL certificate for a parent cache peer

2020-05-05 Thread Amos Jeffries
On 5/05/20 9:48 pm, mariolatif741 wrote:
> Since you said "If the client is participating in the TLS handshake it
> *always* requires 
> the CA to be installed.", then I guess what I want to do is not possible.
> 
> Can I make Squid send the requests received from the client to the cache
> peer? (so the cache peer would see the requests coming from the Squid server
> and not from the client), I think if this is possible then it'd help in my
> case.

That is what peers are for. So yes - with the caveat that it is not
clear whether what you are calling "requests" are actually HTTP messages.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Let Squid use SSL certificate for a parent cache peer

2020-05-05 Thread Antony Stone
On Tuesday 05 May 2020 at 11:48:12, mariolatif741 wrote:

> Since you said "If the client is participating in the TLS handshake it
> *always* requires the CA to be installed.", then I guess what I want to do
> is not possible.
> 
> Can I make Squid send the requests received from the client to the cache
> peer? (so the cache peer would see the requests coming from the Squid
> server and not from the client), I think if this is possible then it'd
> help in my case.

What are you trying to achieve?

It sounds as though you want the client to talk to proxy A, which talks to 
proxy B, which sends requests to the Internet, and you want to do content 
inspection / filtering on proxy B.

What is the purpose of proxy A?

Regards,


Antony.

-- 
"Remember: the S in IoT stands for Security."

 - Jan-Piet Mens

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Let Squid use SSL certificate for a parent cache peer

2020-05-05 Thread mariolatif741
Since you said "If the client is participating in the TLS handshake it
*always* requires 
the CA to be installed.", then I guess what I want to do is not possible.

Can I make Squid send the requests received from the client to the cache
peer? (so the cache peer would see the requests coming from the Squid server
and not from the client), I think if this is possible then it'd help in my
case.



--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Best way to prevent squid from bumping CONNECTs

2020-05-05 Thread Amos Jeffries
On 5/05/20 4:31 am, Alex Rousskov wrote:
> On 5/3/20 10:41 PM, Scott wrote:
> 
>> acl tcp_open_connect_sslbump at_step SslBump1
>> acl ssl_splice_sni ssl::server_name "/usr/local/etc/squid/acls/splice_sni"
>> acl guest_net_src src x.y.z.0/24
>>
>> ssl_bump peek tcp_open_connect_sslbump
>> ssl_bump splice ssl_splice_sni
>> ssl_bump bump guest_net_src
>> ssl_bump splice
> 
> 
>> where I splice instead of bump for destinations that are often used with 
>> certificate pinning software (.apple.com with iOS for example).
> 
> 
>> https://wiki.squid-cache.org/Features/SslPeekAndSplice says "At no point 
>> during ssl_bump processing will dstdomain ACL work".
> 
> I have not tested this, but I would expect the dstdomain ACL to work
> during SslBump steps using the destination address from the (real or
> fake) CONNECT request URI. It is possible that, for the author of that
> wiki statement, that kind of functionality is equivalent to "not work",
> but I personally would not phrase it that way.
> 

We do not save the CONNECT tunnel message objects in the TLS handshake
state objects. As such the state needed by dstdomain is not available
during ssl_bump ACL processing.

Only state from the TCP connection and the underway TLS handshake are
guaranteed to be available to the ssl_bump ACLs. Anything else is
best-effort.

 At least that was the situation when that documentation was written.
The bugs we have about other CONNECT state not being available are still
open so I doubt the situation has changed even with the more recent
refactoring.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Let Squid use SSL certificate for a parent cache peer

2020-05-05 Thread Amos Jeffries
On 5/05/20 9:04 pm, mariolatif741 wrote:
> Hello,
> 
> I have a Squid proxy server (proxy A) and I redirect all its traffic to
> another proxy (proxy B) using a parent cache peer.
> 
> However, proxy B requires a SSL certificate to be used so it can intercept
> the HTTPS requests and read them.
> 
> I want to specify the path of the CA certificate to Squid in proxy A so my
> users can be redirected to proxy B without having to install the CA
> certificate.
> 
> Is it possible?

If the client is participating in the TLS handshake it *always* requires
the CA to be installed.


To use TLS on the connection between proxyA and proxyB:

  cache_peer proxyB parent 3128 0 tls tls-cafile=/path/to/proxyB_CA.pem

Note that this is only to encrypt traffic between the proxies. When the
client is not involved.


To further improve security you should also use a client certificate for
proxyA and set up client certificate validation between the proxies.
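
A sketch of that, assuming Squid-4 option names and hypothetical certificate
paths (proxyB would then be configured to require and verify this client
certificate on its listening port):

 cache_peer proxyB parent 3128 0 tls tls-cafile=/path/to/proxyB_CA.pem tls-cert=/etc/squid/proxyA_client.crt tls-key=/etc/squid/proxyA_client.key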

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid logging disable based on ACL & kernel: Out of memory

2020-05-05 Thread Amos Jeffries
On 3/05/20 12:58 am, Akshay Hegde wrote:
> Dear Amos,
> 
> Can you please elaborate, I didnt understand. If possible can you
> explain with one example ? I mean behaviour of security and privacy
> flaws when 
> strip_query_terms is on and when strip_query_terms is off.
> 

That directive only affects the URLs visible in your logs etc. on the
proxy machine. Its main purpose is to prevent security/privacy
information leaks when sites store sensitive info in the query string of
the URL. The benefit is that your service is not a vector for those leaks.

On the other hand, it also prevents you from troubleshooting many types of
issue with any site that uses query strings. That both allows a range of
security attacks to hide themselves and prevents you from noticing when
sensitive info is wrongly placed in the URL.

It is up to you to decide which type of security/privacy issue is the
most important to prevent.
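
For reference, the trade-off is controlled by a single directive; the default
is on, which strips query strings from the logs:

 strip_query_terms off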


I bring this up because there have recently been several high-profile
services caught in major credential leaks - noticed only because some
people paid attention to their query strings.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Let Squid use SSL certificate for a parent cache peer

2020-05-05 Thread mariolatif741
Hello,

I have a Squid proxy server (proxy A) and I redirect all its traffic to
another proxy (proxy B) using a parent cache peer.

However, proxy B requires an SSL certificate to be used so it can intercept
the HTTPS requests and read them.

I want to specify the path of the CA certificate to Squid in proxy A so my
users can be redirected to proxy B without having to install the CA
certificate.

Is it possible?



--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users