From: Amos Jeffries
> any special meaning (like doing a lookahead) is prevented.
OK, so I'll do an acl for deny and another for allow.
Thanks
___
squid-users mailing list
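Spelled out, the deny-plus-allow pair could look like the sketch below in squid.conf. The ACL names are illustrative, not from the thread; the key point is that http_access lines are checked top-down and the first matching line wins, so the allow must come first.

```
# allow the one permitted .dll before the general deny
acl any_dll urlpath_regex -i \.dll$
acl mriweb_dll urlpath_regex -i mriweb\.dll$
http_access allow mriweb_dll
http_access deny any_dll
```

The same intent can also be written on a single line as `http_access deny any_dll !mriweb_dll`.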
On Monday 18 September 2017 at 09:43:12, Vieri wrote:
> Hi,
>
> I'd like to block access to URLs ending in *.dll except for those ending in
> mriweb.dll.
>
> acl denied_filetypes urlpath_regex -i denied.filetypes
>
> where denied.filetypes contains a list of expressions
Are the others
Hi,
I'd like to block access to URLs ending in *.dll except for those ending in
mriweb.dll.
acl denied_filetypes urlpath_regex -i denied.filetypes
where denied.filetypes contains a list of expressions of which:
(\?!mriweb\.dll$).*\.dll$
This doesn't seem to work if I try to deny access.
eg.
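Two separate problems hide in that pattern. The `\?` escapes the question mark, so `(\?!...)` is a group matching a literal `?!` rather than a negative lookahead; and Squid's urlpath_regex uses POSIX regular expressions, which have no lookahead at all, so even the unescaped form would not work there. A Python illustration of the first point (Python's re does support lookahead; it is used here only to show the difference, with made-up paths):

```python
import re

# the posted pattern: the escaped '?' turns the would-be lookahead into
# a literal "?!mriweb.dll" group, so it can never match a normal path
broken = re.compile(r'(\?!mriweb\.dll$).*\.dll$', re.I)
assert broken.search('/cgi/evil.dll') is None

# a real negative lookahead expresses the intent in PCRE-style engines,
# but Squid's POSIX engine cannot do this; use two ACLs instead
fixed = re.compile(r'^(?!.*mriweb\.dll$).*\.dll$', re.I)
assert fixed.search('/cgi/evil.dll')
assert fixed.search('/cgi/mriweb.dll') is None
```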
On Monday 18 January 2016 at 18:22:24, Lucía Guevgeozian wrote:
> acl good_facebook urlpath_regex groups
> acl banned_sites url_regex "/etc/squid/config/banned_sites"
>
> inside banned_sites I have the word facebook
>
> http_access allow good_facebook
> http_access deny banned_sites
Okay, so
And more:
Facebook (like many others) uses the Akamai CDN as a background delivery service.
So the facebook.* domain is only a small part of the whole big fat Facebook :)
18.01.16 23:29, Antony Stone wrote:
> On Monday 18 January 2016 at 18:22:24, Lucía
On Monday 18 January 2016 at 18:31:40, Yuri Voinov wrote:
> Facebook (like many others) uses the Akamai CDN as a background delivery service.
>
> So the facebook.* domain is only a small part of the whole big fat Facebook :)
True, but that should still match *request* URLs (once the HTTP/S problem is
sorted
18.01.16 23:38, Antony Stone wrote:
> On Monday 18 January 2016 at 18:31:40, Yuri Voinov wrote:
>
>> Facebook (like many others) uses the Akamai CDN as a background delivery service.
>>
>> So the facebook.* domain is only a small part of the whole big fat
18.01.16 23:56, Lucía Guevgeozian wrote:
> Thank you very much for your responses.
>
> I understand from http://www.squid-cache.org/Doc/config/http_access/ that
> http_access will not work with https in versions of Squid older than 3.3.
>
> Do
Hello,
I think I have a very basic question about acl, but I can't figure out why
this simple config is not working:
In my squid.conf file I have 2 acl
acl good_facebook urlpath_regex groups
acl banned_sites url_regex "/etc/squid/config/banned_sites"
inside banned_sites I have the word facebook
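For what the thread converges on: http_access lines are evaluated top-down, the first line whose ACLs all match decides, and multiple ACLs on one line are ANDed. So a deny with an exception can be written on a single line. A sketch reusing the ACL names above:

```
acl good_facebook urlpath_regex groups
acl banned_sites url_regex "/etc/squid/config/banned_sites"
# deny anything in banned_sites unless the URL path also matches "groups"
http_access deny banned_sites !good_facebook
```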
Thank you very much for your responses.
I understand from http://www.squid-cache.org/Doc/config/http_access/ that
http_access will not work with https in versions of Squid older than 3.3.
Do you know if an alternative config exists without upgrading?
Regards,
Lucia
2016-01-18 14:38 GMT-03:00
Ok, thanks again for the quick reply, I'm upgrading :)
Regards,
Lucia
2016-01-18 14:58 GMT-03:00 Yuri Voinov :
> 18.01.16 23:56, Lucía Guevgeozian wrote:
> > Thank you very much for your responses.
> >
> > I
I didn't test this, but I think this works better:
*http_access deny banned_sites !good_facebook*
Does it work?
2016-01-18 16:35 GMT-02:00 Lucía Guevgeozian :
> Ok, thanks again for the quick reply, I'm upgrading :)
>
> Regards,
> Lucia
>
> 2016-01-18 14:58 GMT-03:00 Yuri
Hi, unfortunately I tried that already, and I can say it didn't work in
version 3.0.
cheers
2016-01-18 15:43 GMT-03:00 Jorgeley Junior :
> I didn't test this, but I think this works better:
> *http_access deny banned_sites !good_facebook*
> Does it work?
>
> 2016-01-18 16:35
On 19/01/2016 6:56 a.m., Lucía Guevgeozian wrote:
> Thank you very much for your responses.
>
> I understand from http://www.squid-cache.org/Doc/config/http_access/ that
> http_access will not work with https in versions of Squid older than 3.3.
Incorrect. http_access works with any HTTP message
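A likely source of the confusion: a browser sends HTTPS through a proxy as a CONNECT tunnel, and http_access does apply to that request, but the only "URL" visible is host:port. Domain-based ACLs can therefore match HTTPS traffic even without decryption, while url_regex/urlpath_regex against the full https:// URL needs SSL interception (ssl_bump, Squid 3.3+). A sketch (assumes the default `acl CONNECT method CONNECT` definition):

```
# Squid sees "CONNECT www.facebook.com:443" for HTTPS, so a domain
# ACL can still match the tunnel even though no path is visible
acl banned_domains dstdomain .facebook.com
http_access deny CONNECT banned_domains
```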
I have just noticed that urlpath_regex isn't doing what I want:
acl wuau_repo dstdomain .download.windowsupdate.com
acl wuau_path urlpath_regex -i \.psf$
acl dst_server dstdomain server
acl apt_cacher browser apt-cacher
cache deny dst_server
cache deny apt_cacher
cache deny wuau_repo
cache allow
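Like http_access, the cache directive is matched top-down with the first matching line winning, so the deny lines must come before the final allow. A sketch of the presumable intent (the trailing `all` is an assumption; the original line is cut off):

```
cache deny dst_server
cache deny apt_cacher
cache deny wuau_repo
cache allow all
```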
Three things:
* By re-writing you are generating an entirely new request with the
apt-cacher server URL as the destination. The HTTP message details about
what was originally requested and from where are *gone* when the traffic
leaves for the server. The solution for that is outlined at the
I also tried the same thing with http_access and that works as expected -
*.psf files are allowed, non *.psf files are denied. I'm thinking bug at this
point... I'll do some more testing and see if I can narrow it down.
Found it. Really stupid mistake. The documentation shows [-i] for
case insensitivity, but I hadn't picked up that the [] around the -i
indicated that it was optional. I had just cut and pasted from
examples. So the .cab thing was irrelevant - it just happened that
the .cab files had an
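The brackets in the documentation's `[-i]` just mark the flag as optional; without a bare `-i` in the ACL line, urlpath_regex matching is case-sensitive, which is presumably why only some files matched. Illustrated with Python's re (its regex flavor differs from Squid's POSIX engine, but case handling works the same way; paths are made up):

```python
import re

# without a case-insensitivity flag, ".psf" will not match ".PSF"
assert re.search(r'\.psf$', '/update/file.psf')
assert re.search(r'\.psf$', '/update/FILE.PSF') is None

# with the flag (in Squid: a bare -i before the pattern) both match
assert re.search(r'\.psf$', '/update/FILE.PSF', re.IGNORECASE)
```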