Re: [squid-users] Sslbump with multiple users and multiple ACLs for each

2019-01-03 Thread Benjamin E. Nichols
Why are you asking support questions about a commercial product on the
Squid proxy users mailing list?


On 1/3/2019 9:40 AM, stressedtux wrote:

With ufdbguard, is it possible to give one user one ACL and another user
a different ACL? I'm trying to completely block access to the Internet
except for what I explicitly allow.



--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--
Signed,

Benjamin E. Nichols
Founder & Chief Architect
1-(405)-301-9516
http://www.squidblacklist.org

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] can squid use dns server on random port(non-53)?

2018-06-27 Thread Benjamin E. Nichols
This is actually standard practice. It is very easy and common for
administrators to configure their firewalls to redirect all TCP/UDP
port 53 requests to a specific host, to stop those people and/or
malicious applications which may be smart enough to change their DNS
server settings in an attempt to bypass a DNS-based filtering solution.


A solution to your problem would seem obvious to some, but I think you
may consider redirecting all requests to TCP/UDP port 53 from the host
running Squid to your intended destination port using firewall rules.
Essentially, you use a firewall to forward requests destined for port 53
to whatever port you want. (Yes, you can do this without forwarding to a
specific host.)


I hope that helps.

--
Signed,

Benjamin E. Nichols
Founder & Chief Architect
1-(405)-301-9516
http://www.squidblacklist.org

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID + TOR

2017-12-19 Thread Benjamin E. Nichols
Looks like the SSL certs have expired or been revoked for the official Squid wiki.

Signed,
Benjamin E. Nichols
1-405-301-9516
http://www.squidblacklist.org

 Original message 
From: "C. L. Martinez"
Date: 12/19/17 2:02 AM (GMT-06:00)
To: squid-users@lists.squid-cache.org
Subject: [squid-users] SQUID + TOR
Hi all,

As Squid's wiki shows at
https://wiki.squid-cache.org/ConfigExamples/Strange/TorifiedSquid, is it
really necessary to install Privoxy to use Squid as a proxy for .onion
domains? Is it not possible to install only squid+tor and put the
following:

cache_peer localhost parent 9040 7 no-query default

in squid.conf (port 9040 being the port configured as TransPort in torrc)?

Thanks.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] July 25 2017 - #RIP Urlblacklist.com closed down.

2017-07-26 Thread Benjamin E. Nichols
This is a courtesy message to inform Squid users who may be using
blacklists from urlblacklist.com.

On July 25, 2017, blacklist provider urlblacklist.com closed down: it
shut off its website, refunded current subscribers, and threw in the
towel.



Also, July 25th was my birthday. #Celebration

--
--

Signed,

Benjamin E. Nichols
http://www.squidblacklist.org

1-405-397-1360 - Call Anytime.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Gratis Domain Whitelists Now Available - Squidblacklist.org

2017-07-05 Thread Benjamin E. Nichols
For those in this community who are filtering content, I just wanted to
give you a heads-up: we now have a gratis domain whitelist area on our
website.


Files can be found at the following url,

http://www.squidblacklist.org/downloads/whitelists/

As time progresses we will be adding more and more lists.

Thank you.

--

Signed,

Benjamin E. Nichols
http://www.squidblacklist.org

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Introducing Charcoal - Centralised URL Filter for squid

2017-06-14 Thread Benjamin E. Nichols
This sounds great. Would you mind specifying the source of the blacklist
data at the core of your service?

In other words, what I dare ask you is this (and I'm sure others will
want to know as well): are you using the blacklists from Shalla, UT1, or
urlblacklist, or have you developed your own domain management
technology?



--
Signed,

Benjamin E. Nichols

http://www.squidblacklist.org


On 6/14/2017 5:36 AM, Nishant Sharma wrote:

Hi,

We are excited to invite early users to test drive Charcoal 
(http://charcoal.io) - a Squid URL Rewriter for distributed proxies.


Charcoal is designed to help administrators manage access rules for
their proxies in just one place with a GUI, instead of editing the
configuration of individual proxy servers.


It has come out of our need to manage ACLs for 100+ proxy servers on
embedded devices (OpenWRT/LEDE) running at our customers' offices across
India. We are releasing it in the hope that it will be useful for Squid
users who have to manage multiple proxy servers every day.


The architecture is API-key-driven client-server, where a Squid
url-rewrite helper contacts the server to query access controls for
incoming requests.


Current features:
-
- Supports Squid 2.x and 3.x
- 70+ pre-existing domain blacklists
- Custom destination groups/categories
- Custom source groups for IPs and networks (usernames in the pipeline)
- Domain filtering only for now (no full-URL filtering)
- API key driven

Configuration:
--
- Download the helper from 
https://raw.githubusercontent.com/Hopbox/charcoal-helper/master/squid/charcoal-helper.pl.

- Make sure the IO::Socket Perl module is installed.
- Add the following lines to squid.conf after downloading the helper:

url_rewrite_program /path/to/charcoal-helper.pl YOUR_API_KEY
url_rewrite_children X startup=Y idle=Z concurrency=1

YOUR_API_KEY for our hosted Charcoal service can be requested by filling
in the form at http://charcoal.io or writing to charc...@hopbox.in. The
credentials for logging in to https://active.charcoal.io to manage the
ACLs will be emailed along with YOUR_API_KEY.
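The setup steps above can be sketched as a short shell session; the
install path is an assumption, and the download URL is the one given
above:

```shell
# Fetch the helper and make it executable (path is a suggestion).
wget -O /usr/local/bin/charcoal-helper.pl \
  https://raw.githubusercontent.com/Hopbox/charcoal-helper/master/squid/charcoal-helper.pl
chmod +x /usr/local/bin/charcoal-helper.pl

# Confirm the required Perl module is available.
perl -MIO::Socket -e 'print "IO::Socket OK\n"'
```

After this, point url_rewrite_program in squid.conf at
/usr/local/bin/charcoal-helper.pl as shown in the configuration snippet.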


License:

The URL rewrite helper for Squid is licensed under GPLv2.0, while the
Charcoal Server is licensed under AGPLv3.0.


GIT Repo:
-
Squid URL Rewrite helper can be downloaded from 
https://github.com/Hopbox/charcoal-helper


Git repository for Charcoal Server is at 
https://github.com/Hopbox/charcoal


Regards,
Nishant
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--
Signed,

Benjamin E. Nichols

http://www.squidblacklist.org

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid TPROXY issues with Google sites

2017-05-26 Thread Benjamin E. Nichols

Here is a list of Google domains that may help you:

http://www.squidblacklist.org/downloads/whitelists/google.domains


On 5/26/2017 10:44 AM, Vieri wrote:

Hi,

I'd like to block access to Google Mail but allow it to Google Drive. I also 
need to intercept Google Drive traffic (https) and scan its content via c-icap 
modules for threats (with clamav and other tools which would block potentially 
harmful files).

I've failed so far.

I added mail.google.com to a custom file named "denied.domains", loaded
as the denied_domains ACL in Squid. I know that in TLS traffic there are
only IP addresses, so I created the "server_name" ACL as seen below.

[...]
acl denied_domains dstdomain "/usr/local/share/proxy-settings/denied.domains"
http_access deny denied_domains !allowed_groups !allowed_ips
http_access deny CONNECT denied_domains !allowed_groups !allowed_ips
[...]
reply_header_access Alternate-Protocol deny all
acl AllowTroublesome ssl::server_name .google.com .gmail.com
acl DenyTroublesome ssl::server_name mail.google.com
http_access deny DenyTroublesome
ssl_bump peek all
ssl_bump splice AllowTroublesome
ssl_bump bump all

First of all, I was expecting that if a client tried to open 
https://mail.google.com, the connection would be blocked by Squid 
(DenyTroublesome ACL). It isn't. Why?

Second, I am unable to scan content since Squid is splicing all Google
traffic. However, if I "bump AllowTroublesome", I can enter my username
at https://accounts.google.com, but trying to proceed to the next step
(user password) fails with an unreported error.

Any suggestions?

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--
--

Signed,

Benjamin E. Nichols
http://www.squidblacklist.org

1-405-397-1360 - Call Anytime.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] New Member - Just testing mail list

2017-05-24 Thread Benjamin E. Nichols

Good afternoon!


On 5/24/2017 2:53 PM, Rogerio Coelho wrote:

Hi Squid Users !

Just testing mail list.

Rogério Ceni Coelho
Engenheiro de Infraestrutura - Infrastructure Engineer
Diretoria de TI e Telecom - Grupo RBS
Fone: +55 (51) 3218-6983
Celular: +55 (51) 8186-2933 Claro
Celular: +55 (51) 8050-4225 Vivo
rogerio.coe...@gruporbs.com.br
http://www.gruporbs.com.br



This message and any attachments are exclusively for the use of the
addressed party and may contain privileged and confidential data. If the
reader of this message is not the addressee, nor an authorized
representative thereof, you are hereby notified that any disclosure of
this communication is strictly prohibited. If this communication was
received in error, please notify us immediately by e-mail and delete the
message and any attachments from your system.

Grupo RBS guides its conduct by its Code of Ethics and Conduct, in
compliance with Brazilian legislation. Any irregular situation should be
reported via the Ethics Channel at
https://www.contatoseguro.com.br/gruporbs or 0800 602 1831. This e-mail
and its attachments may contain confidential information. If you
received this message by mistake, please delete it and notify the sender
immediately.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--
--

Signed,

Benjamin E. Nichols
http://www.squidblacklist.org

1-405-397-1360 - Call Anytime.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] custom error pages with stylesheets doesn't work for me

2017-05-19 Thread Benjamin E. Nichols
You might actually be in the wrong directory; I wouldn't be surprised if
that were the case, particularly if you are using a Debian or Ubuntu box.

Here's a hint: you need to be editing the following default document
instead...

/usr/share/squid-langpack/templates/ERR_ACCESS_DENIED

Also make sure your CSS is placed inside the <style> tag, e.g.:

<style>.someclass{background:#000;}</style>


On 5/19/2017 3:10 AM, Dieter Bloms wrote:

Hello Alex,

On Thu, May 18, Alex Rousskov wrote:


On 05/18/2017 03:17 AM, Dieter Bloms wrote:


I wrote some custom error pages and activated stylesheets in the header
of the error pages like:

%l

In the squid.conf file I set err_page_stylesheet to my stylesheet file
and restarted Squid. My expectation was that the content of this
stylesheet file would be included in the error page at the %l position.

Your expectation was correct.



But the place between <style> and </style> is empty.
Does anybody know how I can insert the content of the stylesheet file
into the error pages?

The steps you described above appear correct to me. Did you check for
errors in cache.log when starting Squid? Squid should complain if it
cannot load err_page_stylesheet but, unfortunately, Squid thinks that
you do not really care much about style and keeps running despite any
loading failures.

Temporarily renaming the stylesheet file (so that Squid cannot load it)
will help you test whether you are looking for errors in the right place.

Thank you for the hint.
Squid had no read permission on this file; after setting the right
permissions it worked.
But there was _no_ error message in the cache log file.
I found the wrong permission with the help of the strace command.
It would be nice if Squid dropped a note that it can't read the file.




--
--

Signed,

Benjamin E. Nichols
http://www.squidblacklist.org

1-405-397-1360 - Call Anytime.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Hint for howto wanted ...

2016-11-28 Thread Benjamin E. Nichols
For your dynamic-IP problem, you could easily write a small bash script
to do a scheduled nslookup on a dynamic DNS hostname (using Dyn or
No-IP), and have it dump the output into your firewall rules so the IP
stays up to date.
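A sketch of such a script follows; the hostname, proxy port, and rule
shape are all assumptions, and the rule is echoed rather than executed
so it can be reviewed before being wired into iptables from cron:

```shell
#!/bin/sh
# Resolve the dynamic DNS name to its current address (requires dig,
# from the dnsutils/bind-utils package).
lookup_ip() {
    dig +short "$1" | tail -n 1
}

# Emit the firewall rule that allows the resolved address to reach the
# proxy port; printed, not executed, so it can be reviewed first.
build_rule() {
    echo "iptables -A INPUT -s $1 -p tcp --dport 3128 -j ACCEPT"
}

# Hypothetical cron usage:
#   build_rule "$(lookup_ip myhome.dyndns.example)" | sh
```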
Benjamin E. Nichols
http://www.squidblacklist.org
1-405-397-1360

-- Original message --
From: Walter H.
Date: Mon, Nov 28, 2016 2:58 AM
To: Eliezer Croitoru
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Hint for howto wanted ...

On Mon, November 28, 2016 06:56, Eliezer Croitoru wrote:

> OK so the next step is:
> Routing over tunnel to the other proxy and on it (which has ssl-bump)
> intercept.

by now only the 3.5.20 squid on the local VM does SSL-bump

> If you have a public on the remote proxies which can use ssl-bump then
> route the traffic to there using Policy Based routing.

how do I configure this?

> You can selectively route by source or destination IP addresses.

by now the remote has in its iptables to only accept port 3128 from my
home IP (IPv6 and IPv4), but the IPv4 at home changes several times a
year; means it is not fixed;

> Now my main question is: Can't you just install 3.5 on the 3.1.23 machine
> and bump there?

SSL bump and parent proxy together doesn't work; if this worked I
wouldn't need the 3.1.23 machine at all ... the 3.1.23 machine has the
other 2 proxies (3.4.14-remote and 3.5.20-local) as parent ...

I should mention that the 3.5.20 box also has ClamAV (SquidClam) which
does malware checking ... (the remote proxy can't run ClamAV)

> How are you intercepting the connections? What are the iptables rules you
> are using?

the clients have configured the 3.1.23 squid box as proxy

> What OS are you running on top of the Squid boxes?

all squid boxes run CentOS 6.8

Thanks,
Walter
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ICAP question

2016-10-09 Thread Benjamin E. Nichols

Dearest Mr. James Lay.

After considering your previous slanderous and inflammatory trolling of
my earlier correspondence, my initial impulse was to help alleviate your
issue; but I think not. Considering the nature of your personal
disposition, I shall instead instruct you to go walk yourself off a
bridge, Mr. James Lay.



On 10/9/2016 12:02 PM, James Lay wrote:
Trying to just get some content filtering working and I'm running into 
the below:


WARNING: Squid is configured to use ICAP method REQMOD for service 
icap://localhost:1344/srv_cfg_filter but OPTIONS response declares the 
methods are RESPMOD


Here's the icap snippet from squid.conf:

icap_enable on
icap_send_client_ip on
icap_persistent_connections on
icap_service srv_cfg_filter_req reqmod_precache icap://localhost:1344/srv_cfg_filter bypass=on

adaptation_access srv_cfg_filter_req allow all
icap_service srv_cfg_filter_resp respmod_precache icap://localhost:1344/srv_cfg_filter bypass=off

adaptation_access srv_cfg_filter_resp allow all

interesting c-icap.conf bits:

ModulesDir /opt/icap/lib/c_icap
ServicesDir /opt/icap/lib/c_icap
acl localhost src 127.0.0.1/255.255.255.255
acl PERMIT_REQUESTS type REQMOD RESPMOD
icap_access allow localhost PERMIT_REQUESTS
icap_access deny all
Include srv_content_filtering.conf

lastly, srv_content_filtering.conf:

Service srv_cfg_filter srv_content_filtering.so
srv_content_filtering.Match default body /(test)/ig score=5
LogFormat mySrvContentFiltering "%tl, %>a %im %is %huo  [Scores: 
%{srv_content_filtering:scores}Sa] [ActionFilter: 
%{srv_content_filtering:action_filter}Sa] [Action: 
%{srv_content_filtering:action}Sa]"


not sure why I can't seem to get this to fly...any assistance would be 
appreciated...thank you.


James


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--
Signed,

Benjamin E. Nichols

http://www.squidblacklist.org

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Whitelist domain ignored?

2016-10-04 Thread Benjamin E. Nichols


--
Signed,

Benjamin E. Nichols

http://www.squidblacklist.org

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Large text ACL lists

2016-09-30 Thread Benjamin E. Nichols
Also, if you are going to use Squid native-ACL blacklists and reload
while you are updating, it's a good idea to have a parent proxy
configured so that your traffic/users won't be interrupted: Squid will
fail over to the next available proxy while the blacklists are being
reloaded and forward traffic to it. Otherwise your proxy will be down
during the reload, and your users will be without the ability to surf.
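As a sketch of that fallback arrangement, the parent line in squid.conf
might look like this (hostname and port are placeholders; `default`
marks the peer of last resort):

```
# Hypothetical upstream proxy used as a last resort.
cache_peer fallback.example.net parent 3128 0 no-query default
```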





On 9/30/2016 8:02 PM, Darren wrote:

One further question

If I have to reload the ACL lists, do I restart Squid, or is there a way
to update without impacting the users too much?

In some scenarios, some ACL lists may change frequently.

thanks again.



Sent from Mailbird


On 1/10/2016 6:05:05 AM, Darren <darren.j.breeze...@gmail.com> wrote:

Hi

My main issue with SquidGuard is that when I try to block, say,
www.facebook.com and the user goes to https://www.facebook.com,
SquidGuard only sees the initial CONNECT as the target IP, so it doesn't
match against the domain entry.

If SquidGuard did a reverse DNS lookup, I could keep using that more
complex filtering solution. That is where the dstdomain ACL is a better
option, but it has the RAM overhead.


Time for some experimentation

thanks again for the feedback




Sent from Mailbird


On 30/09/2016 7:21:53 PM, Yuri Voinov <yvoi...@gmail.com> wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Amos, I'm afraid that this is not a solution. Block lists have become so
huge that only their compression and/or placement in an external
database (as Marcus does) can save the situation.


On 30.09.2016 12:59, Amos Jeffries wrote:
> On 30/09/2016 6:58 p.m., Darren wrote:
>> Thank you Amos
>>
>> The resources I save not running multiple Squidguards will make more
>> ram available as you say and having a simpler setup is never a bad
>> thing either.
>>
>> Just to clarify, so when squid fires up, it caches the ACL file into
>> ram in it's entirety and then does some optimizations? If that is
>> the case I would need to budget the ram to allow for this.
>
> Not quite. Squid still reads the files line by line into a memory
> structure for whatever type of ACL is being loaded. That is part of why
> it's so much slower to load than the helpers (which generally do as you
> describe).
>
> The optimizations are type dependent and fairly simplistic. Ignoring
> duplicate entries, catenating regex into bigger " A|B " patterns (faster
> to check against), etc.
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJX7kq8AAoJENNXIZxhPexGH+cH/jmZsQlcZgXpwt62pHDtHp4t
TWDnhr5KOfHv+GFeBUmJYuD2nn8wefb5KUUhea5fdpRAeDihFDQDPQDwAnaC/E5q
FzE68zh+nF13xVwTW9/5mQhK75G17mOGJPGFPn1ZUC3lf/Q2JCOhWB+0MFilXXcQ
/ptCeQII/E8oXaiBOvHPzasOp6eDnu/m51q0DnkfoUceEWap9W0rY/vKxwL32FI9
fjqoZGGBPt3FDczjb8/9X6trqeGBwUl4PKSTE4JSdyU6z52evaCSsVbEgAmw+LjI
ELCBPOuU7buFxNjCSNLVhDNQeZJFJxPV8Oh/OcDQZQDhdUYliEwRke5Sz+Rz37k=
=hFD2
-END PGP SIGNATURE-

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--
--

Signed,

Benjamin E. Nichols
http://www.squidblacklist.org

1-405-397-1360 - Call Anytime.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Large text ACL lists

2016-09-30 Thread Benjamin E. Nichols
I would recommend you stop Squid and then start it; simply doing a -k
reconfigure is a bad idea, because sometimes Squid will not reload the
new blacklists. I have no idea why it is unpredictable in this manner,
or whether the problem has since been fixed (I didn't write the
software), but what I do know from experience is that the most reliable
way to ensure the lists actually get reloaded, when using large ACL
domain lists as you are, is to stop squid3 and start it again. That is
also somewhat lame because it takes longer, but it is sure to work.

Anyway, that's my two cents.
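A minimal sketch of that stop/start cycle (assuming a plain `squid`
binary on the PATH; init-script and service names vary by distribution):

```shell
squid -k shutdown                                   # ask the running instance to exit cleanly
while pgrep -x squid >/dev/null; do sleep 1; done   # wait until it is fully gone
squid                                               # start fresh, re-reading all ACL files
```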


On 9/30/2016 8:02 PM, Darren wrote:

One further question

If I have to reload the ACL lists, do I restart Squid, or is there a way
to update without impacting the users too much?

In some scenarios, some ACL lists may change frequently.

thanks again.



Sent from Mailbird


On 1/10/2016 6:05:05 AM, Darren <darren.j.breeze...@gmail.com> wrote:

Hi

My main issue with SquidGuard is that when I try to block, say,
www.facebook.com and the user goes to https://www.facebook.com,
SquidGuard only sees the initial CONNECT as the target IP, so it doesn't
match against the domain entry.

If SquidGuard did a reverse DNS lookup, I could keep using that more
complex filtering solution. That is where the dstdomain ACL is a better
option, but it has the RAM overhead.


Time for some experimentation

thanks again for the feedback




Sent from Mailbird


On 30/09/2016 7:21:53 PM, Yuri Voinov <yvoi...@gmail.com> wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Amos, I'm afraid that this is not a solution. Block lists have become so
huge that only their compression and/or placement in an external
database (as Marcus does) can save the situation.


On 30.09.2016 12:59, Amos Jeffries wrote:
> On 30/09/2016 6:58 p.m., Darren wrote:
>> Thank you Amos
>>
>> The resources I save not running multiple Squidguards will make more
>> ram available as you say and having a simpler setup is never a bad
>> thing either.
>>
>> Just to clarify, so when squid fires up, it caches the ACL file into
>> ram in it's entirety and then does some optimizations? If that is
>> the case I would need to budget the ram to allow for this.
>
> Not quite. Squid still reads the files line by line into a memory
> structure for whatever type of ACL is being loaded. That is part of why
> it's so much slower to load than the helpers (which generally do as you
> describe).
>
> The optimizations are type dependent and fairly simplistic. Ignoring
> duplicate entries, catenating regex into bigger " A|B " patterns (faster
> to check against), etc.
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJX7kq8AAoJENNXIZxhPexGH+cH/jmZsQlcZgXpwt62pHDtHp4t
TWDnhr5KOfHv+GFeBUmJYuD2nn8wefb5KUUhea5fdpRAeDihFDQDPQDwAnaC/E5q
FzE68zh+nF13xVwTW9/5mQhK75G17mOGJPGFPn1ZUC3lf/Q2JCOhWB+0MFilXXcQ
/ptCeQII/E8oXaiBOvHPzasOp6eDnu/m51q0DnkfoUceEWap9W0rY/vKxwL32FI9
fjqoZGGBPt3FDczjb8/9X6trqeGBwUl4PKSTE4JSdyU6z52evaCSsVbEgAmw+LjI
ELCBPOuU7buFxNjCSNLVhDNQeZJFJxPV8Oh/OcDQZQDhdUYliEwRke5Sz+Rz37k=
=hFD2
-END PGP SIGNATURE-

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--
--

Signed,

Benjamin E. Nichols
http://www.squidblacklist.org

1-405-397-1360 - Call Anytime.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] New Domain Blacklist Options...

2016-08-17 Thread Benjamin E. Nichols
We heard you loud and clear: you wanted our enhanced blacklists in an
archive/file structure similar to Shallalist and urlblacklist for your
web-filtering platform, so we finally did it. Available now to all
squidblacklist.org members is the new “Universal Archive Structure
Format” for any platform coded for Shallalist- or urlblacklist-style
structured archives; just copy and paste the link (registration
required).


http://www.squidblacklist.org/downloads/squidblacklists/squidblacklist.tar.gz 




--
--

Signed,

Benjamin E. Nichols

http://www.squidblacklist.org

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Running squid on a machine with only one network interface.

2016-06-27 Thread Benjamin E. Nichols

You clowns are overcomplicating this.

Simply add a firewall rule allowing the IP of the Squid box to bypass
your redirect rule.

(Squid has to be able to bypass your port 80 redirect rule to fetch
HTTP data from the web; hence the forwarding loop error.)
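For example, on a Linux gateway intercepting port 80 (the addresses are
placeholders, and the same idea applies to IPFW on FreeBSD):

```shell
# Let the squid box itself (192.0.2.10) skip interception...
iptables -t nat -A PREROUTING -s 192.0.2.10 -p tcp --dport 80 -j ACCEPT
# ...while everyone else's web traffic is redirected into the proxy.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to 192.0.2.10:3128
```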


--
Signed,

Benjamin E. Nichols
http://www.squidblacklist.org

1-405-397-1360

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Running squid on a machine with only one network interface.

2016-06-27 Thread Benjamin E. Nichols

Did you add a firewall rule to allow your Squid box/IP to go direct?

You need to; otherwise you'll be sending your traffic in a loop.


On 6/27/2016 3:45 PM, Ataro wrote:


Hi there,


I've set up a FreeBSD machine inside a VirtualBox VM and used IPFW to
forward all requests to the internet through a Squid server running on
the same machine on port 3128 in intercept mode.


The problem is that I get 403 HTTP responses on every site I try to
access, even on the sites that I've explicitly allowed in the squid.conf
file.



I also get a warning message on the tty that Squid is running on (I've
run Squid in no-daemon mode) which says: "WARNING: Forwarding loop
detected for:".



I guess that this error occurs because the Squid server and the IPFW
firewall are running on the same machine, which has only one network
interface.



Am I right?


Regards,


ataro.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--
Signed,

Benjamin E. Nichols
http://www.squidblacklist.org

1-405-397-1360

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Good Home Cable Modem Blacklist

2016-06-27 Thread Benjamin E. Nichols
It would also be trivial to gather up all known IP ranges issued to
consumer cable ISPs and convert them to a domain-name ACL compatible
format.


I will put it on the whiteboard.


On 6/27/2016 12:21 PM, Antony Stone wrote:

On Monday 27 June 2016 at 19:06:17, Michael Pelletier wrote:


Does anyone know of a good blacklist of home cable modems?

I don't think you'll get any list of *home* cable modems, which excludes small
business connections as well.

Also, with a lot of ISPs, I don't think you'll get a list of *cable* modems,
separate from DSL modems; many of them use combined DHCP pools for both.

However, depending on what your reason for needing such a list is, you
might find that a sufficiently effective solution is to do a reverse DNS
lookup on an IP address and look for any of:

cable
dsl
dynamic
pool

as discrete words (often in a format such as "cable-79-35-42-183.isp.com").
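That heuristic can be sketched as a small shell function; the word list
follows the suggestion above, and in real use the PTR name would come
from something like `dig +short -x <ip>`:

```shell
# Return success if the PTR name contains one of the tell-tale words
# ("cable", "dsl", "dynamic", "pool") as a discrete label component.
is_consumer_line() {
    echo "$1" | grep -Eq '(^|[.-])(cable|dsl|dynamic|pool)([.-]|$)'
}

is_consumer_line "cable-79-35-42-183.isp.com" && echo "looks like a consumer line"
```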






Hope that helps,


Antony.



--
Signed,

Benjamin E. Nichols
http://www.squidblacklist.org

1-405-397-1360

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Regex optimization

2016-06-16 Thread Benjamin E. Nichols



On 6/16/2016 3:28 PM, Yuri Voinov wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
  
I propose to nominate this for second place in the contest "The most
inefficient use of computing resources - 2016". :-!:-D

Because first place is already occupied. :-D 30 million porn sites in one
Squid ACL and 7 minutes for squid -k refresh. 8-)
Yeah, and I'll bet about 27 million of them are dead, expired, parked,
or redirected, because your list sucks.

If you really intend to use blacklists tailored for Squid native ACLs,
we are the leading and only provider of such lists. We actually query
each domain daily with batch updates; dead domains are placed into a
holding pool to be queried again cyclically and re-added as necessary.

Shallalist is a joke and urlblacklist is garbage; if you are serious and
need a better blacklist, we would be happy to serve you.





On 17.06.2016 1:20, Antony Stone wrote:

On Thursday 16 June 2016 at 21:11:50, Alfredo Rezinovsky wrote:


Well... I tried.
I need to ban 8613 URLs, because of a law.

Have you considered https://www.urlfilterdb.com/products/ufdbguard.html ?


If I put one per line in a file and set the filename for a url_regex
ACL, it works. But when the traffic goes up, the CPU load goes to 100%
(even using workers) and the proxy becomes unusable.

Er, I'm not surprised.


I tested and saw that my Squid can't parse regexes with more than 8192
characters. I managed to combine the 8000 URIs into 34 regexes using a
Ruby gem, and the CPU load stays almost at the same level as without any
ACL (same traffic).

That must be *way* past anything that could be described as "maintainable".


the regex is:

Er, thanks, that confirms my suspicions above :)


Antony.


-BEGIN PGP SIGNATURE-
Version: GnuPG v2
  
iQEcBAEBCAAGBQJXYwv7AAoJENNXIZxhPexGn9QH/R2ino1lTfOWrd4E8Z+UUsuH

wjEfi4e96ptkkye57mcOTHXiLgrau+x+vXVS35CNgpwsB3daN1/E6DvAZz/XwABJ
O6/aqIn/JNKmkwLj/XPB0nD0lsrWXoOdknGpL7r/E9un2N2mfAdBVKUbItAuUM+G
DQeKfnRjCDS0Pgt4zlNIQjo0xxSxrjrHThKoWlAi00v2LzWkSmJtbZyW1WtzNbNf
qH8j1LlTbiOg9FmOpp+GVQ8XKEjGnWnhjnydKdVlPr9mXCA6XN5Kn5L6tmckqSc/
Snn9jKZfJAtTg97gTzOJpw9BuGw7pqSRyARcV0/t4PsySNTD/4NpJz/HVKhlT+E=
=Mgx4
-END PGP SIGNATURE-



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--
Signed,

Benjamin E. Nichols
http://www.squidblacklist.org

1-405-397-1360

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Somewhat OT: Content Filter with https

2016-06-08 Thread Benjamin E. Nichols
We have many subscribers who use our blacklists with ufdbGuard as their
primary content filter, and they seem to be quite satisfied.

Of course we are going to promote our services, but to be forthright
with a response: ufdbGuard seems to have gained quite a lot of traction,
and there is a reason for that.

I would agree that it should be the best choice.

--
Signed,

Benjamin E. Nichols
http://www.squidblacklist.org

1-405-397-1360




Re: [squid-users] Why is overlapping dstdomains a FATAL error now?

2015-08-06 Thread Benjamin E. Nichols
Agreed, whoever decided it was a wise decision to make this a stop error 
should be fired or at the very least, slapped in the back of the head.


On 8/6/2015 6:44 PM, Dan Charlesworth wrote:
This used to just cause a WARNING right? Is this really a good enough 
reason to stop Squid from starting up?


2015/08/07 09:25:43| ERROR: '.ssl.gstatic.com' is a subdomain of '.gstatic.com'
2015/08/07 09:25:43| ERROR: You need to remove '.ssl.gstatic.com' from the ACL named 'cache_bypass_domains'
FATAL: Bungled /etc/squid/squid.conf line 149: acl cache_bypass_domains dstdomain /acls/lists/8/squid_domains
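For what it's worth, the overlap the error complains about can be pruned mechanically before Squid ever reads the file. A minimal sketch (the function name is mine; entries follow the usual one-dstdomain-per-line convention, with or without the leading dot):

```python
def prune_subdomains(entries):
    """Drop any dstdomain entry that is a subdomain of another entry,
    the overlap newer Squid treats as FATAL (e.g. '.ssl.gstatic.com'
    is already covered by '.gstatic.com')."""
    # Normalise: dstdomain wildcard entries start with a dot.
    normalised = {e if e.startswith(".") else "." + e for e in entries}
    kept = []
    for dom in sorted(normalised, key=len):  # shorter (parent) domains first
        if not any(dom.endswith(parent) for parent in kept):
            kept.append(dom)
    return sorted(kept)

doms = [".gstatic.com", ".ssl.gstatic.com", "example.com", ".www.example.com"]
print(prune_subdomains(doms))  # ['.example.com', '.gstatic.com']
```

Running the ACL file through something like this on each update keeps Squid starting cleanly regardless of whether the overlap is a WARNING or a FATAL in your version.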





Re: [squid-users] Squid proxy to block sites

2015-05-29 Thread Benjamin E. Nichols

Here is a working conf.

--- http://www.squidblacklist.org/downloads/squid.conf.txt

And here is the world's largest porn blacklist (23 MB, 1,27x,xxx domains):


-- 
http://www.squidblacklist.org/downloads/squidblacklists/squid-porn.tar.gz


On 5/29/2015 5:19 AM, Nishant Sharma wrote:

On Friday 29 May 2015 03:09 PM, Flupke wrote:


All those files are under 1 MB. One file is bigger: the porn file is around 16 MB, and when loading this file, the Squid service crashed.

When I loaded this config it worked just fine.

What can I do to work around this issue?


Can you try to run squid in foreground with following command:

squid -NX -f /path/to/squid.conf

and see what does it say before crashing?

Regards,
Nishant


Re: [squid-users] Transparent proxy and ppp

2012-03-19 Thread Benjamin E. Nichols

I know this is just my opinion, but if it were me, I would use a dedicated hardware device as the VPN/PPP client and just pipe that out to a switch, to make things less complicated.

You could use a DD-WRT enabled router, or many other platforms, to do this dirty work for you.


That way your work on the squid side would be a heck of a lot easier.

But then again, that's just me.



On 03/19/2012 07:53 PM, Amos Jeffries wrote:

On 20.03.2012 15:30, zozo zozo wrote:

Hi all

I've setup squid and it works if I forward network from eth0 to wlan0
(ap mode)
But if instead of ethernet I try to use ppp0 packets, squid doesn't
forward stuff, and in access log entries were something like 0_ABORTED
(don't have those logs at hand, will provide more info tomorrow)
Ports that are not sent to squid work fine, ICMP and HTTPS are
forwarded correctly.

ppp0 is interface created by wvdial (I share 3G modem internet)
syslog doesn't show anything interesting

Is there anything special to know about ppp and squid?


Just that squid does not interact with NIC directly.
Squid sends packets with a source IP address (set by either 
tcp_outgoing_address or system-selected default IP). What happens to 
those packets is up to the OS routing.


Amos
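To make Amos's point concrete: in the usual transparent setup, interception happens on the client-facing interface, and Squid's outbound packets then follow normal OS routing out ppp0. A rough, untested sketch (interface and port names taken from this thread; verify against your own setup):

```
# squid.conf: accept intercepted connections
http_port 3128 intercept

# iptables nat table: redirect HTTP arriving from the AP side (wlan0)
# to the local Squid; traffic leaving via ppp0 is untouched
-A PREROUTING -i wlan0 -p tcp --dport 80 -j REDIRECT --to-ports 3128
```

If the 0_ABORTED entries persist, checking that replies from Squid can actually route back out ppp0 (rp_filter, NAT masquerading on ppp0) would be the next thing to look at.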





Re: [squid-users] Stopping Torrent access in Squid

2012-03-11 Thread Benjamin E. Nichols

Well, you could begin by enabling Squid's built-in blacklisting feature and then adding all known torrent tracker domains to the blacklist; that would be a good start.

Then I think you can use another blacklisting method, based on regex, to block *torrent* from your network. But others on here would be better equipped to explain regex; it's a bit too leet for my current level of knowledge.

Anyone else have any answers for this question?
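The regex idea above can be sketched as a couple of squid.conf lines (ACL names are illustrative; note that a bare match on "torrent" will also block innocent pages that merely contain the word, and that actual peer-to-peer transfers never pass through Squid at all, so this only stops tracker and .torrent-file traffic going through the proxy):

```
acl torrent_files url_regex -i \.torrent(\?|$)
acl torrent_words url_regex -i torrent
http_access deny torrent_files
http_access deny torrent_words
```

As with any http_access rule, these must appear before the final "http_access allow" line for your clients.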

On 03/11/2012 05:38 AM, Vishal Agarwal wrote:

Hi,
I am Vishal.

Pl advise me how I can stop Torrent Downloads, which are getting connected
through squid, using Connect method Connect ACL.

Thanks/regards,
Vishal Agarwal


   




Re: [squid-users] Re: access.log issues with squid 3.2.0.15

2012-03-10 Thread Benjamin E. Nichols
Now that is an odd location for log files; forgive me for the unproductive intrusion.

But may I ask, is this a Windows variant?

On 03/09/2012 11:36 PM, gewe...@gmx.net wrote:

I had "access_log stdio:/Applications/oss/logs/access.log squid", which worked fine.

Today, I switched to:

logformat customfmt %tl
access_log stdio:/Applications/oss/logs/access-customfmt.log customfmt

based on the suggestion in
http://squid-web-proxy-cache.1019090.n4.nabble.com/Date-time-format-in-access-log-td1458569.html
This doesn't seem to have any effect. In fact, squid continues to log to /Applications/oss/logs/access.log in the squid native format. I've tried with or without the stdio: part. I did restart or -k reconfigure squid. Am I missing something obvious?
 

/Applications/oss/logs/access.log
vs
/Applications/oss/logs/access-customfmt.log

perhaps?


Amos

 

Nah, I double-checked that. In fact, squid still logs in epoch timestamps with 
the following config:

logformat customfmt %tl
access_log stdio:/Applications/oss/logs/access.log customfmt

Or it only logs to access.log, with the following:

logformat customfmt %tl
access_log stdio:/Applications/oss/logs/access-customfmt.log customfmt
access_log stdio:/Applications/oss/logs/access.log squid

It's almost as if squid is using a cached copy of its previous squid.conf.
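Two things worth checking here, as a hedged note: squid.conf is parsed top to bottom, so the logformat definition must appear before any access_log line that references it, and each access_log line is additive, so a leftover "access_log ... squid" line anywhere in the file (or an included file) will keep the native-format log alive. A minimal sketch with a few more format codes than the bare %tl (codes per the logformat documentation):

```
# Define the format first, then reference it by name.
# %tl local time, %>a client IP, %rm method, %ru URL, %>Hs status sent
logformat customfmt %tl %>a %rm %ru %>Hs
access_log stdio:/Applications/oss/logs/access-customfmt.log customfmt
```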
   




Re: [squid-users] Squid 2.7 problem with url

2012-03-09 Thread Benjamin E. Nichols

Please use the Squid 3.x series; 2.7 is ancient.

On 03/09/2012 08:18 PM, Iojan Sebastian wrote:



Clue #1:


Via: 1.1 cache:31280 (Lusca/LUSCA_FMI)


This appears to be Lusca. Not Squid. Lusca is a commercial branch of 
Squid-2.7. Please contact Xenion for Lusca support.


Amos

Sorry, I downloaded it from Google Code, compiled, and installed it; I didn't know that it was commercial and/or that it had diverged so far from Squid.


Thanks
Sebastian






Re: [squid-users] squid with squidguard issue

2012-03-05 Thread Benjamin E. Nichols

Well, you could use Squid's built-in blacklist capabilities instead of adding complexity by trying to use squidGuard or DansGuardian, particularly if you're a noob at Squid. I've taken a look at them and decided that it's too much effort to try to implement. Rather, this is how I've done it.


Try this instead; it's what I do.

Create a blacklist file and place it somewhere; mine is in my squid dir:

/etc/squid3/squid-block.acl  (you can name it whatever you want, of course)

Add a few test entries to this file in the following format:

.pornsite.com
.unwantedsite.com
.whatevershit.com
.someshitwebsite.com

The leading . will ensure that www.pornsite.com or any subdomain is also blocked.


So next, add these lines to your squid.conf:

#blacklist by haxradio.com==

acl blacklist dstdomain /etc/squid3/squid-block.acl
http_access deny blacklist

#==

then do

squid3 -k reconfigure   (assuming that you're running the squid 3.x series)

Voila, you are blocking sites using a blacklist, my friend.

By the way, just ignore the warning messages; they do not affect the functionality of this feature, and I've learned to just ignore them.

Thanks to Amos for helping me to properly do this.
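Once the list grows beyond a few test entries, it is easy to end up with missing leading dots or duplicates. A small illustrative sketch (the function name is mine) that normalises such a file into the format described above:

```python
def normalise_blacklist(lines):
    """Clean a dstdomain blacklist: drop blanks, comments, and
    duplicates, and make sure every entry carries the leading dot
    so subdomains are matched too."""
    seen, out = set(), []
    for raw in lines:
        entry = raw.strip().lower()
        if not entry or entry.startswith("#"):
            continue
        if not entry.startswith("."):
            entry = "." + entry
        if entry not in seen:
            seen.add(entry)
            out.append(entry)
    return out

print(normalise_blacklist(["pornsite.com", ".pornsite.com", "", "# comment"]))
# ['.pornsite.com']
```

Piping /etc/squid3/squid-block.acl through this before each "squid3 -k reconfigure" keeps the list tidy.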





On 03/05/2012 05:19 PM, jeffrey j donovan wrote:

On Mar 5, 2012, at 8:40 AM, Muhammad Yousuf Khan wrote:

   

Can someone please help? I followed
http://wiki.debian.org/DebianEdu/HowTo/SquidGuard and am using Lenny,
Squid 2.7, and squidGuard 1.2.0.

I wrote the line below at the end of squid.conf:
redirect_program /usr/bin/squidGuard
 

okay

   

I denied ads in squidGuard.conf, and addme.com is a domain which I am sure is in the blocklist database. Now when I go to addme.com it just opens the website (which I don't want).

here is squidGuard.conf rule.

dest adult {
domainlist  ads/domains
#   urllist /var/lib/squidguard/db/blacklists/porn/urls
#   expressionlist  adult/expressions
redirect    http://google.com

}
 

You need to supply a source and a destination: basically, who is allowed to access squidGuard. Then you tell squidGuard what to do with the client's request: allow or deny.

eg;
dbhome /usr/local/squidGuard/db
logdir /usr/local/squidGuard/log


#
# SOURCE ADDRESSES:

src admin {
ip  10.1.1.1
}

src fooclients {
ip  10.132.0.0/16 10.155.0.0/16
}

src freedomzone {
ip  10.154.1.0/24 10.154.2.0/24
}
# DESTINATION CLASSES:
#
dest whitelist {
domainlist  whitelist/domains
}
dest education {
domainlist education/schools/domains
urllist education/schools/urls
}
dest denied {
domainlist  denied/domains
urllist denied/urls
redirect    http://10.0.2.3/surfb1.html
log deniedaccess.log
}

acl {
admin {
pass any
}

fooclients {
pass    whitelist education !denied any
} else {
pass any
}
freedomzone {
pass    whitelist education !pornexp !porn any
redirect http://staff2.beth.k12.pa.us/index.html
} else {
pass any
}

default {
pass none
redirect http://10.0.2.3/index.html
}
}




   

Here is the squidGuard log (/var/log/squid/squidGuard.log):

2012-03-05 08:06:53 [4180] squidGuard 1.2.0 started (1330952813.099)
2012-03-05 08:06:53 [4180] recalculating alarm in 30187 seconds
2012-03-05 08:06:53 [4180] squidGuard ready for requests (1330952813.101)
2012-03-05 08:06:53 [4182] destblock good missing active content, set inactive
2012-03-05 08:06:53 [4182] destblock local missing active content, set inactive
2012-03-05 08:06:53 [4182] init domainlist /var/lib/squidguard/db/ads/domains
2012-03-05 08:06:53 [4182] loading dbfile /var/lib/squidguard/db/ads/domains.db
2012-03-05 08:06:53 [4182] squidGuard 1.2.0 started (1330952813.107)
2012-03-05 08:06:53 [4182] recalculating alarm in 30187 seconds
2012-03-05 08:06:53 [4182] squidGuard ready for requests (1330952813.108)

Here is access.log. The thing which is making me confused is that the redirect tag, which is supposed to be there, is not present. However, I cannot find any redirect tag in the default 2.7 squid.conf file either. Can you please tell me what is going on, and how I can redirect or otherwise solve the issue?

1330953994.304640 10.51.100.240 TCP_CLIENT_REFRESH_MISS/200 1910
GET http://www.addme.com/favicon.ico - DIRECT/69.43.161.4 image/x-icon


Thanks,
 
   




[squid-users] Unable to forward this request at this time. cache_peer

2012-02-28 Thread Benjamin E. Nichols

OK, I have a network, 192.168.1.x, with a Squid proxy at 192.168.1.205, upstream of network 10.10.1.x, which is my local network with a Squid proxy at 10.10.1.105.

Both Squids are 3.1.16 on Debian, and I need to know which lines to add to the conf to allow cache peering with the upstream proxy cache. Of course I would like both Squids to serve from cache when possible.


Below is the conf for the 10.10.1.x proxy

===

http_port 10.10.1.105:3128
hierarchy_stoplist cgi-bin ?
icp_port 3129

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

dns_nameservers 10.10.1.1
hosts_file /etc/hosts
cache_swap_low 95
cache_swap_high 98
access_log /var/log/squid3/access.log
cache_mem 500 MB

memory_pools on
maximum_object_size_in_memory 150 MB
maximum_object_size 150 MB
log_icp_queries off
half_closed_clients on
cache_mgr mrnicho...@gmail.com
cache_dir ufs /mnt/secondary/var/spool/squid3 14000 32 256
visible_hostname deviant.evil
shutdown_lifetime 1 second

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl localnet src 10.10.1.0/24
#acl blacklist dstdomain /mnt/secondary/squid3/squid-block.acl

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#icp_access allow  localnet
#icp_access deny all

#http_access deny blacklist
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access deny all
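One hedged sketch of what the downstream (10.10.1.105) box might add, using the addresses from this conf and assuming the upstream listens on the same 3128/3129 ports; the upstream Squid's own http_access and icp_access rules must also permit requests from 10.10.1.105:

```
# Treat the upstream box as a parent cache, queried over ICP first;
# objects fetched through it are still cached locally.
cache_peer 192.168.1.205 parent 3128 3129 default
prefer_direct off
```

Uncommenting the icp_access lines already in this conf would likewise let this box answer ICP queries if the peering is ever made two-way.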




[squid-users] Need help with Parent/Client proxy configuration

2012-02-28 Thread Benjamin E. Nichols

I currently have two networks, one is upstream of the other


192.168.1.x with a Squid 3.1.16 cache @ 192.168.1.205

and downstream

10.10.1.x network with a Squid 3.1.16 proxy cache at 10.10.1.105


I need to know what I need to add to the 10.10.1.x proxy config file to enable caching from the upstream Squid box, and I want both Squid machines to serve from cache.



#
#Begin Squid Configuration  10.10.1.105
#

http_port 10.10.1.105:3128
hierarchy_stoplist cgi-bin ?
icp_port 3129




refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

dns_nameservers 10.10.1.1
hosts_file /etc/hosts
cache_swap_low 95
cache_swap_high 98
access_log /var/log/squid3/access.log
cache_mem 500 MB
memory_pools on
maximum_object_size_in_memory 150 MB
maximum_object_size 150 MB
log_icp_queries off
half_closed_clients on
cache_mgr mrnicho...@gmail.com
cache_dir ufs /mnt/secondary/var/spool/squid3 14000 32 256
visible_hostname deviant.evil
shutdown_lifetime 1 second

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl localnet src 10.10.1.0/24
#acl blacklist dstdomain /mnt/secondary/squid3/squid-block.acl


acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#icp_access allow  localnet
#icp_access deny all

#http_access deny blacklist
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access deny all