Re: [squid-users] TProxy and client_dst_passthru

2015-07-02 Thread Stakres
Hi,

I'm back to this post because it still does not work.
You explain "OFF - Squid selects a (possibly new, or not) IP to be used as
the server (logs DIRECT)." Sorry to say, this is not the reality in Squid.
We have set the pass-thru directive to OFF and here is the result:
TCP_MISS/206 72540 GET
http://www.google.com/dl/chrome/win/B6585D9F8CF5DBD2/43.0.2357.130_chrome_installer.exe
- ORIGINAL_DST/216.58.220.36

Is there a way to totally disable the DNS control done by Squid?

Thanks 

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4672013.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] acl for redirect

2015-07-02 Thread Stuart Henderson
On 2015-07-01, Mike mcsn...@afo.net wrote:
 This is a proxy server, not a DNS server, and does not connect to a DNS 
 server that we have any control over... The primary/secondary DNS is 
 handled through the primary host (Cox) for all of our servers so we do 
 not want to alter it for all several hundred servers, just these 4 
 (maybe 6).
 I was originally thinking of modifying the resolv.conf but again that is 
 internal DNS used by the server itself. The users will have their own 
 DNS settings causing it to either ignore our settings, or go right back to 
 the "Website cannot be displayed" errors due to the DNS loop.

resolv.conf would work, or you can use dns_nameservers in squid.conf and
point just squid (if you want) to a private resolver configured to hand
out the forcesafesearch address.

When a proxy is used, the client defers name resolution to the proxy, so you
don't need to change DNS on client machines to do this.
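
For example, a minimal squid.conf sketch (the resolver address 192.0.2.53 is
a placeholder for illustration, not a value from this thread):

dns_nameservers 192.0.2.53

With that set, only Squid queries the private resolver handing out the
forcesafesearch address; client machines keep their existing DNS settings.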

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] acl for redirect

2015-07-02 Thread Rafael Akchurin
Hello Mike,

Access to ICAP is controlled with the same kind of ACLs as access to anything else.
Something like:

icap_enable on
icap_service qlproxy1 reqmod_precache bypass=0 icap://127.0.0.1:1344/reqmod
icap_service qlproxy2 respmod_precache bypass=0 icap://127.0.0.1:1344/respmod
acl target_domains dstdomain /path/to/target/domains/list
adaptation_access qlproxy1 allow target_domains
adaptation_access qlproxy2 allow target_domains
adaptation_access qlproxy1 deny all
adaptation_access qlproxy2 deny all

will forward *only* requests/responses to those domain names specified in 
/path/to/target/domains/list to ICAP REQMOD and RESPMOD services.
All other connections are not forwarded to ICAP.
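
For reference, a sketch of what the dstdomain list file itself looks like:
plain text, one domain per line, where a leading dot also matches subdomains
(these example entries are illustrative, not from this thread):

.google.com
.youtube.com
example.net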

Raf


From: Mike mcsn...@afo.net
Sent: Wednesday, July 1, 2015 5:11 PM
To: Rafael Akchurin; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] acl for redirect

Rafael, We're trying to keep the setups lean, and primarily just deal
with google and youtube, not all websites. ICAP processes deal with a
whole new layer of complexity and usually cover all websites, not just
the few.

On 6/30/2015 16:17 PM, Rafael Akchurin wrote:
 Hello Mike,

 Maybe it is time to take a look at ICAP/eCAP protocol implementations which 
 target specifically this problem - filtering within the *contents* of the 
 stream on Squid?

 Best regards,
 Rafael

Marcus,

This is multiple servers used for thousands of customers across North
America, not an office, so changing from a proxy to a DNS server is not
an option, since we would also be required to change all several
thousand of our customers' DNS settings.

On 6/30/2015 17:30 PM, Marcus Kool wrote:
 I suggest to read this:
 https://support.google.com/websearch/answer/186669

 and look at option 3 of section 'Keep SafeSearch turned on for your
 network'

 Marcus

Such a pain; there is no reason our everyday searches should be
encrypted.


Mike

 -Original Message-
 From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
 Behalf Of Mike
 Sent: Tuesday, June 30, 2015 10:49 PM
 To: squid-users@lists.squid-cache.org
 Subject: Re: [squid-users] acl for redirect

 Scratch that (my previous email to this list), google disabled their insecure 
 sites when used as part of a redirect. We as individual users can use that 
 url directly in the browser (
 http://www.google.com/webhp?nord=1 ) but any google page load starts with 
 the secure page, causing that redirect to fail... The newer 3.1.2 e2guardian SSL 
 MITM requires options (like a der certificate file) that cannot be used with 
 thousands of existing users on our system, so squid may be our only option.

 Another issue right now is google is using a VPN-style internal redirect on 
 their server, so e2guardian (shown in log) sees
 https://www.google.com:443  CONNECT, passes along TCP_TUNNEL/200
 www.google.com:443 to squid (shown in squid log), and after that it is 
 directly between google and the browser, not allowing e2guardian nor squid to 
 see further urls from google (such as search terms) for the rest of that 
 specific session. Users can click news, maps, images, videos, and NONE of 
 these are passed along to the proxy.

 So my original question still stands, how to set an acl for google urls that 
 references a file with blocked terms/words/phrases, and denies it if those 
 terms are found (like a black list)?

 Another option I thought of: since the meta content in the code, including 
 the title, is passed along, is there a way to have it scan the header or 
 title content as part of the acl content scan process?


 Thanks
 Mike


 On 6/26/2015 13:29 PM, Mike wrote:
 Nevermind... I found another fix within e2guardian:

 etc/e2guardian/lists/urlregexplist

 Added this entry:
 # Disable Google SSL Search
 # allows e2g to filter searches properly
 "^https://www.google.[a-z]{2,6}(.*)"->"http://www.google.com/webhp?nord=1"


 This means whenever google.com or www.google.com is typed in the
 address bar, it loads the insecure page and allows e2guardian to
 properly filter whatever search terms they type in. This does break
 other aspects such as google toolbars, using the search bar at upper
 right of many browsers with google as the set search engine, and other
 ways, but that is an issue we can live with.

 On 26/06/2015 2:36 a.m., Mike wrote:
 Amos, thanks for info.

 The primary settings being used in squid.conf:

 http_port 8080
 # this port is what will be used for SSL Proxy on client browser
 http_port 8081 intercept

 https_port 8082 intercept ssl-bump connection-auth=off
 generate-host-certificates=on dynamic_cert_mem_cache_size=16MB
 cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key
 cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-
 RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH


 sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db -M 16MB
 sslcrtd_children 50 startup=5 idle=1
 ssl_bump server-first all
 ssl_bump 

Re: [squid-users] Squid kerberos_ldap_group ACL dependencies on SUSE12.

2015-07-02 Thread Ashish Behl
Thanks a lot Tom, I was able to compile after setting this PATH.

thanks again for your help.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-kerberos-ldap-group-ACL-dependencies-on-SUSE12-tp4672008p4672015.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TProxy and client_dst_passthru

2015-07-02 Thread Stakres
Hi Amos,

216.58.220.36 != www.google.com ??? 
Have a look: http://www.ip-adress.com/whois/216.58.220.36, this is google.

Depending on the DNS server used, the IP can change; we know that,
especially due to BGP.

In this case the client is an ISP providing internet to smaller ISPs whose
end users use different DNS servers. Here I understand that, due to the
ORIGINAL_DST check, Squid compares the Host header against its own DNS
records, and if they do not match then Squid will not cache, even with a
StoreID engine, because there are too many different DNS servers in the loop
(users - small ISP - big ISP - squid - internet). Am I right?
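
(For reference, "StoreID engine" here means Squid's store_id_program helper
interface from 3.4+; a minimal sketch, with a hypothetical helper path:

store_id_program /usr/lib/squid/storeid.pl
store_id_children 20 startup=5 idle=1

The helper rewrites equivalent URLs to one shared cache key, but the Host
verification described above still decides cacheability.)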

So, the result is a very poor 9% saving where we could expect around 50%
saving. 

Can you plan, for a next build, a workaround to accept the original DNS
record from the headers and check DNS if and only if the headers do not
contain any DNS record?
I understand Squid should provide some security checks, but here we should
have the possibility to turn these checks ON/OFF.
Or do we need to downgrade to Squid 2.7/3.0 ?

ISPs need to cache a lot, security is not their main issue.

Thanks in advance.
Fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4672020.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] a lot of TCP_SWAPFAIL_MISS/200

2015-07-02 Thread Amos Jeffries
On 2/07/2015 11:31 a.m., HackXBack wrote:
 after upgrading to 3.5.5
 I see in cache.log
 2015/07/02 01:51:51 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
 directory
 2015/07/02 01:51:51 kid1|   /cache01/2/16/AA/0016AA3B
  - ORIGINAL_DST/203.77.186.75 video/mp4
 access.log
 TCP_SWAPFAIL_MISS/200
 

Your cache index (from swap.state) does not match what objects actually
exist on disk. This is Squid auto-recovery fetching a new copy from the
network.
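
One common way to clear the mismatch, sketched here with hypothetical paths
(stop Squid, drop the stale index, and let Squid rebuild it by scanning the
cache_dir on the next start):

  squid -k shutdown
  rm /cache01/swap.state
  squid

The rebuild scan can take a long time on a large cache.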

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid version 3.5.5

2015-07-02 Thread Amos Jeffries
On 2/07/2015 5:55 p.m., Paul Martin wrote:
 Hello
 
 Thanks and just another question about internet bandwidth
 
 I notice
 -squid version 3.3.8: I have 40k squid cache objects
 -squid version 3.5.5: I have 2k squid cache objects
 
 Does it mean I need 10x to 20x more internet bandwidth with
 squid 3.5.5 compared to 3.3.8 ?

No. They are not related like that. You could have a full cache and not
HIT at all, or a single object that HITs for 100% of all traffic.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TProxy and client_dst_passthru

2015-07-02 Thread Amos Jeffries
On 2/07/2015 6:32 p.m., Stakres wrote:
 Hi,
 
 I'm back to this post because it still does not work.
 You explain "OFF - Squid selects a (possibly new, or not) IP to be used as
 the server (logs DIRECT)." Sorry to say, this is not the reality in Squid.
 We have set the pass-thru directive to OFF and here is the result:
 TCP_MISS/206 72540 GET
 http://www.google.com/dl/chrome/win/B6585D9F8CF5DBD2/43.0.2357.130_chrome_installer.exe
 - ORIGINAL_DST/216.58.220.36
 
 Is there a way to totally disable the DNS control done by Squid?

No. On the requests where ORIGINAL_DST is mandatory, it is so because the
client Host header contains an identifiable problem. The URL cannot be
cached without allowing other clients to be affected by that problem.
Specifically, that 216.58.220.36 != www.google.com.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TProxy and client_dst_passthru

2015-07-02 Thread Stakres
Hi Yury,

In your installation, with your devices... At home, I do the same as you,
but I'm not an ISP.

Here the issue is that end users could use different DNS servers the ISPs
cannot control.
At home or in an enterprise, the admin can control which DNS servers the
devices use. In an ISP environment, we cannot control/manage that; end users
do what they want.
Two different worlds, not the same rules, sorry

Fred






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4672024.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TProxy and client_dst_passthru

2015-07-02 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
 
Fred,

I'm not talking about a localhost installation.

My squid serves a business center, with hundreds of users.

In this environment we also use transparent DNS interception onto a DNS
cache. The DNS cache itself uses clean sources for resolving, via dnscrypt.

This permits me almost full control over DNS. ;)
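
A minimal sketch of such interception with iptables on the gateway, assuming
the DNS cache listens locally on port 53 (rule details will vary per setup):

iptables -t nat -A PREROUTING -p udp --dport 53 -j REDIRECT --to-ports 53
iptables -t nat -A PREROUTING -p tcp --dport 53 -j REDIRECT --to-ports 53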

Sorry, but you can build your own world. :)) Or not: as you wish.

WNR, Yuri

02.07.15 18:59, Stakres wrote:
 Hi Yury,

 In your installation, with your devices... At home, I do the same as you,
 but I'm not an ISP.

 Here the issue is that end users could use different DNS servers the ISPs
 cannot control.
 At home or in an enterprise, the admin can control which DNS servers the
 devices use. In an ISP environment, we cannot control/manage that; end
 users do what they want.
 Two different worlds, not the same rules, sorry

 Fred






 --
 View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4672024.html
 Sent from the Squid - Users mailing list archive at Nabble.com.
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users

-BEGIN PGP SIGNATURE-
Version: GnuPG v2
 
iQEcBAEBCAAGBQJVlT4lAAoJENNXIZxhPexG90EH/0YI+7ERqjv32GDz564YupeF
Cu0y2oCdclt5zNBQMVzXfKOwYpePk6XDk9coSCMiTPOq8gjagB4sx5nm+da3tCd/
+vJvF17ht4f0Ue1CPblv7h2McX+ui6+92V3/saaDMMHr59XjAqfycg3Iev8wnH56
uWL35hYfm+djZVse0roKUdB4E43fAFH5NelMEnFOdWRXuJn8WFlWPTNMly1mYOzz
5KwQR0mWhb9QyKgQc/rWmsEoby2SxqulkbpkHfu5cT+F1G0CtcNvjcaseEZ7S9ku
WSaex0XNQtBX/WDEDla/pagPc45yMUBpQXm10k5B4V6RUO8R/67/EZmUXrQ+8EE=
=aBUc
-END PGP SIGNATURE-

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] reply_body_max_size question

2015-07-02 Thread Danny
I am running Debian 8 with Squid3 installed (transparent). However, I would 
like to know a little more about the reply_body_max_size directive. I have 
read quite a bit about it but none of the discussions on the net fit my 
criteria ... 
(Oh yes, squidGuard is also running around my server somewhere doing what it is 
supposed to do ... I hope ... )

It is a home setup with the Debian box serving DHCP IPs over wlan0 (which all 
devices in the house connect to for internet access): 9 laptops, 4 PCs, 7 
tablets and 9 SmartPhones (and that is only the kids' stuff fighting for 
bandwidth supremacy ... ;) ) ... We are all on the same subnet ...

The problem I have (as with most parents) is to limit the kids' download sizes 
from all over the net. Where I am we have capped internet and have to pay for 
more cap. 
Currently I get 20GB of data per month, and by the end of the month I have 
purchased in excess of 100GB in total, which gets very expensive. 
My son plays games on his PS3, and in some of the games (Call of Duty, I think) 
one player can download another player's in-game recorded video (or something 
like that), and that eats up the cap.

Currently my reply_body_max_size is set to 20 MB in my efforts to curb 
downloads and save some bandwidth. 
However, whenever I or my wife want to download something or visit youtube I 
have to raise the 20MB limit, restart Squid3, watch youtube, change the limit 
back to 20MB and reload Squid3 again ... which is a pain in the butt ...

Currently my ACL's look like this:

acl localnet src 10.0.0.0/24
acl localnet_dad_laptop src 10.0.0.10
acl localnet_dad_smartphone src 10.0.0.11
acl localnet_mom_laptop src 10.0.0.12
acl localnet_mom_smartphone src 10.0.0.13
acl localnet_son_laptop src 10.0.0.14
acl localnet_son_smartphone src 10.0.0.15
acl localnet_son_tablet src 10.0.0.16

---and so it goes on for all the other devices---

http_access allow localnet
http_access allow localnet_dad_laptop
http_access allow localnet_dad_smartphone
http_access allow localnet_mom_laptop
http_access allow localnet_mom_smartphone
http_access allow localnet_son_laptop
http_access allow localnet_son_smartphone
http_access allow localnet_son_tablet

---and so it goes on for all the other devices---

How can I allow mom and dad unlimited download sizes but limit download sizes 
for my kids (son, daughter and daughter) and all the kids' friends that visit 
and sleep over?

Thank You

Danny
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] reply_body_max_size question

2015-07-02 Thread Amos Jeffries
On 3/07/2015 2:41 a.m., Danny wrote:
 I am running Debian 8 with Squid3 installed (transparent). However, I would 
 like to know a little more about the reply_body_max_size directive. I have 
 read quite a bit about it but none of the discussions on the net fits my 
 criteria ... 

It works as documented at
http://www.squid-cache.org/Doc/config/reply_body_max_size/.  If that
does not fit your criteria then it's not what you need.


 (Oh yes, squidGuard is also running around my server somewhere doing what it 
 is supposed to do ... I hope ... )
 
 It is a home setup with the Debian box serving DHCP IP's over wlan0 (which 
 all devices in the house connect to for internet access). 9 laptops, 4 PC's, 
 7 tablets and 9 SmartPhones (and that is only the kid's stuff fighting for 
 bandwidth supremacy ... ;) ) ... We are all on the same subnet ...
 
 The problem I have (as with most parents) is to limit the kid's download 
 sizes from all over the net. Where I am we have capped internet and have to 
 pay for more cap. 
 Currently I get 20GB of data every month and by the end of the month I have 
 purchased in excess of 100GB throughout the month which gets very expensive. 
 My son plays games on his PS3 and some of the games (Call of Duty, I think) 
 one player can download another player's in-game recorded video (or something 
 like that) and that eats up the cap.
 
 Currently my reply_body_max_size is set to 20 MB in my efforts to curb 
 downloads and save some bandwidth. 
 However, whenever myself or the wife wants to download or visit youtube I 
 have to change the 20MB limit, restart Squid3, watch youtube, change limit 
 back to 20MB and reload Squid3 again ... which is a pain in the butt ...
 
 Currently my ACL's look like this:
 
 acl localnet src 10.0.0.0/24
 acl localnet_dad_laptop src 10.0.0.10
 acl localnet_dad_smartphone src 10.0.0.11
 acl localnet_mom_laptop src 10.0.0.12
 acl localnet_mom_smartphone src 10.0.0.13
 acl localnet_son_laptop src 10.0.0.14
 acl localnet_son_smartphone src 10.0.0.15
 acl localnet_son_tablet src 10.0.0.16
 
 ---and so it goes on for all the other devices---
 
 http_access allow localnet

NOTE: No http_access ACLs controlling 10.0.0.0/24 have any effect below
this one that allows them all access to use the proxy.

 http_access allow localnet_dad_laptop
 http_access allow localnet_dad_smartphone
 http_access allow localnet_mom_laptop
 http_access allow localnet_mom_smartphone
 http_access allow localnet_son_laptop
 http_access allow localnet_son_smartphone
 http_access allow localnet_son_tablet
 
 ---and so it goes on for all the other devices---
 
 How can I allow mom and dad unlimited download sizes but limit download sizes 
 for my kids (son, daughter and daughter) and all the kid's friends that visit 
 and sleep over?

By applying ACLs for the kids on the reply_body_max_size directive lines
setting the sizes to use for them. Like so:
  reply_body_max_size 50 KB localnet_son_smartphone
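
A fuller sketch along the same lines (the grouping ACLs are additions for
illustration; addresses as in your list):

  acl kids_devices src 10.0.0.14 10.0.0.15 10.0.0.16
  acl parents_devices src 10.0.0.10 10.0.0.11 10.0.0.12 10.0.0.13
  reply_body_max_size 5 MB kids_devices
  reply_body_max_size 20 MB !parents_devices

The first matching line wins, and a request that matches no line gets no
limit, so the parents' devices stay unlimited.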

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] New to Squid, Foward proxy problems with domain blocks.

2015-07-02 Thread Augusto Gabanzo
Hello, as the subject says I'm new. 

 

Been reading a lot and some examples, and I do have a weird problem where I
can't block some domains. First and foremost, I'm using the squid proxy for
windows version 2.7.8, as that's the only one for windows that works for me;
the 3.x versions always deny requests from clients even with the default
conf. I've been testing all this in a production environment so ... help
me!! please or I will get killed soon :D.

 

my conf for 2.7.8 is (I modified one that comes with proxy 3.1):

 

#Modified by Kyi Thar 15 March 2010
http_port 8080
cache_mgr helpd...@ole.com.do
visible_hostname lotus.hidden
hierarchy_stoplist cgi-bin ?
cache_mem 64 MB
cache_replacement_policy heap LFUDA
cache_dir aufs c:/Squid/cache01 2000 16 256
cache_dir aufs c:/Squid/cache02 2000 16 256
cache_dir aufs c:/Squid/cache03 2000 16 256
cache_access_log c:/Squid/var/logs/access.log
cache_log c:/Squid/var/logs/cache.log
cache_store_log c:/Squid/var/logs/store.log
mime_table c:/Squid/etc/mime.conf
pid_filename c:/Squid/var/logs/squid.pid (this part here I don't know what
its use is, as I can't find info about it on the net)
diskd_program c:/Squid/libexec/diskd.exe
unlinkd_program c:/Squid/libexec/unlinkd.exe
logfile_daemon c:/squid/libexec/logfile-daemon.exe
forwarded_for off
via off
httpd_suppress_version_string on
uri_whitespace strip

 

maximum_object_size 524288 KB
maximum_object_size_in_memory 1024 KB

 

#redirect_program c:/usr/local/squidGuard/squidGuard.exe

 

#authentication with Windows server (commented this part as I don't want
#users to have to log on once more in the web pages; I wasn't able to stop
#them from doing so and my boss didn't like the extra hassle)
#auth_param basic program c:/squid/libexec/mswin_auth.exe -O HIDDEN
#auth_param ntlm program c:/squid/libexec/mswin_ntlm_auth.exe
#auth_param ntlm children 5
#auth_param ntlm keep_alive on

 

acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network (some of my computers are in this range)
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network (don't use this range, but I will make a DMZ for the servers with it)
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network (NORMAL range for users)

 

# catch certain bugs (for example with persistent connections) and possibly
# buffer-overflow or denial-of-service attacks.
request_header_max_size 20 KB
reply_header_max_size 20 KB

#Limit upload to 2M and download to 10M (trying to stop users from uploading
#big files to email sites and fb and downloading big files, as I only have
#6mbps and 1mbps down/up bandwidth)
request_body_max_size 2048 KB
reply_body_max_size 10485760 deny localnet

 

# compressed (I modified this part: instead of 0 they had 10080, and instead
# of 10080 they had 99; those times are too big, files could stay forever
# fresh! inside the cache)

refresh_pattern -i \.gz$ 0 90% 10080
refresh_pattern -i \.cab$ 0 90% 10080
refresh_pattern -i \.bzip2$ 0 90% 10080
refresh_pattern -i \.bz2$ 0 90% 10080
refresh_pattern -i \.gz2$ 0 90% 10080
refresh_pattern -i \.tgz$ 0 90% 10080
refresh_pattern -i \.tar.gz$ 0 90% 10080
refresh_pattern -i \.zip$ 0 90% 10080
refresh_pattern -i \.rar$ 0 90% 10080
refresh_pattern -i \.tar$ 0 90% 10080
refresh_pattern -i \.ace$ 0 90% 10080
refresh_pattern -i \.7z$ 0 90% 10080

 

# documents
refresh_pattern -i \.xls$ 0 90% 10080
refresh_pattern -i \.doc$ 0 90% 10080
refresh_pattern -i \.xlsx$ 0 90% 10080
refresh_pattern -i \.docx$ 0 90% 10080
refresh_pattern -i \.pdf$ 0 90% 10080
refresh_pattern -i \.ppt$ 0 90% 10080
refresh_pattern -i \.pptx$ 0 90% 10080
refresh_pattern -i \.rtf\?$ 0 90% 10080

 

# multimedia
refresh_pattern -i \.mid$ 0 90% 10080
refresh_pattern -i \.wav$ 0 90% 10080
refresh_pattern -i \.viv$ 0 90% 10080
refresh_pattern -i \.mpg$ 0 90% 10080
refresh_pattern -i \.mov$ 0 90% 10080
refresh_pattern -i \.avi$ 0 90% 10080
refresh_pattern -i \.asf$ 0 90% 10080
refresh_pattern -i \.qt$ 0 90% 10080
refresh_pattern -i \.rm$ 0 90% 10080
refresh_pattern -i \.rmvb$ 0 90% 10080
refresh_pattern -i \.mpeg$ 0 90% 10080
refresh_pattern -i \.wmp$ 0 90% 10080
refresh_pattern -i \.3gp$ 0 90% 10080
refresh_pattern -i \.mp3$ 0 90% 10080
refresh_pattern -i \.mp4$ 0 90% 10080

 

# images
refresh_pattern -i \.gif$ 0 90% 10080
refresh_pattern -i \.jpg$ 0 90% 10080
refresh_pattern -i \.png$ 0 90% 10080
refresh_pattern -i \.jpeg$ 0 90% 10080
refresh_pattern -i \.bmp$ 0 90% 10080
refresh_pattern -i \.psd$ 0 90% 10080
refresh_pattern -i \.ad$ 0 90% 10080
refresh_pattern -i \.gif\?$ 0 90% 10080
refresh_pattern -i \.jpg\?$ 0 90% 10080
refresh_pattern -i \.png\?$ 0 90% 10080
refresh_pattern -i \.jpeg\?$ 0 90% 10080
refresh_pattern -i \.psd\?$ 0 90% 10080

 

Re: [squid-users] squid 3.5.5 issue after restart the system

2015-07-02 Thread Mohammad Shakir
Ok, we will try it today and post our results.



On Thursday, July 2, 2015 8:19 PM, Amos Jeffries squ...@treenet.co.nz wrote:
On 3/07/2015 3:09 a.m., Mohammad Shakir wrote:
 We are running a single instance of squid. After 2 days of running squid we 
 got the same error.


Please try the latest snapshot of 3.5. r13857 or later.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 3.5.5 issue after restart the system

2015-07-02 Thread Mohammad Shakir
We are running a single instance of squid. After 2 days of running squid we 
got the same error.

[root@cache ~]# cat /var/log/squid/cache.log 

2015/07/02 20:01:28 kid1| DiskThreadsDiskFile::openDone: (2) No such file or 
directory 
2015/07/02 20:01:28 kid1|   /cache01/cache02/0C/F4/000CF437 
2015/07/02 20:01:29 kid1| DiskThreadsDiskFile::openDone: (2) No such file or 
directory 
2015/07/02 20:01:29 kid1|   /cache01/cache02/13/00/00130051 
2015/07/02 20:01:29 kid1| WARNING: 1 swapin MD5 mismatches 
2015/07/02 20:01:29 kid1| Could not parse headers from on disk object 
2015/07/02 20:01:29 kid1| BUG 3279: HTTP reply without Date: 
2015/07/02 20:01:29 kid1| StoreEntry->key: 388240BC608374033E480EABC453432A 
2015/07/02 20:01:29 kid1| StoreEntry->next: 0x20becba8 
2015/07/02 20:01:29 kid1| StoreEntry->mem_obj: 0x238c2140 
2015/07/02 20:01:29 kid1| StoreEntry->timestamp: -1 
2015/07/02 20:01:29 kid1| StoreEntry->lastref: 1435849289 
2015/07/02 20:01:29 kid1| StoreEntry->expires: -1 
2015/07/02 20:01:29 kid1| StoreEntry->lastmod: -1 
2015/07/02 20:01:29 kid1| StoreEntry->swap_file_sz: 0 
2015/07/02 20:01:29 kid1| StoreEntry->refcount: 1 
2015/07/02 20:01:29 kid1| StoreEntry->flags: PRIVATE,FWD_HDR_WAIT,VALIDATED 
2015/07/02 20:01:29 kid1| StoreEntry->swap_dirn: -1 
2015/07/02 20:01:29 kid1| StoreEntry->swap_filen: -1 
2015/07/02 20:01:29 kid1| StoreEntry->lock_count: 2 
2015/07/02 20:01:29 kid1| StoreEntry->mem_status: 0 
2015/07/02 20:01:29 kid1| StoreEntry->ping_status: 2 
2015/07/02 20:01:29 kid1| StoreEntry->store_status: 1 
2015/07/02 20:01:29 kid1| StoreEntry->swap_status: 0 
2015/07/02 20:01:29 kid1| assertion failed: store.cc:1885: isEmpty() 
2015/07/02 20:01:41 kid1| Current Directory is /root 
2015/07/02 20:01:41 kid1| Starting Squid Cache version 3.5.5 for 
x86_64-redhat-linux-gnu... 
2015/07/02 20:01:41 kid1| Service Name: squid 
2015/07/02 20:01:41 kid1| Process ID 12717 
2015/07/02 20:01:41 kid1| Process Roles: worker 
2015/07/02 20:01:41 kid1| With 65536 file descriptors available 
2015/07/02 20:01:41 kid1| Initializing IP Cache... 
2015/07/02 20:01:41 kid1| DNS Socket created at [::], FD 6 
2015/07/02 20:01:41 kid1| DNS Socket created at 0.0.0.0, FD 8 
2015/07/02 20:01:41 kid1| Adding nameserver 192.167.1.1 from squid.conf 
2015/07/02 20:01:41 kid1| Adding nameserver 192.167.1.1 from squid.conf 
2015/07/02 20:01:41 kid1| helperOpenServers: Starting 10/40 'storeid.pl' 
processes 
2015/07/02 20:01:41 kid1| Logfile: opening log /var/log/squid/access.log 
2015/07/02 20:01:41 kid1| WARNING: log name now starts with a module name. Use 
'stdio:/var/log/squid/access.log' 
2015/07/02 20:01:41 kid1| Store logging disabled 
2015/07/02 20:01:41 kid1| Swap maxSize 143360 + 131072 KB, estimated 
4779103 objects 
2015/07/02 20:01:41 kid1| Target number of buckets: 238955 
2015/07/02 20:01:41 kid1| Using 262144 Store buckets 
2015/07/02 20:01:41 kid1| Max Mem  size: 131072 KB 
2015/07/02 20:01:41 kid1| Max Swap size: 143360 KB 
2015/07/02 20:01:41 kid1| Rebuilding storage in /cache01/cache01 (dirty log) 
2015/07/02 20:01:41 kid1| Rebuilding storage in /cache01/cache02 (dirty log) 
2015/07/02 20:01:41 kid1| Rebuilding storage in /cache02/cache01 (dirty log) 
2015/07/02 20:01:41 kid1| Rebuilding storage in /cache02/cache02 (dirty log) 
2015/07/02 20:01:41 kid1| Using Least Load store dir selection 
2015/07/02 20:01:41 kid1| Current Directory is /root 
2015/07/02 20:01:41 kid1| Finished loading MIME types and icons. 
2015/07/02 20:01:41 kid1| Sending SNMP messages from [::]:3401 
2015/07/02 20:01:41 kid1| Squid plugin modules loaded: 0 
2015/07/02 20:01:41 kid1| Adaptation support is off. 
2015/07/02 20:01:41 kid1| Accepting HTTP Socket connections at local=[::]:3129 
remote=[::] FD 38 flags=9 
2015/07/02 20:01:41 kid1| Accepting NAT intercepted HTTP Socket connections at 
local=0.0.0.0:8080 remote=[::] FD 39 flags=41 
2015/07/02 20:01:41 kid1| Accepting SNMP messages on [::]:3401 
2015/07/02 20:01:41 kid1| Store rebuilding is 3.79% complete 
2015/07/02 20:01:43 kid1| Done reading /cache02/cache01 swaplog (105665 
entries) 
2015/07/02 20:01:43 kid1| Done reading /cache01/cache01 swaplog (105579 
entries) 
2015/07/02 20:01:56 kid1| Store rebuilding is 77.09% complete 
2015/07/02 20:02:01 kid1| Done reading /cache02/cache02 swaplog (1463112 
entries) 
2015/07/02 20:02:01 kid1| Done reading /cache01/cache02 swaplog (1465014 
entries) 
2015/07/02 20:02:01 kid1| Finished rebuilding storage from disk. 
2015/07/02 20:02:01 kid1|   3139360 Entries scanned 
2015/07/02 20:02:01 kid1| 0 Invalid entries. 
2015/07/02 20:02:01 kid1| 0 With invalid flags. 
2015/07/02 20:02:01 kid1|   3139356 Objects loaded. 
2015/07/02 20:02:01 kid1| 0 Objects expired. 
2015/07/02 20:02:01 kid1| 0 Objects cancelled. 
2015/07/02 20:02:01 kid1| 2 Duplicate URLs purged. 
2015/07/02 20:02:01 kid1| 2 Swapfile clashes avoided. 
2015/07/02 20:02:01 kid1|   Took 19.76 seconds (158911.29 objects/sec). 
2015/07/02 

Re: [squid-users] squid 3.5.5 issue after restart the system

2015-07-02 Thread Amos Jeffries
On 3/07/2015 3:09 a.m., Mohammad Shakir wrote:
 We are running a single instance of squid. After 2 days of running squid we 
 got the same error.


Please try the latest snapshot of 3.5. r13857 or later.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] reply_body_max_size question

2015-07-02 Thread Danny
 It works as documented at
 http://www.squid-cache.org/Doc/config/reply_body_max_size/.  If that
 does not fit your criteria then it's not what you need.

I am aware of that; I was just a little unsure how to split the different 
download sizes amongst all the different users.
 
  http_access allow localnet
 
 NOTE: No http_access ACLs controlling 10.0.0.0/24 have any effect below
 this one that allows them all access to use the proxy.
 
  http_access allow localnet_dad_laptop
  http_access allow localnet_dad_smartphone
  http_access allow localnet_mom_laptop
  http_access allow localnet_mom_smartphone
  http_access allow localnet_son_laptop
  http_access allow localnet_son_smartphone
  http_access allow localnet_son_tablet

Thank you ... did not know that ... I was under the impression every user, 
i.e. device, needed to be granted http_access ...

 By applying ACLs for the kids on the reply_body_max_size directive lines
 setting the sizes to use for them. Like so:
   reply_body_max_size 50 KB localnet_son_smartphone

O.k ... so currently I have:
reply_body_max_size 20 MB

If I combine your suggestion and Augusto Gabanzo's (who suggested something a 
little different) can I then do something like this:
##
reply_body_max_size 0 MB !localnet_son_laptop !localnet_son_smartphone 
!localnet_son_tablet
reply_body_max_size 5 MB localnet_son_laptop localnet_son_smartphone 
localnet_son_tablet (// Or must each device get its own limit?)
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] acl for redirect

2015-07-02 Thread Mike
We have a DNS guru on staff and editing the resolv.conf in this manner 
does not work (we tested it to make sure). Looks like we will use an 
older desktop to set up a basic DNS server and then point squid at it for 
the redirect.




Mike


On 7/2/2015 2:06 AM, Stuart Henderson wrote:

On 2015-07-01, Mike mcsn...@afo.net wrote:

This is a proxy server, not a DNS server, and does not connect to a DNS
server that we have any control over... The primary/secondary DNS is
handled through the primary host (Cox) for all of our servers so we do
not want to alter it for all several hundred servers, just these 4
(maybe 6).
I was originally thinking of modifying the resolv.conf but again that is
internal DNS used by the server itself. The users will have their own
DNS settings causing it to either ignore our settings, or go right back to
the "Website cannot be displayed" errors due to the DNS loop.

resolv.conf would work, or you can use dns_nameservers in squid.conf and
point just squid (if you want) to a private resolver configured to hand
out the forcesafesearch address.

When a proxy is used, the client defers name resolution to the proxy, you
don't need to change DNS on client machines to do this.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] New to Squid, Foward proxy problems with domain blocks.

2015-07-02 Thread Amos Jeffries
On 3/07/2015 3:29 a.m., Augusto Gabanzo wrote:
 Hello, as the subject says I'm new. 
 
  
 
 Been reading a lot and some examples, and I do have a weird problem where I
 can't block some domains. First and foremost, I'm using the squid proxy for
 windows version 2.7.8, as that's the only one for windows that works for me;
 the 3.x versions always deny requests from clients even with the default
 conf. I've been testing all this in a production environment so ... help
 me!! please or I will get killed soon :D.
 
  
 
 my conf for 2.7.8 is (I modified one that comes with proxy 3.1):

Don't. 2.7 contains no built-in defaults, whereas 3.x does. The .conf file
contents need to be very different.


 pid_filename c:/Squid/var/logs/squid.pid (this part here I don't know what
 its use is, as I can't find info about it on the net)

http://www.squid-cache.org/Doc/config/pid_filename/

The PID is used for sending signals to the Squid process/service.

 
 #Limit upload to 2M and download to 10M (trying to stop users from uploading
 big files to email sites and fb and download big files  as i only have 6mbps
 and 1mbps down/up bandwidth)
 
 request_body_max_size 2048 KB
 
 reply_body_max_size 10485760 deny localnet
 
  
 
 # compressed (I modified this part: instead of 0 they had 10080, and instead
 of 10080 they had 99; those times are too big, files could stay forever
 fresh! inside the cache)

"forever" in HTTP is no more than 68 years. In 2.7 that's 1 year.

And no, these lines only affect objects which are completely lacking
Cache-Control values. Most traffic has such controls and Squid obeys them.

Also, each refresh_pattern line has to be matched against a request
individually. Repeating many lines causes a lot of work to be done for
each request. Better to combine the patterns manually.
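
For example, a single combined line (a sketch; adjust the extension list to
taste) replaces a dozen of the lines above:

  refresh_pattern -i \.(gz|cab|bzip2|bz2|tgz|zip|rar|tar|ace|7z)$ 0 90% 10080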


 
 acl fullvideo src c:/squid/etc/ipfullvideo.sq  # here is a file with ips
 allowed to see youtube and facebook videos , media streaming 
 
 acl bad_url url_regex -i c:/squid/etc/bad-sites.sq # .facebook.com
 .twitter.com rule to block those sites for users inside ipbloqueada

So why is it a slow regex and not a fast dstdomain ?

 
 acl ipbloqueada src 192.168.1.117/32 192.168.1.179/32 192.168.1.170/32
 192.168.1.15/32  # ips of 3 users that shouldn't be accessing fb and twitter.
 
 acl bad_ext urlpath_regex -i c:/squid/etc/extensiones.sq # rule to block
 some file extensions like .avi$, .mpg$ etc; stop downloads from them even if
 they are smaller than 10MB (this doesn't WORK!)
 

The regex syntax mentioned assumes the URL ends with the file extension.
That is fairly uncommon. On most download sites these days the URL ends in
some dynamic script extension like .php or .asp, with the Content-Type and
Content-Disposition headers delivering the filename details.
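
A header-based alternative sketch (the video_types name is an illustration;
rep_mime_type matches the reply's Content-Type header):

  acl video_types rep_mime_type -i ^video/
  http_reply_access deny video_types !fullvideo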


 
 http_access allow localnet   #let the network use the proxy

 http_access allow localhost  #let the proxy server use itself ?? (O_o I
 don't quite get this part.)


It lets other software on the Squid server use it. Yes, that includes the
proxy looping traffic back at itself, but the Via header protects against
that being a problem.


 
 http_access allow manager localhost
 
  
 
 http_access deny bad_url ipbloqueada   #here I want all the urls
 in BAD_URL from the ips in IPBLOQUEADA to be denied; it used to work when I
 started, but now it doesn't; I will show a sample of the file at the end

If I'm reading that comment on the ipbloqueada definition correctly, you are
assuming that Facebook, Twitter etc are still using plaintext HTTP
through the proxy. They don't. These days they use TLS with SPDY or
HTTP/2 or QUIC or HTTPS.


 
 http_access deny bad_ext   #block reading of files with those extensions.

 deny_info TCP_RESET bad_ext   #send a tcp_reset so they don't know the
 proxy blocked them

 http_reply_access deny media !fullvideo   # here I try to deny access to
 media to all but those inside fullvideo (doesn't quite work either: youtube
 loads and works :D) some other streaming is blocked well
 

YT is HTTPS not HTTP now.


 
 # And finally deny all other access to this proxy
 
 http_access deny all
 
  
 
 #always_direct allow all   # I feel this part is to let squidguard work; I
 removed it cuz it blocked youtube and many other sites, I bet that was
 because of the ads.
 

always_direct has no effect unless the cache_peer directive is used, in
which case it makes the cache_peer not be used for that traffic.
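
For example (a sketch; the peer address and port are hypothetical):

  cache_peer 127.0.0.1 parent 8888 0 no-query
  always_direct allow all   # go direct, bypassing that peer, for all traffic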

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Setting logfile_daemon

2015-07-02 Thread Byron Mackay
I'm running Squid 3.3.8 on Ubuntu inside a Docker container and I want to
add a custom logger. I want to keep what the current logger is doing and
append a few things, so I simply copied the default logger's source (
http://www.squid-cache.org/Doc/code/log__file__daemon_8cc_source.html) and
put it into another file. Then in my config I set that file to be the
daemon using logfile_daemon and setting the path to the file. When I spin
up the server, I get the following:

2015/07/02 20:24:06| logfileHandleWrite: daemon:/var/log/squid3/access.log: error writing ((32) Broken pipe)
2015/07/02 20:24:06| Closing HTTP port [::]:3128
2015/07/02 20:24:06| storeDirWriteCleanLogs: Starting...
2015/07/02 20:24:06| Finished. Wrote 0 entries.
2015/07/02 20:24:06| Took 0.00 seconds ( 0.00 entries/sec).
FATAL: I don't handle this error well!
2015/07/02 20:24:06| Closing Pinger socket on FD 18

I've tried a number of things (including making a skeleton version with
just the methods and no content) and looked all over online and in the
Squid 3.1 beginner book (I know, out of date, but still a good reference),
but I haven't found a thing. Am I missing something obvious here? I feel
like it doesn't like something inside my logger, but I can't be sure based
on the error. Attached is my config file. The line of interest is
logfile_daemon /usr/lib/squid3/my_logger.
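
For what it's worth, the "Broken pipe" usually means the daemon process
exited (for example, it failed to open its output file) before Squid wrote
to it. A minimal sketch of the daemon side, based on the one-letter command
protocol the stock log_file_daemon.cc handles (L = log line, R = rotate,
O = reopen, T = truncate, F = flush, r/b = rotation/buffering hints); this
is an illustration, not the stock code:

#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char **argv) {
    if (argc < 2) return 1;                 // Squid passes the log file path
    std::ofstream log(argv[1], std::ios::app);
    if (!log) return 1;                     // exiting here => Broken pipe in Squid
    std::string line;
    while (std::getline(std::cin, line)) {  // one command per line on stdin
        if (line.empty()) continue;
        switch (line[0]) {
        case 'L':                           // append one access.log line
            log << line.substr(1) << '\n';
            break;
        case 'R':                           // rotate: simplest form, just reopen
        case 'O':                           // reopen
            log.close();
            log.open(argv[1], std::ios::app);
            break;
        case 'F':                           // flush buffered data
            log.flush();
            break;
        default:                            // 'T', 'r', 'b' ignored in this sketch
            break;
        }
    }
    return 0;
}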


squid.conf
Description: Binary data
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users