Re: [squid-users] Re: Should I see a massive slowdown when chaining squid = privoxy

2011-06-05 Thread Amos Jeffries

On 05/06/11 11:55, Harry Putnam wrote:

Amos Jeffries <squ...@treenet.co.nz> writes:

[...]

Harry wrote (summarized -ed hp):
Adding squid and privoxy into a proxy setup seems to really really slow
down my browsing as compared to browsing with a direct connection. (no proxy)

And asks if this is normal.

[...]


Speed gain/loss/other depends on what you are moving from.

MORE IMPORTANTLY: how you define slow!

  Keep in mind that you also now have around 2x the processing going on
with 2 proxies. The difference added by Squid can be at least
10ms. Some people call that a noticeable slowdown; some don't care about
anything less than a second.


I'm guessing it's more than seconds slower, but I'm not really sure how
to gauge the difference reliably, so as not to give flawed information
here, or report a difference that is really due to caching or something.

Can you suggest a method to arrive at a fairly good comparison?


The developer tools in any WebKit browser (Safari, Chrome, etc.) or 
Firebug in Gecko browsers (Firefox, Iceweasel) show a wealth of timing 
information on the network analysis panel. One forced load with the proxy 
and one without will tell you pretty clearly what the timing and speed 
difference is, and whether it's page processing or transfer lag.
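
If you want numbers rather than a feel, a quick command-line cross-check 
works too. A sketch only: the proxy address 192.168.1.1:3128 and the URL 
are placeholders, and curl's -w simply prints the total transfer time.

  # Time the same URL once direct and once through the proxy;
  # -o /dev/null discards the body, -w prints the total transfer time.
  curl -s -o /dev/null -w 'direct:  %{time_total}s\n' http://example.com/
  curl -s -o /dev/null -w 'proxied: %{time_total}s\n' \
       -x http://192.168.1.1:3128 http://example.com/

Repeat each a few times and ignore the first (cache-warming) run so DNS 
and cache effects don't skew the comparison.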





  * 3.1 is about 10-20% slower than the latest 2.7 on the same
config, with the older versions of 3.1 being on the slower end of that
scale as we work to optimize and fix things throughout the series.


Wouldn't 20% be noticeable?  So you're saying to back down a few
versions for now?


Possibly. It varies a lot depending on where and how you run Squid. The 
"slow" here is in CPU cycles: if your CPU is maxed out you see it as 
real seconds; if you have spare CPU it should not be noticeably different.





  * Moving to Squid from a non-proxy setup can be a major drop,
depending on the browser age. The browsers themselves drop the
parallel fetch rate from hundreds down to under 10. Browser tweaking
is the only way to avoid this.


I'm using firefox 4.X on all home lan machines (that have a gui).

Can you recommend some documentation that might help with what you
called `browser tweaking'? I've never done anything special to a
browser other than adding or removing add-on tools.


  * Moving from browser-privoxy to a browser-squid-privoxy setup you
should have seen only a small drop. Some possibilities are Squid using
slow disks (maybe RAID), or Squid box is swapping, or the bandwidth is
being routed down the same physical links to/from Squid.


No RAID, and the server hardware is an Intel(R) Celeron(R) (P4-class) CPU
at 3.06GHz with 2GB RAM, running on oldish IDE discs.

----   ---=---   -  

I probably should have included some questions about squid.conf and
privoxy/config in the first post.  Maybe there are things in there
that are not good to leave at their defaults.

I realize that both tools have several config files.  I've left all
but the two main ones in default state and have posted my working
/etc/squid/squid.conf and /etc/privoxy/config

There is a lot of debris in the comments, but I thought it might still
be useful to leave it all in for jogging memories... But I also included
a way to prune the comments, by changing the name of the cgi script
at the end of the URL from `disp.cgi' to `strp.cgi'.

Any coaching would be well appreciated.



Looks good. :)


The audit results so far:

 * patch is reversed. Please diff the files the other way around on the 
next one :)


 * _SQUID_INLINE_ static is not as portable as we would like. So we 
have to avoid it for now.


 * instead of debugs() level 0 or 1 we have the macros DBG_CRITICAL and 
DBG_IMPORTANT to indicate how bad the problem is. Most of what you have 
at level 28,0 only needs 28,DBG_IMPORTANT.


 * rather than naming the function all over the place, please use the 
macro HERE to start the debugs text at debug levels 2 and higher.
  * also, stuff important enough for levels 0 and 1 is usually user 
visible and should describe a problem+solution clearly enough not 
to need the internal function name.


compileRE():
 * the bad pattern error message seems to work better like this:

  debugs(28, DBG_IMPORTANT, "WARNING: Skipping invalid regex in " <<
 cfg_filename << " line " << config_lineno << ": '" <<
 config_input_line << "': " << errbuf);
  debugs(28, 7, HERE << "compiled regex:'" << RE << "': " << errbuf);


compileOptimisedREs():
 * the if (RElen > BUFSIZ-1) case is done by aclParseRegexList(). It is 
redundant here.


 * you follow if (RElen + largeREindex + 3 < BUFSIZ-1) with four ++ 
operations and assignments. Can the fourth overflow the buffer?


 * at the end of the if-elif-else sequence you state "do the loop again 
to add the RE to largeRE" then "continue;".
  What does that comment mean? That the compiled pattern is used as a prefix 
for further appended patterns?



compileUnoptimisedREs():
 * the if (RElen > BUFSIZ-1) case is done by aclParseRegexList(). It is 
redundant here.



aclParseRegexList():
 * 

Re: [squid-users] lots of UDP connections

2011-06-05 Thread Amos Jeffries

On 05/06/11 16:55, Bal Krishna Adhikari wrote:

On 06/04/2011 12:59 PM, Amos Jeffries wrote:

Bal Krishna Adhikari 6/3/2011 6:13 AM


Hello,

I found a lot of UDP connections coming to my proxy servers.
I cannot find the cause of such one-way traffic to my servers.
A sample of the UDP traffic is:

14:00:07.506612 IP 41.209.69.146.10027 > x.x.x.x.65453: UDP, length 30
14:00:07.518118 IP 121.218.37.254.41597 > x.x.x.x.64338: UDP, length 30
14:00:07.572559 IP 85.224.143.193.29978 > x.x.x.x.62782: UDP, length 30
14:00:07.596554 IP 183.87.200.42.36895 > x.x.x.x.15786: UDP, length 30
14:00:07.642820 IP 180.215.37.96.49977 > x.x.x.x.49458: UDP, length 30
14:00:07.653055 IP 117.195.138.64.24314 > x.x.x.x.44985: UDP, length 33
14:00:07.739963 IP 82.31.238.101.50534 > x.x.x.x.52750: UDP, length 30
14:00:07.783452 IP 86.83.107.196.41870 > x.x.x.x.62782: UDP, length 30
14:00:07.809677 IP 94.246.23.15.59003 > x.x.x.x.27462: UDP, length 30
14:00:07.837415 IP 75.156.164.147.49398 > x.x.x.x.34847: UDP, length 30
14:00:07.841668 IP 82.8.212.242.25931 > x.x.x.x.24869: UDP, length 30
14:00:07.841697 IP 89.136.112.99.42182 > x.x.x.x.52750: UDP, length 30
14:00:07.854215 IP 99.191.156.208.18162 > x.x.x.x.64338: UDP, length 30
14:00:07.885386 IP 88.147.72.252.60224 > x.x.x.x.19151: UDP, length 30
14:00:07.960841 IP 68.169.185.192.63480 > x.x.x.x.58638: UDP, length 30
14:00:08.071763 IP 79.113.242.42.31998 > x.x.x.x.33995: UDP, length 30
14:00:08.078260 IP 94.202.49.109.61957 > x.x.x.x.26071: UDP, length 67
14:00:08.101495 IP 82.169.68.179.19605 > x.x.x.x.45682: UDP, length 30
14:00:08.113238 IP 86.99.42.7.15086 > x.x.x.x.11706: UDP, length 67
14:00:08.127979 IP 62.195.70.253.45266 > x.x.x.x.37050: UDP, length 30
14:00:08.163992 IP 2.82.207.195.38343 > x.x.x.x.26680: UDP, length 30
14:00:08.183453 IP 68.81.206.57.25923 > x.x.x.x.18378: UDP, length 30
14:00:08.237689 IP 108.120.241.254.47249 > x.x.x.x.39433: UDP, length 30
14:00:08.256906 IP 99.161.157.254.41719 > x.x.x.x.26680: UDP, length 30
14:00:08.291885 IP 121.136.175.247.12577 > x.x.x.x.16485: UDP, length 67
14:00:08.315427 IP 121.144.158.120.30845 > x.x.x.x.61415: UDP, length 30
14:00:08.317404 IP 115.117.219.18.25817 > x.x.x.x.59936: UDP, length 30

Does anyone have any idea whether the traffic is genuine or some kind of attack?
x.x.x.x is my proxy server.

--- Bal Krishna



On 04/06/11 01:16, Chad Naugle wrote:
 Check the hostname of these IP addresses. They could be DNS replies,
 using random ports for source/destinations. Squid can generate tons of
 DNS traffic.


I don't think it's genuine Squid traffic. DNS, ICP and HTCP all use a
fixed well-known port at one end and a rarely changing port at the other.

It could be anything else on the box though.

There are a few CVE attacks this could be, two using DNS and one HTCP.
If you have a Squid 2.7.STABLE8+, 3.0.STABLE23+ or 3.1.1+ you are safe
from those. They are just annoying.

If you have a Squid-3.1+ with an IPv6 address publicly advertised this
could be a sign of v6 connection attempts. Several IP tunnel protocols
involve UDP handshakes.

Amos


I'm currently using 2.7.STABLE9.
And the connections seem to have increased compared to earlier.
Would blocking UDP other than DNS and SNMP from outside solve the
problem?


We can't answer that. It may not be a problem. You need to find out what 
it actually is. Blocking it will stop it doing anything, but until you 
know what it is that may just be creating a different problem.
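
If you want to look closer before deciding, one option is to capture a few 
full payloads and eyeball them. A sketch only; the interface name eth0 and 
the excluded ports are placeholders for your setup:

  # Show UDP packets other than DNS/SNMP with hex+ASCII payloads,
  # which often makes the protocol recognisable by eye.
  tcpdump -n -i eth0 -s 0 -X 'udp and not port 53 and not port 161'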


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


[squid-users] dst vs dstdomain speed

2011-06-05 Thread E.S. Rosenberg
Hi,
Is dst easier/faster for squid than dstdomain to handle?
I'm asking this because I see that a lot of the pre-made black/white lists
seem to be of the dst type, while it seems to me that dstdomain is more
effective and easier to manage, since you don't need to add an entry
for every single host on a domain you want to block/allow; you just add
.domain.tld to the list.
Also, as far as I understand, when a user tries to use an IP instead of
a domain name, if the IP is known to match a domain in a list, then
whatever rule was applied to said list will be applied to the IP even
though it is not mentioned specifically in the list?
Thanks and regards,
Eli


Re: [squid-users] dst vs dstdomain speed

2011-06-05 Thread Amos Jeffries

On 05/06/11 22:56, E.S. Rosenberg wrote:

Hi,
Is dst easier/faster for squid than dstdomain to handle?
I'm asking this because I see that a lot of the pre-made black/white lists
seem to be of the dst type, while it seems to me that dstdomain is more
effective and easier to manage, since you don't need to add an entry
for every single host on a domain you want to block/allow; you just add
.domain.tld to the list.
Also, as far as I understand, when a user tries to use an IP instead of
a domain name, if the IP is known to match a domain in a list, then
whatever rule was applied to said list will be applied to the IP even
though it is not mentioned specifically in the list?


dstdomain is a bit dynamic. It is fast for domains and slow for raw-IPs. 
It does a plain text match on the value the client gave (whether a domain 
FQDN or a textual IP representation). If there was a raw-IP AND it is 
working in a slow access list, it will look up and try to match on the rDNS.


dst must always look up the IP, so it is always in the slow category. 
On raw-IP requests it can be the faster one.
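
To illustrate the difference, a minimal sketch (the ACL names, domain and 
network below are placeholders, not from your lists):

  # dstdomain: plain-text match on what the client asked for;
  # one .example.com entry covers every host under that domain.
  acl bad_domains dstdomain .example.com

  # dst: the request is always resolved to an IP before matching.
  acl bad_nets dst 192.0.2.0/24

  http_access deny bad_domains
  http_access deny bad_nets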


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


[squid-users] About reply_body_max_size

2011-06-05 Thread Odhiambo WASHINGTON

Reading squid.conf.documented for 3.1.9, I see this portion:

TAG: reply_body_max_size size [acl acl...]
#   This option specifies the maximum size of a reply body. It can be
#   used to prevent users from downloading very large files, such as
#   MP3's and movies. When the reply headers are received, the
#   reply_body_max_size lines are processed, and the first line where
#   all (if any) listed ACLs are true is used as the maximum body size
#   for this reply.
#
#   This size is checked twice. First when we get the reply headers,
#   we check the content-length value.  If the content length value exists
#   and is larger than the allowed size, the request is denied and the
#   user receives an error message that says the request or reply
#   is too large. If there is no content-length, and the reply
#   size exceeds this limit, the client's connection is just closed
#   and they will receive a partial reply.
#
#   WARNING: downstream caches probably can not detect a partial reply
#   if there is no content-length header, so they will cache
#   partial responses and give them out as hits.  You should NOT
#   use this option if you have downstream caches.
#
#   WARNING: A maximum size smaller than the size of squid's error messages
#   will cause an infinite loop and crash squid. Ensure that the smallest
#   non-zero value you use is greater than the maximum header size plus
#   the size of your largest error page.
#
#   If you set this parameter none (the default), there will be
#   no limit imposed.
#
#   Configuration Format is:
#   reply_body_max_size SIZE UNITS [acl ...]
#   ie.
#   reply_body_max_size 10 MB

Now, having the following line causes squid (3.1.9) to choke:

reply_body_max_size 0 KB deny all


squid -k parse gives this:

2011/06/05 21:03:05| aclParseAclList: ACL name 'KB' not found.
FATAL: Bungled squid.conf line 60: reply_body_max_size 0 KB deny all
Squid Cache (Version 3.1.9): Terminated abnormally.
CPU Usage: 0.007 seconds = 0.007 user + 0.000 sys
Maximum Resident Size: 4376 KB
Page faults with physical i/o: 0


Where is the problem?


--
Best regards,
Odhiambo WASHINGTON,
Nairobi,KE



[squid-users] Squid TProxy Problem

2011-06-05 Thread Ali Majdzadeh
Hello All,
I have setup the following configuration:
Squid (3.1.12) (--enable-linux-netfilter passed as the one and only
configure option)
Kernel (2.6.38.3)
iptables (1.4.11)

I have added the following two directives in squid.conf:
http_port 3128
http_port 3129 tproxy

Also, I have configured iptables with the following rules:
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3129

Everything works as expected; I mean, the users can surf the web and
the proxy server is transparent. The problem is that actually there is
no caching. I mean, both cache.log and access.log files are empty. On
the other hand, if I manually set the proxy configuration in clients'
browsers (the IP address of the squid server and port number 3128)
everything is OK; the log files grow and objects are
cached.

Has anyone faced the same issue?

Warm Regards,
Ali Majdzadeh Kohbanani


[squid-users] need a simple transparent caching conf

2011-06-05 Thread MrNicholsB

Squid is caching content, but it is NOT serving cache to my clients and
frankly it's driving me nuts. I don't need a 101 on squid, I just need a
basic conf. I wish the devs would include a basic transparent cache
proxy conf with squid to save noobs like me the trouble. My clients are
MANUALLY aimed at the proxy at port 3128, they can surf just fine, so NAT is 
NOT required on the box, I just need a conf that actually WORKS. This is 
getting absurd, I don't understand why it's not serving up cached content. I 
download ANYTHING, you know, 13mb exe files from a site, then go download the 
same file on another pc and BAM!! fresh content NOT served from the cache, 
wtf am I doing wrong here!?!?!?!


I've tried several confs and they all FAIL to actually serve up cache; below 
is my latest attempt.


http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
access_log /var/log/squid3/access.log squid
hosts_file /etc/hosts
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563 # https, snews
acl SSL_ports port 873 # rsync
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
acl lan src 10.10.1.0/24
http_access allow localhost
http_access allow lan
http_access deny all
http_reply_access allow all
icp_access allow all
visible_hostname katmai.local
always_direct allow all
coredump_dir /var/spool/squid3
cache_dir ufs /var/spool/squid3 2 32 256
maximum_object_size 200 MB
maximum_object_size_in_memory 50 MB



Re: [squid-users] About reply_body_max_size

2011-06-05 Thread Amos Jeffries

On 06/06/11 06:24, Odhiambo WASHINGTON wrote:

Reading squid.conf.documented for 3.1.9, I see this portion:

TAG: reply_body_max_size size [acl acl...]
# This option specifies the maximum size of a reply body. It can be
# used to prevent users from downloading very large files, such as
# MP3's and movies. When the reply headers are received, the
# reply_body_max_size lines are processed, and the first line where
# all (if any) listed ACLs are true is used as the maximum body size
# for this reply.
#
# This size is checked twice. First when we get the reply headers,
# we check the content-length value. If the content length value exists
# and is larger than the allowed size, the request is denied and the
# user receives an error message that says the request or reply
# is too large. If there is no content-length, and the reply
# size exceeds this limit, the client's connection is just closed
# and they will receive a partial reply.
#
# WARNING: downstream caches probably can not detect a partial reply
# if there is no content-length header, so they will cache
# partial responses and give them out as hits. You should NOT
# use this option if you have downstream caches.
#
# WARNING: A maximum size smaller than the size of squid's error messages
# will cause an infinite loop and crash squid. Ensure that the smallest
# non-zero value you use is greater than the maximum header size plus
# the size of your largest error page.
#
# If you set this parameter none (the default), there will be
# no limit imposed.
#
# Configuration Format is:
# reply_body_max_size SIZE UNITS [acl ...]
# ie.
# reply_body_max_size 10 MB

Now, having the following line causes squid (3.1.9) to choke:

reply_body_max_size 0 KB deny all


squid -k parse gives this:

2011/06/05 21:03:05| aclParseAclList: ACL name 'KB' not found.
FATAL: Bungled squid.conf line 60: reply_body_max_size 0 KB deny all
Squid Cache (Version 3.1.9): Terminated abnormally.
CPU Usage: 0.007 seconds = 0.007 user + 0.000 sys
Maximum Resident Size: 4376 KB
Page faults with physical i/o: 0


Where is the problem?



There are two special cases for traffic size:

 0. Meaning no body is permitted.
 none. Meaning no limit applied or unlimited size.

Units are not relevant on these and Squid does not currently accept any. 
You can still add ACLs after these special values to indicate _when_ 
they apply.


NP: the default is not to limit any replies.
An implicit: "reply_body_max_size none all".


The word "deny" is also not relevant in reply_body_max_size.

Squid ACL lines have a general syntax of "$directive $value $conditions". 
The $value applies only when the $conditions are all matching.
 In the case of http_access the $value is permission or rejection 
(allow/deny). In reply_body_max_size the $value is the limit being set. 
So what you would be used to writing as allow/deny elsewhere is written as 
"N KB" here.
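
So a working pair of lines could look like the sketch below; the "lan" ACL 
name and the 10 MB figure are placeholders, not something from your config:

  # Cap reply bodies at 10 MB for requests matching the lan ACL ...
  reply_body_max_size 10 MB lan
  # ... and leave everything else unlimited (also the built-in default).
  reply_body_max_size none all

If you really want to forbid bodies entirely for some ACL, the value is a 
plain 0 with no units.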


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] Squid TProxy Problem

2011-06-05 Thread Amos Jeffries

On 06/06/11 06:32, Ali Majdzadeh wrote:

Hello All,
I have setup the following configuration:
Squid (3.1.12) (--enable-linux-netfilter passed as the one and only
configure option)
Kernel (2.6.38.3)
iptables (1.4.11)

I have added the following two directives in squid.conf:
http_port 3128
http_port 3129 tproxy

Also, I have configured iptables with the following rules:
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3129

Everything works as expected; I mean, the users can surf the web and
the proxy server is transparent. The problem is that actually there is
no caching. I mean, both cache.log and access.log files are empty. On


That would be transparency to the point of not going through the proxy. 
access.log should have entries for each request.



the other hand, if I manually set the proxy configuration in clients'
browsers (the IP address of the squid server and port number 3128)
everything is OK; the log files grow and objects are
cached.

Has anyone faced the same issue?


Some. It usually boils down to some detail being omitted: building 
against libcap2, or routing packets to the squid box, for example.


Are the packet counters on that -j TPROXY rule showing captures?

Did you follow the rest of the feature config?
 i.e. the special sub-routing table (sketched below)? OS packet filtering 
toggles? SELinux updated to allow tproxy?


Is this box even routing or bridging port 80 traffic for the network?
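
For reference, the sub-routing table and filtering toggles mentioned above 
usually look something like the sketch below, as per the generic Linux 
TPROXY recipe; eth0 is a placeholder for your client-facing interface:

  # Deliver packets marked 1 by the DIVERT chain to the local stack.
  ip rule add fwmark 1 lookup 100
  ip route add local 0.0.0.0/0 dev lo table 100

  # Relax reverse-path filtering, or the kernel drops the spoofed-looking
  # packets before Squid ever sees them.
  sysctl -w net.ipv4.conf.lo.rp_filter=0
  sysctl -w net.ipv4.conf.eth0.rp_filter=0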

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] need a simple transparent caching conf

2011-06-05 Thread Amos Jeffries

On 06/06/11 11:55, MrNicholsB wrote:

Squid is caching content, but it is NOT serving cache to my clients and
frankly it's driving me nuts. I don't need a 101 on squid, I just need a
basic conf. I wish the devs would include a basic transparent cache
proxy conf with squid to save noobs like me the trouble. My clients are


(rant warning)

We can't bundle it.
 * This TCP hijacking is no topic for "noobs", as you put it.
 * "transparent" rides a fine line of legality in most of the world. 
Just like downloading MP3s and AVIs, every noob tries it anyway.


We do distribute the 19 configs via the wiki.
 * http://wiki.squid-cache.org/ConfigExamples/#Interception

As you can see, there is a different config for every device, firewall 
software, and firewall feature on the market. That list also only covers 
the common ones we get told about.


/rant


MANUALLY aimed at the proxy at port 3128, they can surf just fine, so


Good. Problem worked around, then. Time to relax before looking calmly at 
alternatives.



NAT is NOT required on the box, I just need a conf that actually WORKS.


Good. Let's keep it completely out of the picture until the caching bit 
is figured out.



This is getting absurd, I don't understand why it's not serving up cached
content, I download ANYTHING, you know, 13mb exe files from a site, then
go download the same file on another pc and BAM!! fresh content NOT


Ah, that is a sign that either (a) the PCs are each asking for different 
content (one URL can have multiple variants in HTTP), or (b) the server is 
producing different content for each unique client.


Once you have a recent enough version of Squid we can give you debug 
settings to log the headers and see what is going on.
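
(For the record, the kind of setting meant is roughly the sketch below; 
debug section 11 covers the HTTP traffic, but treat the exact levels as an 
example rather than a prescription.)

  # squid.conf: keep everything at level 1 but raise section 11 so the
  # request/reply headers show up in cache.log.
  debug_options ALL,1 11,2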



served from the cache, wtf am I doing wrong here!?!?!?!


Still doing all this with 3.0.STABLE1?   yes/no?
Caching behaviour and HTTP compliance have undergone a LOT of good 
changes since then.




I've tried several confs and they all FAIL to actually serve up cache,
below is my latest attempt.

http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
access_log /var/log/squid3/access.log squid
hosts_file /etc/hosts
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563 # https, snews
acl SSL_ports port 873 # rsync
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
acl lan src 10.10.1.0/24
http_access allow localhost
http_access allow lan
http_access deny all
http_reply_access allow all
icp_access allow all
visible_hostname katmai.local
always_direct allow all
coredump_dir /var/spool/squid3
cache_dir ufs /var/spool/squid3 2 32 256
maximum_object_size 200 MB
maximum_object_size_in_memory 50 MB




--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2