[squid-users] configuration of Squid for high cache gain

2011-07-19 Thread benjamin fernandis
Hello Friends,

We are going to deploy Squid to get a high caching gain. Currently we are
planning a demo of Squid caching at one ISP site.

The ISP has 1500-2000 users.

H/w specification:
Quad Core Xeon 3.06 GHz processor
32 GB RAM
2 TB SATA HDD

OS: CentOS 6

Requirement:

We need to deploy Squid with the full TPROXY feature and want to get high
cache performance. We are trying Squid for caching gain only.

So please guide me on which parameters I must look at to gain more caching
performance, and how much RAM I can assign to Squid. This box is only for the
Squid cache, so no other processes occupy memory. Please also suggest other
standard Squid parameters that I need to tune to enhance caching.



Current Setup :

As per ISP:

For bandwidth management they are using commercial bandwidth management
devices as the NAS, and for AAA they are using RADIUS.

So for TPROXY, do I need to deploy the Squid box in bridge mode, or do I
deploy it as an external device (only forwarding web traffic from the NAS to
the Squid box)?

Please guide me on the above request and share your suggestions.

Regards,
Benjamin Fernandis


Re: [squid-users] SARG: The date range passed as argument is not formatted as dd/mm/yyyy-dd/mm/yyyy

2011-07-19 Thread Helmut Hullen
Hello, chinner999,

You wrote on 18.07.11:

 /usr/sbin/sarg-daily-report

 TODAY=$(date +%/%m/%d)
 YESTERDAY=$(date -date 1 day ago +%/%m/%d)

   --date

Best regards!
Helmut


[squid-users] Question about Connection: keep-alive and Proxy-Connection: keep-alive

2011-07-19 Thread Silamael
Hello there,

We have an application that encounters problems when communicating via
Squid:
- the application sends an HTTP/1.1 HEAD request with Proxy-Connection:
keep-alive set
- Squid then forwards the request to the server with Connection: keep-alive
- the server replies with the headers and a Connection: close header
- Squid returns the response with only Proxy-Connection: keep-alive to the
client, which afterwards hangs

Is that the correct behavior or have we encountered a problem with Squid
itself?
Thanks for your answers!

Greetings,
Matthias


[squid-users] squid for gzip supported

2011-07-19 Thread pangj

Hello,

Which version of Squid currently works with gzipped content from the backend?
I mean it should send the compressed content on to the clients.
I have been using Squid-3.1.12 and it does not seem to do this.

Thanks.



Re: [squid-users] squid for gzip supported

2011-07-19 Thread Amos Jeffries

On 19/07/11 21:33, pangj wrote:


Hello,

Which version of Squid currently works with gzipped content from the backend?
I mean it should send the compressed content on to the clients.
I have been using Squid-3.1.12 and it does not seem to do this.


There is an eCAP module for compression. Squid-3.1 and later support 
that module.
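
For anyone searching the archives, loading such a module looks roughly like
this in squid.conf (the module path and service URI below are placeholders
for whichever eCAP compression adapter you build; the ecap_service syntax
differs slightly between 3.1 and 3.2):

  loadable_modules /usr/local/lib/ecap_gzip_adapter.so
  ecap_enable on
  # Squid-3.1 style: ecap_service <name> <vectoring point> <bypass> <service URI>
  ecap_service gzip_svc respmod_precache 1 ecap://example.com/ecap_gzip
  adaptation_access gzip_svc allow all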


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9


Re: [squid-users] Question about Connection: keep-alive and Proxy-Connection: keep-alive

2011-07-19 Thread Amos Jeffries

On 19/07/11 19:19, Silamael wrote:

Hello there,

We have an application that encounters problems when communicating via
Squid:
- the application sends an HTTP/1.1 HEAD request with Proxy-Connection:
keep-alive set
- Squid then forwards the request to the server with Connection: keep-alive
- the server replies with the headers and a Connection: close header
- Squid returns the response with only Proxy-Connection: keep-alive to the
client, which afterwards hangs

Is that the correct behavior or have we encountered a problem with Squid
itself?


Both Squid and the app are slightly broken.

 The Proxy-Connection headers are not supposed to be sent. Your app and 
squid are both supposed to send only Connection: and accept either 
Proxy-Connection: or Connection: back.
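
For illustration only (example.com is a placeholder), the request the app
should be sending through the proxy is simply:

  HEAD http://example.com/resource HTTP/1.1
  Host: example.com
  Connection: keep-alive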


 There are some versions of Squid-3 prior to 3.1.7 which unfortunately 
send Proxy-Connection, so it will be around for a while. If you have one, 
please upgrade or apply this patch:


http://www.squid-cache.org/Versions/v3/3.1/changesets/squid-3.1-10067.patch

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9


[squid-users] squid with kerberos authentication

2011-07-19 Thread Franco, Battista
Hello

On CentOS 6 I want to use Squid (version 3.1.4) with Kerberos
authentication, so that only users authenticated against AD on Windows 2003
can surf. I performed the steps explained at
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos

but when users try to surf, IE asks for a user name and password and they
cannot browse.
Why?
Can you help me?

 MORE INFO 

I did the following steps:

Install  and configure samba 
modify krb5.conf
net ads join -U DOMAIN\administrator
kinit administrator@DOMAIN
export KRB5_KTNAME=FILE:/etc/squid/HTTP.keytab
net ads keytab CREATE -U DOMAIN\administrator
net ads keytab ADD HTTP -U DOMAIN\administrator
unset KRB5_KTNAME
chgrp squid /etc/squid/HTTP.keytab
chmod g+r /etc/squid/HTTP.keytab
modify squid startup file with :
KRB5_KTNAME=/etc/squid/HTTP.keytab
export KRB5_KTNAME



below squid.conf file:


auth_param negotiate program /usr/lib/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl auth proxy_auth REQUIRED
...
http_access deny !auth
http_access allow auth
http_access deny all



With the command:
/usr/lib/squid/squid_kerb_auth_test proxyserver
the token was displayed.
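
In case it helps with debugging, the keytab Squid reads can also be
inspected from the shell (the principal below is a placeholder for the
proxy's real FQDN and realm):

  klist -kt /etc/squid/HTTP.keytab
  kvno HTTP/proxy.example.com@EXAMPLE.COM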

 


[squid-users] Re: configuration of Squid for high cache gain

2011-07-19 Thread benjamin fernandis
Hi All,

Any suggestion for this.

Thanks,
Benjamin

On Tue, Jul 19, 2011 at 8:16 AM, benjamin fernandis
benjo11...@gmail.com wrote:
 Hello Friends,

 We are going to deploy Squid to get a high caching gain. Currently we are
 planning a demo of Squid caching at one ISP site.

 The ISP has 1500-2000 users.

 H/w specification:
 Quad Core Xeon 3.06 GHz processor
 32 GB RAM
 2 TB SATA HDD

 OS: CentOS 6

 Requirement:

 We need to deploy Squid with the full TPROXY feature and want to get high
 cache performance. We are trying Squid for caching gain only.

 So please guide me on which parameters I must look at to gain more caching
 performance, and how much RAM I can assign to Squid. This box is only for
 the Squid cache, so no other processes occupy memory. Please also suggest
 other standard Squid parameters that I need to tune to enhance caching.



 Current Setup :

 As per ISP:

 For bandwidth management they are using commercial bandwidth management
 devices as the NAS, and for AAA they are using RADIUS.

 So for TPROXY, do I need to deploy the Squid box in bridge mode, or do I
 deploy it as an external device (only forwarding web traffic from the NAS
 to the Squid box)?

 Please guide me on the above request and share your suggestions.

 Regards,
 Benjamin Fernandis



Re: [squid-users] SARG: The date range passed as argument is not formatted as dd/mm/yyyy-dd/mm/yyyy

2011-07-19 Thread chinner999
Hi guys,
That worked.

I was missing a '-' in the second line of /usr/sbin/sarg-daily-report:


TODAY=$(date +%d/%m/%Y)
YESTERDAY=$(date --date "1 day ago" +%d/%m/%Y)
sarg /var/log/squid3/access.log -o /var/www/squid-reports/daily -z -d $YESTERDAY-$TODAY
/usr/sbin/squid3 -k rotate
exit 0


I modified sarg-weekly-report and sarg-monthly-report accordingly.
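
For reference, those two commands print dates in the dd/mm/yyyy form SARG
expects; run on the day of this thread they would give, for example:

  $ date +%d/%m/%Y
  19/07/2011
  $ date --date "1 day ago" +%d/%m/%Y
  18/07/2011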

 On Tue, 19 Jul 2011 00:26:00 -0600, Helmut Hullen wrote:

Hello, chinner999,
 
You wrote on 18.07.11:
 
 /usr/sbin/sarg-daily-report 
 
 TODAY=$(date +%/%m/%d) 
 YESTERDAY=$(date -date 1 day ago +%/%m/%d) 
 
 --date 
 
Best regards!
Helmut



[squid-users] how to filter urls with the external_acl_type option?

2011-07-19 Thread Zael Rey


Hello there, I'm working on a script to filter with Squid, using:

external_acl_type myAclType %SRC %URI 
/home/konrad/testing/myexternalacltype.pl

acl MyAcl external myAclType
http_access allow MyAcl

This is the script I have, and for some reason it's not working:

#!/usr/bin/perl -w
$|=1;
open(STDERR, ">/tmp/external_acl.log");
select(STDERR); $| = 1; # make unbuffered
select(STDOUT); $| = 1; # make unbuffered
print STDERR "INI: $$\n\n";


use MIME::Base64 ();
while (<STDIN>) {
   print STDERR "--- $_\n\n";
   print "ERR\n";
}

It's supposed to block when the helper returns ERR, but it does not; it
always allows. Can you tell me what's wrong, please?



the log file is /tmp/external_acl.log
INI: 5101

192.168.100.131 http://clients1.google.com.mx/generate_204


owsing/downloads?client=googlechromeappver=12.0.742.124pver=2.2wrkey=AKEgNiuxyrXd6Ogn0--yqQphNRgBvtmHMJ6qOaSuWkcLGp37Xr7Q8yn8PC7E_P6KDikrGokHnnvHaS-39wHdVm6e1rt8ApQWSw==


192.168.100.131 
http://safebrowsing-cache.google.com/safebrowsing/rd/ChVnb29nLWJhZGJpbi1kaWdlc3R2YXIQABjlCCDmCCoFZgQAAAEyBWUEAAAB



192.168.100.131 
http://safebrowsing-cache.google.com/safebrowsing/rd/ChVnb29nLWJhZGJpbnVybC1zaGF2YXIQARj9DiD-DioFfgcAAAEyBX0HAAAB



192.168.100.131 
http://safebrowsing-cache.google.com/safebrowsing/rd/ChVnb29nLWJhZGJpbnVybC1zaGF2YXIQABj4DCD8DCoFegYAAAcyBXgGAAAD



192.168.100.131 
http://safebrowsing-cache.google.com/safebrowsing/rd/ChNnb29nLW1hbHdhcmUtc2hhdmFyEAEY-7QDIIS1AyoFftoAAH8yBXvaAAAH

.
.
.



and the log file cache.log
.
.
.

2011/07/19 11:42:32| cbdataValid: 0xb9448558
2011/07/19 11:42:32| helperHandleRead: 4 bytes from myAclType #1.
2011/07/19 11:42:32| commSetSelect: FD 7 type 1
2011/07/19 11:42:32| commSetEvents(fd=7)
2011/07/19 11:42:32| helperHandleRead: 'ERR
'
2011/07/19 11:42:32| helperHandleRead: end of reply found: ERR

2011/07/19 11:42:32| cbdataValid: 0xb970b4b0
2011/07/19 11:42:32| externalAclHandleReply: reply=ERR
2011/07/19 11:42:32| cbdataValid: 0xb94309a0
2011/07/19 11:42:32| external_acl_cache_add: Adding '192.168.100.131 
http://www.google.com.mx/csi?v=3s=webhpaction=e=17259,18168,28505,29819,30316,30727,30813,31406,31482,31493,31643ei=KNAlTvu0MpCEtgfww9mPDAexpi=17259,18168,28505,29819,30316,30727,30813,31406,31482,31493,31643imc=0imn=0imp=0rt=xjsls.84,prt.88,xjses.134,xjsee.255,xjs.258,ol.506,iml.88' 
= 0

2011/07/19 11:42:32| cbdataUnlock: 0xb94309a0
2011/07/19 11:42:32| cbdataValid: 0xb970b358
2011/07/19 11:42:32| cbdataLock: 0xb9771af8
2011/07/19 11:42:32| cbdataValid: 0xb94307e0
2011/07/19 11:42:32| aclCheck: checking 'http_access deny MyAcl'
2011/07/19 11:42:32| aclMatchAclList: checking MyAcl
2011/07/19 11:42:32| aclMatchAcl: checking 'acl M
.
.
.


Hope someone could lend me some help with this, thanks


[squid-users] Browsing slow after adding squid proxy.

2011-07-19 Thread Gregory Machin
Hi.
It has been a long time since I last looked at a Squid proxy. After adding a
proxy to the network, browsing seems to have slowed considerably. I have
built a Squid proxy, and it is connected to the network via our SonicWall
using its proxy feature. When I looked into the configuration I did a few
optimizations based on what I found on a couple of websites, although I
opted not to tweak the OS beyond increasing the ulimit, as I would not
expect that to be required given the hardware. It is running off an SSD
drive.

When I run top, the box is idle for the most part. There are about 100
users on this site.

So my question is: what might I have configured incorrectly or missed that
would help?


The hardware is:

4 GB RAM
Intel(R) Xeon(R) CPU E3110 @ 3.00GHz (dual core)
hard disk is a 32 GB SSD

The / file system is ext3.
The /var file system is ext4 (the cache is /var/spool/squid).

The OS is Ubuntu 10 LTS.

The Squid configuration file looks like this:

acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl localnet src 192.168.0.0/16 # TO BE correctly defined
acl SSL_ports port 443  # https
acl SSL_ports port 563  # snews
acl SSL_ports port 873  # rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access allow CONNECT
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
icp_access allow localnet
icp_access deny all
http_port 3128
hierarchy_stoplist cgi-bin ?
cache_mem 2048 MB
maximum_object_size_in_memory 256 KB
cache_replacement_policy heap LFUDA
cache_dir aufs /var/spool/squid 1 23 256
maximum_object_size 64 MB
cache_swap_low 90
cache_swap_high 95
access_log /var/log/squid/access.log squid
buffered_logs on
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880
refresh_pattern .   0   20% 4320
quick_abort_min 0 KB
quick_abort_max 0 KB
acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
upgrade_http0.9 deny shoutcast
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
extension_methods REPORT MERGE MKACTIVITY CHECKOUT
half_closed_clients off
always_direct allow all
hosts_file /etc/hosts
memory_pools off
coredump_dir /var/spool/squid


Thanks
G


[squid-users] Re: squid with kerberos authentication

2011-07-19 Thread Markus Moeller

What does the cache.log file say if you add -d to

auth_param negotiate program /usr/lib/squid/squid_kerb_auth

i.e.
auth_param negotiate program /usr/lib/squid/squid_kerb_auth -d

How did you configure IE?

Can you see a ticket for HTTP/squid-fqdn in kerbtray 
(http://www.microsoft.com/download/en/details.aspx?displaylang=enid=23018)?


Regards
Markus


Franco, Battista battista.fra...@saint-gobain.com wrote in message 
news:0b0bf3f65f960a4b8be340e64290f4cd0696d...@a00exgec23.za.if.atcsg.net...

Hello

On CentOS 6 I want to use Squid (version 3.1.4) with Kerberos
authentication, so that only users authenticated against AD on Windows 2003
can surf. I performed the steps explained at
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos

but when users try to surf, IE asks for a user name and password and they
cannot browse.
Why?
Can you help me?

 MORE INFO 

I did the following steps:

Install  and configure samba
modify krb5.conf
net ads join -U DOMAIN\administrator
kinit administrator@DOMAIN
export KRB5_KTNAME=FILE:/etc/squid/HTTP.keytab
net ads keytab CREATE -U DOMAIN\administrator
net ads keytab ADD HTTP -U DOMAIN\administrator
unset KRB5_KTNAME
chgrp squid /etc/squid/HTTP.keytab
chmod g+r /etc/squid/HTTP.keytab
modify squid startup file with :
   KRB5_KTNAME=/etc/squid/HTTP.keytab
   export KRB5_KTNAME



below squid.conf file:


auth_param negotiate program /usr/lib/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl auth proxy_auth REQUIRED
...
http_access deny !auth
http_access allow auth
http_access deny all



With the command:
/usr/lib/squid/squid_kerb_auth_test proxyserver
the token was displayed.






Re: [squid-users] Re: configuration of Squid for high cache gain

2011-07-19 Thread Amos Jeffries

On Tue, 19 Jul 2011 19:23:14 +0200, benjamin fernandis wrote:

Hi All,

Any suggestion for this.


All your Q belong to FAQ.



On Tue, Jul 19, 2011 at 8:16 AM, benjamin fernandis wrote:

Hello Friends,

We are going to deploy Squid to get a high caching gain. Currently we are
planning a demo of Squid caching at one ISP site.

The ISP has 1500-2000 users.

H/w specification:
Quad Core Xeon 3.06 GHz processor
32 GB RAM
2 TB SATA HDD

OS: CentOS 6



http://wiki.squid-cache.org/KnowledgeBase/Benchmarks


Requirement:

We need to deploy Squid with the full TPROXY feature and want to get high
cache performance. We are trying Squid for caching gain only.



http://wiki.squid-cache.org/Features/Tproxy4
http://wiki.squid-cache.org/BestOsForSquid

So please guide me on which parameters I must look at to gain more
caching performance, and how much RAM I can assign


 http://wiki.squid-cache.org/SquidFaq/SquidMemory
 http://wiki.squid-cache.org/SquidFaq/RAID


to Squid. This box is only for the Squid cache, so no other processes
occupy memory. Please also suggest other standard Squid parameters that I
need to tune to enhance caching.



There are several groups of tuning directives for these:
 http://www.squid-cache.org/Doc/config/
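
As a purely illustrative starting point (the numbers below are placeholders
to be sized against the FAQ pages above, not recommendations), the
directives most installations tune first look like this in squid.conf:

  cache_mem 8192 MB                     # in-RAM object cache, not the total process size
  maximum_object_size_in_memory 512 KB
  cache_dir aufs /var/spool/squid 200000 16 256   # ~200 GB disk cache; leave headroom on the 2 TB disk
  maximum_object_size 512 MB
  cache_replacement_policy heap LFUDA
  memory_replacement_policy heap GDSF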




Current Setup :

As per ISP:

For bandwidth management they are using commercial bandwidth management
devices as the NAS, and for AAA they are using RADIUS.


There is a huge amount of traffic that is not HTTP which an ISP must deal 
with. Recent releases of Squid have several QoS and logging features 
that can feed additional data to external management systems in real 
time, but overall management is best done outside of Squid.

 http://www.squid-cache.org/Doc/config/qos_flows/
 http://www.squid-cache.org/Doc/config/tcp_outgoing_tos/
 http://www.squid-cache.org/Doc/config/access_log/
 http://wiki.squid-cache.org/Features/LogModules



So for TPROXY, do I need to deploy the Squid box in bridge mode, or do I
deploy it as an external device (only forwarding web traffic from the NAS
to the Squid box)?


Traffic needs to pass through the Squid box. That is the only 
requirement. See the feature documentation for the above and other 
options.
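
For orientation only, the Tproxy4 wiki recipe linked above boils down to
something like the following on the Squid box (the mark value and port 3129
are the wiki's examples and may need adjusting to your layout):

  # squid.conf
  http_port 3129 tproxy

  # routing and iptables on the same box
  ip rule add fwmark 1 lookup 100
  ip route add local 0.0.0.0/0 dev lo table 100
  iptables -t mangle -N DIVERT
  iptables -t mangle -A DIVERT -j MARK --set-mark 1
  iptables -t mangle -A DIVERT -j ACCEPT
  iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
  iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129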


Amos


Re: [squid-users] how to filter urls with the external_acl_type option?

2011-07-19 Thread Amos Jeffries

On Tue, 19 Jul 2011 12:57:27 -0700, Zael Rey wrote:

Hello there, I'm working on a script to filter with Squid, using:

external_acl_type myAclType %SRC %URI
/home/konrad/testing/myexternalacltype.pl
acl MyAcl external myAclType
http_access allow MyAcl

This is the script I have, and for some reason it's not working:

#!/usr/bin/perl -w
$|=1;
open(STDERR, ">/tmp/external_acl.log");


STDERR gets sent to squid cache.log by default. No need for this.


select(STDERR); $| = 1; # make unbuffered
select(STDOUT); $| = 1; # make unbuffered
print STDERR "INI: $$\n\n";


use MIME::Base64 ();
while (<STDIN>) {
   print STDERR "--- $_\n\n";
   print "ERR\n";
}

It's supposed to block when the helper returns ERR, but it does not; it
always allows. Can you tell me what's wrong, please?


No. ERR means only that the ACL does not match. Will not be used, try 
another line, do not pass go.


 OK/ERR == true/false.

Syntax is:
 http_access $ACTION $BOOLEAN-CONDITION

So "http_access allow MyAcl" is a rule about what to do when MyAcl is 
true (OK).


The NOT operator (!) can be added, or the allow/deny action can be 
changed, giving three possible outcomes for the one line: 
ALLOWED/DENIED/SKIP.


snip

2011/07/19 11:42:32| helperHandleRead: 4 bytes from myAclType #1.
2011/07/19 11:42:32| commSetSelect: FD 7 type 1
2011/07/19 11:42:32| commSetEvents(fd=7)
2011/07/19 11:42:32| helperHandleRead: 'ERR
'
2011/07/19 11:42:32| helperHandleRead: end of reply found: ERR


snip

2011/07/19 11:42:32| cbdataValid: 0xb94307e0
2011/07/19 11:42:32| aclCheck: checking 'http_access deny MyAcl'


The cache.log also indicates that your earlier statement about 
squid.conf was wrong.


The rule "http_access deny MyAcl" will block whenever the ACL matches 
(i.e. produces OK). Otherwise it will be SKIP.
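
To make that concrete, a minimal helper along these lines (the blocklist
path is hypothetical) would pair with "http_access deny MyAcl": it answers
OK for URLs that should be blocked and ERR for everything else.

#!/usr/bin/perl -w
# Hypothetical example: block any URL containing a word listed in /etc/squid/blocked_words
use strict;
$| = 1;   # helpers must not buffer their replies

open(my $fh, '<', '/etc/squid/blocked_words') or die "cannot read blocklist: $!";
chomp(my @words = <$fh>);
@words = grep { length } @words;
close($fh);

while (my $line = <STDIN>) {
    chomp $line;
    # input format matches "%SRC %URI" from the external_acl_type line
    my ($src, $uri) = split(' ', $line, 2);
    $uri = '' unless defined $uri;
    my $blocked = grep { index($uri, $_) >= 0 } @words;
    print $blocked ? "OK\n" : "ERR\n";
}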


Amos



Re: [squid-users] Browsing slow after adding squid proxy.

2011-07-19 Thread Amos Jeffries

On Wed, 20 Jul 2011 09:13:34 +1200, Gregory Machin wrote:

Hi.
It has been a long time since I last looked at a Squid proxy. After adding
a proxy to the network, browsing seems to have slowed considerably. I have
built a Squid proxy, and it is connected to the network via our SonicWall
using its proxy feature. When I looked into the configuration I did a few
optimizations based on what I found on a couple of websites, although I
opted not to tweak the OS beyond increasing the ulimit, as I would not
expect that to be required given the hardware. It is running off an SSD
drive.

When I run top, the box is idle for the most part. There are about 100
users on this site.

So my question is: what might I have configured incorrectly or missed that
would help?



A few things in general to be aware of.

 * Be careful with SSDs. Squid is mostly-write software, while SSDs work 
best with mostly-read loads, so SSD lifetime and speed are reduced from the 
well-advertised specs. That said, they can still improve caching HIT speeds.


 * Browsers will default to reducing their utilized connection count by 
99% when working through a proxy. This can make things appear much slower 
than normal, given the modern website tendency to require dozens or 
hundreds of objects at once for a simple page load.


 * Ensure that no memory swapping is occurring; it will take a major bite 
out of Squid performance.
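
A quick way to check that with standard Linux tools:

  free -m       # swap usage should stay near zero
  vmstat 1 5    # the si/so columns should stay at 0 under load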




The hardware is:

4 GB RAM
Intel(R) Xeon(R) CPU E3110 @ 3.00GHz (dual core)
hard disk is a 32 GB SSD

The / file system is ext3.
The /var file system is ext4 (the cache is /var/spool/squid).

The OS is Ubuntu 10 LTS.

The Squid configuration file looks like this:


snip

http_access deny manager
http_access allow purge localhost
http_access deny purge


If you don't actually need the purge ACL, remove it. There is a lot of 
background CPU and RAM needed to support it.



http_access deny !Safe_ports
http_access allow CONNECT


PROBLEM: global unlimited tunnelling. 
http://wiki.squid-cache.org/SquidFaq/SecurityPitfalls


"allow localnet" below will already allow HTTPS traffic if it is not 
blocked by the SSL_ports safety net.


If there actually are non-HTTPS ports to which you require https:// 
access, add them to the SSL_ports definition as well as the Safe_ports 
one. I see you have already done this for several, although 563 is 
missing from Safe_ports.
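
Put together, the usual safe pattern for that part of squid.conf (only a
sketch of the relevant lines; extend SSL_ports with whatever extra ports
you genuinely need) is:

  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  # note: no blanket "http_access allow CONNECT" line at all
  http_access allow localnet
  http_access allow localhost
  http_access deny all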



http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all

snip

memory_pools off


NOTE: memory optimization for Squid usage patterns is DISABLED. This may 
be needed on some 64-bit systems with broken memory handling; if yours is 
not one of those, re-enable it.



That is it for the general stuff. You will need to dig a bit deeper and 
find out which specific things are going slowest.


Amos