[squid-users] Errors

2010-12-24 Thread benjamin fernandis
Hi Friends,

I'm getting errors in the cache.log file:

[r...@localhost.localdomain ~]# tail -f /var/log/squid/cache.log
2010/12/24 13:26:21| IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 316: (92) Protocol not
available
2010/12/24 13:26:21| IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 272: (92) Protocol not
available
2010/12/24 13:26:21| IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 279: (92) Protocol not
available
2010/12/24 13:26:22| IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 60: (92) Protocol not
available
2010/12/24 13:26:23| IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 316: (92) Protocol not
available
2010/12/24 13:26:23| IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 256: (92) Protocol not
available
2010/12/24 13:26:23| IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 331: (92) Protocol not
available
2010/12/24 13:26:23| IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 46: (92) Protocol not
available
2010/12/24 13:26:23| IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 150: (92) Protocol not
available


Please advise me on the same.

Thanks,
Benjo


Re: [squid-users] [HELP] As times passed, web browser didn't open.

2010-12-24 Thread Amos Jeffries

On 24/12/10 03:38, Seok Jiwoo wrote:

Dear all,
I have several problems with my squid-cache server.

Firstly, the symptoms are
   o at first, when I installed squid, it worked well as a cache server.
   o but as time passed, web browsers didn't open pages.
  - there are some kinds of error messages.
  - one is 'time out'
  - the other is 'your proxy has some problems.'
  - and so on..
   o I had very long access times (when I checked the 'access.log' file).
   o eventually, the squid server doesn't work when I give the command
'squid -X or -D or start'.

I already did:
   o used [squid -k rotate] and rotated the log files.
   o it didn't work.

My squid-cache is squid-3.0.STABLE25-1.el5 and the OS is RedHat 5, 64-bit.

I installed squid and set up 'squid.conf' file as below.

o visible_hostname localhost
o http_port 3128
o cache_dir ufs /web-cache/cache1 10 16 256
cache_dir ufs /web-cache/cache2 10 16 256
cache_dir ufs /web-cache/cache3 10 16 256


300GB of cache. I hope you have well over 4GB of RAM on that box 
dedicated to Squid. ~3GB of it will be sucked up by the disk index.


Add cache_mem on top of that, add another 10% of cache_mem for the index 
of that space, and then add about 64KB per client at your maximum peak client count.
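
Putting rough numbers on that accounting (a back-of-envelope sketch; the 
256MB cache_mem, 1000 peak clients, and ~10MB of index RAM per GB of disk 
cache are illustrative assumptions, the last derived from the ~3GB per 
300GB figure above):

```shell
# Illustrative RAM budget for a 300GB ufs cache (all inputs assumed).
disk_gb=300
disk_index_mb=$((disk_gb * 10))          # ~10 MB of index RAM per GB of cache_dir
cache_mem_mb=256                         # an assumed cache_mem setting
mem_index_mb=$((cache_mem_mb / 10))      # ~10% of cache_mem for its own index
peak_clients=1000
client_mb=$((peak_clients * 64 / 1024))  # ~64 KB per client at peak
total_mb=$((disk_index_mb + cache_mem_mb + mem_index_mb + client_mb))
echo "approx ${total_mb} MB of RAM for Squid alone"
```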




o access_log /log/squid/access.log squid
o cache_log /log/squid/cache.log
o store_log /log/squid/store.log


You can drop the store log under normal use; it's not that useful unless 
you are debugging the storage:

 cache_store_log none


o logfile_rotate 9 (and the /etc/logrotate.d/squid file has been revised.)


By 'revised' I hope you mean 'erased'. logrotate.d and Squid's internal log 
rotation do not work well together. Pick one.
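
For example, one way to pick (a sketch, not the only valid layout) is to 
turn off Squid's internal rotation entirely and leave the files to 
logrotate, which should then tell Squid to reopen them:

```
# squid.conf: disable internal rotation; logrotate owns the files
logfile_rotate 0

# /etc/logrotate.d/squid would then run something like this after
# rotating (with logfile_rotate 0, this just closes and reopens logs):
#   postrotate
#     /usr/sbin/squid -k rotate
#   endscript
```

The binary path is an assumption; adjust for your install.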



o shutdown_lifetime 1 seconds


Large cache + extremely short shutdown period = cache corruption.
Squid will handle it by doing a full scan of the entire disk space on 
startup to reload all the metadata from scratch. This is a period of 
slow proxy speed while CPU cycles are dedicated to the scan.
 When your 300GB of cache is full this will likely take somewhere 
between 4 and 10 hours to complete.
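
By contrast, a graceful setting lets active clients finish and lets Squid 
write its swap.state index out cleanly, avoiding that rescan. A sketch:

```
# squid.conf: give connections time to drain on shutdown
# (30 seconds is the built-in default)
shutdown_lifetime 30 seconds
```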



Things to check:
 * memory usage is not swapping. This will cause squid to significantly 
drop in speed.
 * check for crashes or other problems in cache.log. Note that the 4-10 hour 
index reload after each crash will be a slow period.


On top of that, 3.0 has been obsolete for nearly a year now. There are a 
number of fatal bugs and leaks which are resolved in later releases.
 Some newer packages can be found linked from 
http://wiki.squid-cache.org/KnowledgeBase/RedHat. These still have some 
of the leaks only recently fixed but should be better than 3.0 on bugs.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


[squid-users] Squid 3.1.10 is available

2010-12-24 Thread Amos Jeffries

The Squid HTTP Proxy team is very pleased to announce the
availability of the Squid-3.1.10 release!


This release brings a long list of bug fixes and some further HTTP/1.1 
improvements into 3.1.


Some small but cumulative memory leaks were found and fixed in Digest 
authentication and adaptation ACL processing.


New limits are placed on memory consumption when uploading files and 
when using delay pools. Previously Squid would pull in as much as 
possible from the source and slowly deliver it, which would lead to 
massive memory consumption and other strange problems with upload tools 
and timeouts. A directive (client_request_buffer_max_size) has been 
added to limit consumption; a default of 512KB has been picked so as not 
to slow any small transfers.


cache_dir problems on 64-bit systems needing to store large (over 2GB) 
individual objects have been fixed, along with a capacity accounting fix 
which is expected to enable caches over 2TB to be used now. The total object 
count limit remains unchanged; these fixes are for multi-TB caches 
dedicated to very large objects.



The squid.conf parser now reports useful messages when processing a 
config file with obsolete directives. Where these used to just get a 
'Bungled' message they will now report what needs to be done to update the 
config file. 'Bungled' will still occur on completely unknown directives.
Please run 'squid -k parse' while upgrading to correct outstanding 
config garbage. Obsolete directives are not always fatal now.



HTTP/1.1 If-Match, If-None-Match and If-Modified-Since features are now 
supported. Early adopters in testing have noticed that some browsers may 
appear to receive a larger proportion of MISSes while the rest are 
receiving HITs. Investigation has traced this to HTTP variants; it is a 
separate, older bug in Squid. These features make Squid's output 
more reliable.


HTTP extension Set-Cookie2 and Cookie2 headers are now registered as 
known and may be controlled with the header access controls.
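
For instance, sites that want these deprecated cookie headers stripped 
could now do so with the usual directives (a sketch, assuming you want 
them removed at all):

```
# squid.conf sketch: drop RFC 2965 cookie headers in both directions
request_header_access Cookie2 deny all
reply_header_access Set-Cookie2 deny all
```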



See the ChangeLog for the list of other minor changes in this release.


Users of Squid-3 experiencing memory or large cache problems are urged 
to upgrade as soon as possible.


All users of older Squid are encouraged to upgrade as time permits.


Please refer to the release notes at
http://www.squid-cache.org/Versions/v3/3.1/RELEASENOTES.html
when you are ready to make the switch to Squid-3.1

This new release can be downloaded from our HTTP or FTP servers

  http://www.squid-cache.org/Versions/v3/3.1/
  ftp://ftp.squid-cache.org/pub/squid/
  ftp://ftp.squid-cache.org/pub/archive/3.1/

or the mirrors. For a list of mirror sites see

  http://www.squid-cache.org/Download/http-mirrors.dyn
  http://www.squid-cache.org/Download/mirrors.dyn

If you encounter any issues with this release please file a bug report.
  http://bugs.squid-cache.org/


Amos Jeffries


[squid-users] Squid 3.2.0.4 beta is available

2010-12-24 Thread Amos Jeffries

The Squid HTTP Proxy team is very pleased to announce the
availability of the Squid-3.2.0.4 beta release!


This release brings in a lot of polish and completes one more of the 
structural bugs/features before release 3.2 can be made.



Looking at the Changelog it seems like not much changed in this beta. 
However a number of bugs in memory consumption were fixed, most of these 
were also relevant to the 3.1 series and are thus recorded in the 
changelog for 3.1.10. As usual this beta contains all the fixes passed 
on to 3.1 series alongside its own changes.



Several regressions in the earlier 3.2 beta have been resolved:
 * a digest auth crash
 * HTCP/ICP request accounting in cachemgr SMP support
 * mem-pools accounting in cachemgr
 * HTTP/1.1 advertisement on CONNECT tunnels
 * the cache_dir min-size option has been ported from 2.x


ICAP and eCAP Adaptation features have improved error recovery and 
handling. The result should be fewer client-visible problems, even if they 
are noted more often in cache.log.



Kerberos authentication support has been updated to build against recent 
changes in GSSAPI libraries.



Dynamic SSL certificate generation has been added. This extension to the 
ssl-bump feature reduces client certificate popups.



Logging configuration has had an upgrade. The directives useragent_log and 
referer_log have been replaced by access_log built-in formats. The 
forward_log and log_fqdn directives have been obsoleted. The Apache 
combined format is now available for use as a built-in (this was 
documented earlier but not actually working).



Users of earlier 3.2 beta releases are encouraged to test this beta out 
and upgrade as soon as possible.



Please refer to the release notes at
http://www.squid-cache.org/Versions/v3/3.2/RELEASENOTES.html
when you are ready to make the switch to Squid-3.2

This new release can be downloaded from our HTTP or FTP servers

  http://www.squid-cache.org/Versions/v3/3.2/
  ftp://ftp.squid-cache.org/pub/squid/
  ftp://ftp.squid-cache.org/pub/archive/3.2/

or the mirrors. For a list of mirror sites see

  http://www.squid-cache.org/Download/http-mirrors.dyn
  http://www.squid-cache.org/Download/mirrors.dyn

If you encounter any issues with this release please file a bug report.
  http://bugs.squid-cache.org/


Amos Jeffries


Re: [squid-users] it was a slow death

2010-12-24 Thread Optimum Wireless Services
On Wed, 2010-12-22 at 16:54 +1300, Amos Jeffries wrote:
 On 22/12/10 04:49, donovan jeffrey j wrote:
  Greetings
  I discovered the culprit to my woes as my internet connections slowly died. 
  It was my 2 cache drives. As they would fill, and swap, and fill, and 
  swap.. well, you get the picture. Both drives just burned up and won't mount.
 
  So I'm running a cache-less system, which we are finding is really quick.
 
  Does this look right for intercept only, no cache? Are there any 
  performance adjustments I can do?
  squid 3.1.9
 
  http_port 10.0.1.1:3128 transparent
 
 not performance exactly, but 'transparent' should be written 'intercept' 
 in 3.1+
 
Hi Amos.

Sorry to hijack this thread.

Are you saying that for version 3.1+ we should use:

http_port 10.0.1.1:3128 intercept

instead of transparent?

I'm currently using 3.1.9 like this:

http_port 172.16.0.1:3128 transparent disable-pmtu-discovery=off

Is that correct?


 Otherwise it looks just fine.
 
 The other performance adjustments would be kernel and TCP stack things 
 to make ports available more frequently (avoiding some TIME_WAIT) and 
 accept jumbo packets etc. I'm not sure on the exact sysctl knobs to 
 tweak, they should be easy to find if you have not done them already.
 
 FWIW: 3.1.10 is now in the release process with several memory 
 consumption fixes.
 
 Amos



Re: [squid-users] Squid 3.2 - Dynamic SSL certs that aren't self-signed

2010-12-24 Thread Amos Jeffries

On 24/12/10 04:15, Alex Ray wrote:

When using squid 3.2 beta with ssl-bump and dynamic certificate
generation, is it possible to have the generated certificates issued
by a trusted CA (trusted on each computer), so that browsers receive
neither the 'website does not match certificate CN' nor the 'this
certificate is self-signed/untrusted' errors?


Yes, if you have a trusted CA to sign with. The Dynamic SSL certificate 
feature was just released in 3.2.0.4. It can use a public CA authority 
or a self-signed CA installed with trust on the browsers.


 see http://wiki.squid-cache.org/Features/DynamicSslCert for how to 
configure Squid and generate a self-signed CA for use.
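
For reference, the moving parts on that page look roughly like this 
(paths, sizes and the ssl_crtd location are assumptions; check the wiki 
for the current syntax):

```
# 1) make a self-signed CA (import this into the browsers):
#    openssl req -new -newkey rsa:1024 -days 365 -nodes -x509 \
#        -keyout myCA.pem -out myCA.pem
# 2) initialise the certificate database:
#    /usr/lib/squid/ssl_crtd -c -s /var/lib/ssl_db

# 3) squid.conf sketch (3.2 beta)
http_port 3128 ssl-bump generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB cert=/etc/squid/myCA.pem
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
ssl_bump allow all
```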



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Squid 3.2 - Dynamic SSL certs that aren't self-signed

2010-12-24 Thread Amos Jeffries

On 24/12/10 13:05, Henrik Nordström wrote:

tor 2010-12-23 klockan 13:56 -0800 skrev Alex Ray:


2010/12/23 13:54:55 kid1| Closing SSL FD 10 as lacking SSL context

in cache.log, and the browser bounces between 'Looking Up' and 'Waiting For'.


That means it failed to dynamically generate the cert, and since there
was no default cert assigned by cert= it could not continue.

You should get detailed trace if enabling debug section 33,5


Also, being a brand new feature in beta software, it is best to message 
the squid-dev mailing list about a fix.


To me this sounds like either bad permissions on the ssl_crtd storage 
area or some problem in the signing cert. If you can confirm it's neither 
of those then a message to squid-dev is in order to talk with the 
feature authors.
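
Both checks are quick to sketch (the database path is whatever was passed 
to ssl_crtd with -s; /var/lib/ssl_db here is an assumption):

```
# squid.conf: detailed SSL trace, per Henrik's suggestion
debug_options ALL,1 33,5

# and verify the helper database is writable by the squid runtime user:
#   ls -ld /var/lib/ssl_db
#   chown -R squid:squid /var/lib/ssl_db   # if ownership is wrong
```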


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Squid 3.1.10 is available

2010-12-24 Thread David Touzeau
A new version for Christmas!

Many thanks for the gift!

Le samedi 25 décembre 2010 à 00:55 +1300, Amos Jeffries a écrit : 
 The Squid HTTP Proxy team is very pleased to announce the
 availability of the Squid-3.1.10 release!
 
 
 [... rest of the announcement snipped; it is quoted verbatim from the message above ...]





Re: [squid-users] it was a slow death

2010-12-24 Thread Amos Jeffries

On 25/12/10 01:32, Optimum Wireless Services wrote:

On Wed, 2010-12-22 at 16:54 +1300, Amos Jeffries wrote:

On 22/12/10 04:49, donovan jeffrey j wrote:

Greetings
I discovered the culprit to my woes as my internet connections slowly died. It 
was my 2 cache drives. As they would fill, and swap, and fill, and swap.. well, 
you get the picture. Both drives just burned up and won't mount.

So I'm running a cache-less system, which we are finding is really quick.

Does this look right for intercept only, no cache? Are there any performance 
adjustments I can do?
squid 3.1.9

http_port 10.0.1.1:3128 transparent


not performance exactly, but 'transparent' should be written 'intercept'
in 3.1+


Hi Amos.

Sorry to hijack this thread.

Are you saying that for version 3.1+ we should use:

http_port 10.0.1.1:3128 intercept

instead of transparent?


Yes. Exactly so.



I'm currently using 3.1.9 like this:

http_port 172.16.0.1:3128 transparent disable-pmtu-discovery=off

Is that correct?


Same there.

The PMTU setting depends entirely on your local network situation. 
Disabling is a last resort. Traffic as a whole will work much better if 
the problem requiring it can be resolved. It's often just a matter of 
finding and fixing an ICMP config somewhere (hopefully under one's own 
control or that of a friendly external admin).
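
Put together, the 3.1 spelling of that line would be (only the mode 
keyword changes; keep or drop the PMTU option per the advice above):

```
http_port 172.16.0.1:3128 intercept disable-pmtu-discovery=off
```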





Otherwise it looks just fine.

The other performance adjustments would be kernel and TCP stack things
to make ports available more frequently (avoiding some TIME_WAIT) and
accept jumbo packets etc. I'm not sure on the exact sysctl knobs to
tweak, they should be easy to find if you have not done them already.

FWIW: 3.1.10 is now in the release process with several memory
consumption fixes.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Errors

2010-12-24 Thread Amos Jeffries

On 24/12/10 21:40, benjamin fernandis wrote:

Hi Friends,

I'm getting errors in the cache.log file:

[r...@localhost.localdomain ~]# tail -f /var/log/squid/cache.log
2010/12/24 13:26:21| IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 316: (92) Protocol not
available


Two possible causes:
 The less common one is NAT failure or overflow in the box's TCP system.

 This is more usually seen when receiving non-NAT requests on a port 
flagged to perform NAT processing on the traffic. That is a needless 
security hole, opening CVE-2009-0801 to any client. Use two ports: one 
for regular traffic and one for NAT intercept traffic.
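
A sketch of that two-port layout (the second port number and the firewall 
rule are illustrative assumptions):

```
# squid.conf: keep the traffic types apart
http_port 3128              # browsers configured to use the proxy
http_port 3129 intercept    # NAT-redirected traffic only ('transparent' pre-3.1)

# firewall sketch: only redirected port-80 traffic reaches 3129
#   iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
#       -j REDIRECT --to-ports 3129
```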


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] it was a slow death

2010-12-24 Thread Optimum Wireless Services
On Sat, 2010-12-25 at 01:59 +1300, Amos Jeffries wrote:
 On 25/12/10 01:32, Optimum Wireless Services wrote:
  On Wed, 2010-12-22 at 16:54 +1300, Amos Jeffries wrote:
  [... earlier quoting snipped ...]
 
  Hi Amos.
 
  Sorry to hijack this thread.
 
  Are you saying that for version 3.1+ we should use:
 
  http_port 10.0.1.1:3128 intercept
 
  instead of transparent?
 
 Yes. Exactly so.
 
 
  I'm currently using 3.1.9 like this:
 
  http_port 172.16.0.1:3128 transparent disable-pmtu-discovery=off
 
  Is that correct?
 

Ok.
Let me change my squid.conf file.

Thanks.

 [... rest of the quoted reply snipped ...]



Re: [squid-users] How to use cbq

2010-12-24 Thread Andrew Beverley
On Thu, 2010-12-23 at 19:05 +0100, lupuscramus wrote:
   Do you know someone who managed to use the squid-marked packets
   to make a QoS based on IP source with classful queueing? (cbq, htb)
  
  Yes, I do this. For an example you could have a look at my website. It
  is out of date and probably not exactly what you are looking for, but it
  would probably give you an idea:
  
  http://www.andybev.com/index.php/Fair_traffic_shaping_an_ADSL_line_for_a_local_network_using_Linux
 
 On your website I don't see where you use Squid to mark packets.

Sorry, it's just an example of using HTB, I've not updated it yet with
my current Squid rules.

  Hmm, I've 
 noticed something: when I run 
 tc class show dev eth0
 I can see there are packets which pass through the class I want: they are packets 
 marked by Squid; the source is the proxy and the destination is the web 
 server. They represent a small proportion of the packets between my user and my 
 web server (it is mainly downloads over HTTP).
 
 However, I want to limit the download rate: packets from the web server to the proxy 
 server.

I'm a bit confused. Can you produce a diagram of your setup? Is your web
server on a different server to Squid? If you want to limit packets *to*
Squid, then you will need to set up HTB on the interface going to Squid
(you might need to use IFB) *or* rate limit the packets going *from*
Squid on the other interface to the one you are using now.
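
For the second option (shaping on the other interface as traffic leaves 
Squid), a sketch might look like this (the device name, rates and fwmark 
value are all assumptions; it presumes Squid-handled packets are already 
carrying netfilter mark 6 on that interface):

```
# tc sketch: HTB on the client-facing side, marked packets into a limited class
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 8mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 2mbit ceil 4mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 6mbit ceil 8mbit
# fw filter: packets carrying netfilter mark 6 go to the 2mbit class
tc filter add dev eth0 parent 1: protocol ip handle 6 fw flowid 1:10
```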

 Is there a way to do this? Was the feature written for this?
 

Please provide some more info of your setup and I'll have a look.

Andy