[squid-users] Sibling issue

2011-11-22 Thread Chia Wei LEE

Hi

I had an issue when configuring a sibling.
Below is the configuration on my proxy01 (10.1.1.2):

cache_peer 10.1.1.8 sibling 3128 0 proxy-only no-query
acl sibling1 src 10.1.1.8
cache_peer_access 10.1.1.8 deny sibling1



But when I browse the Internet, the access.log on my proxy02 (10.1.1.8)
shows the following entry:

22/Nov/2011:17:15:43 +0800 248 10.1.1.2 TCP_MISS/404 330 GET
internal://proxy01/squid-internal-periodic/store_digest - NONE/- text/plain

Even though my proxy02 already has the related content cached, my proxy01
still cannot find it there and fetches the content from the Internet.

Any idea on this ?

Cheers
Chia Wei






Re: [squid-users] hey guise

2011-11-22 Thread Amos Jeffries

On 22/11/2011 8:17 p.m., someone wrote:

I want my squid to stop serving internet at specified times

I added these lines to my conf, then ran squid -k reconfigure:

acl hours time 06:00-23:00

http_access allow hours

yet it is now Mon Nov 21 23:16:10 PST 2011

and squid is still serving up internet hot and fresh


... indicating that one of the other rules you failed to mention is 
allowing access.


The whole list of http_access lines is one giant test script to 
determine the allow/deny choice. Order is important. So is the full set 
of lines.
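As an illustration (a sketch, not a full config; the localnet ACL here is an assumption), an ordering that actually blocks outside the allowed hours looks like:

```
acl hours time 06:00-23:00
acl localnet src 192.168.0.0/16

# Order matters: the deny must be reached before any allow that would match.
http_access deny !hours
http_access allow localnet
http_access deny all
```

With only "http_access allow hours", requests outside the time range simply fall through to whatever rule comes next, which is often an allow.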




I didn't try completely restarting squid yet, but I have a suspicion that
won't do it either


Maybe. It depends on the type of request being serviced. http_access 
only gets tested for *new* requests. Tunnels and long-polled chat 
sessions can continue happily for days unless something interrupts them.




Squid Cache: Version 3.0.STABLE8 --- yes, I know it's a bit outdated, but
it works for me :)


"Works" is not really part of that equation. Security vulnerabilities 
and bugs are more appropriate considerations.


Amos


Re: [squid-users] Proxy Load Testing

2011-11-22 Thread Amos Jeffries

On 22/11/2011 8:11 p.m., ftiaronsem wrote:

Hello all,

If you need to stress test your proxy before deployment, what tools do
you use? Do you have some script/program with which one could make a
huge amount of requests through the proxy and profile it?

Thanks in advance

B. Brandt


Web Polygraph.
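For a rough smoke test before reaching for Web Polygraph, a short script that fires concurrent requests through the proxy and reports latency percentiles can be enough. A sketch (the proxy address and target URL are placeholders to replace):

```python
import threading
import time
import urllib.request

PROXY = {'http': 'http://127.0.0.1:3128'}  # placeholder proxy address
URL = 'http://www.example.com/'            # placeholder target URL

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, min(len(ordered), round(p / 100 * len(ordered))))
    return ordered[rank - 1]

def worker(count, out):
    """Fetch URL `count` times through the proxy, recording latencies."""
    opener = urllib.request.build_opener(urllib.request.ProxyHandler(PROXY))
    for _ in range(count):
        start = time.monotonic()
        opener.open(URL, timeout=10).read()
        out.append(time.monotonic() - start)

def run(threads=8, requests_each=50):
    latencies = []
    pool = [threading.Thread(target=worker, args=(requests_each, latencies))
            for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    print('median %.3fs  p95 %.3fs' % (percentile(latencies, 50),
                                       percentile(latencies, 95)))
```

This is only a smoke test; Web Polygraph models realistic content mixes and hit ratios, which a loop like this cannot.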

Amos


Re: [squid-users] What is the max number of Squirm redirect_children?

2011-11-22 Thread Leonardo
On Mon, Nov 21, 2011 at 10:35 PM, Amos Jeffries wrote:
 Also in the squid.conf:

  acl toSquirm url_regex ^http://www\.google\..*/(search|images)\?
  url_rewrite_access allow toSquirm
  url_rewrite_access deny all

 ... will make Squid not even bother to send URLs to Squirm if they won't
 be changed. Meaning your total traffic can be higher and not bottleneck
 at the URL-rewrite step.



 Hmmm... now I am wondering whether I could achieve the same effect
 through a Perl script to call via redirect_program...

 You could. Squirm is faster than the perl alternatives IIRC.


Clever.  Thank you.  I think I'll stick with Squirm, with the Squid
optimization you suggested.
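For reference, a redirect_program/url_rewrite_program helper is just a process that reads one request per line on stdin and writes the rewritten URL (or an empty line for "no change") per request. A minimal Python sketch, with a hypothetical rewrite rule and mirror host:

```python
import re
import sys

# Hypothetical rule: redirect Google search/images URLs to a local mirror.
PATTERN = re.compile(r'^http://www\.google\.[^/]+/(search|images)\?')

def rewrite(url):
    """Return the rewritten URL, or '' to leave the request unchanged."""
    if PATTERN.match(url):
        return url.replace('http://', 'http://mirror.example.net/?u=', 1)
    return ''

def main():
    # Squid sends "URL client/fqdn ident method ..."; the URL comes first.
    for line in sys.stdin:
        parts = line.split()
        sys.stdout.write((rewrite(parts[0]) if parts else '') + '\n')
        sys.stdout.flush()  # reply unbuffered, one line per request
```

As the thread notes, a compiled rewriter like Squirm is faster than a scripted one; the url_rewrite_access trick above keeps uninteresting URLs away from the helper either way.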

Regards,

L.


[squid-users] %login in ACL without authentication configured

2011-11-22 Thread Luis Enrique Sanchez Arce

I am trying to configure an external ACL without authentication configured:

external_acl_type redirprogram children=30 concurrency=10 ttl=300 %URI %SRC 
%LOGIN %METHOD redir

If I use the redirprogram ACL and authentication is not configured, the
logged user is "-".

How can I do that with an external ACL? I need to use the external ACL to
modify the log entry via the %ea variable.

Best regards,
  Luis






[squid-users] Kerberos auth and users in another AD domain

2011-11-22 Thread Emmanuel Lacour

I enabled kerberos auth on an AD domain with a fallback to ldap basic
auth.

It seems that if someone uses the proxy from another LAN in another AD
domain over which I have no control, Basic auth is not used.

Is this understandable? Any way to work around this?



[squid-users] [3.2.0.13]: DiskIO/IpcIo/IpcIoFile.cc for RockStore / No such file or directory

2011-11-22 Thread David Touzeau
Dear list,

I have enabled RockStore on my squid 3.2.0.13-20111027-r11388,
configured like this:

workers 2
cache_dir rock /var/cache/RockStore-0 256 max-size=32768
cache_dir rock /var/cache/RockStore-1 256 max-size=32768
cache_dir   ufs /var/cache/squid 2000 16 256

Squid reports:

2011/11/22 17:25:31 kid2| DiskIO/IpcIo/IpcIoFile.cc(132) openCompleted:
error: timeout
FATAL: Rock cache_dir at /var/cache/RockStore-0/rock failed to open db
file: (2) No such file or directory

but the rock file exists: 

 stat /var/cache/RockStore-0/rock 
  File: «/var/cache/RockStore-0/rock»
  Size: 268435456   Blocks: 32 IO Block: 4096   regular file
Device: 801h/2049d  Inode: 262542  Links: 1
Access: (0755/-rwxr-xr-x)  Uid: ( 1001/   squid)   Gid: ( 1001/   squid)
Access: 2011-11-22 17:16:32.401809206 +0100
Modify: 2011-11-22 17:23:17.601809860 +0100
Change: 2011-11-22 17:25:17.601309064 +0100



Squid Cache (Version 3.2.0.13-20111027-r11388): Terminated abnormally.
CPU Usage: 0.036 seconds = 0.032 user + 0.004 sys
Maximum Resident Size: 31328 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:2448 KB
Ordinary blocks: 2308 KB 14 blks
Small blocks:   0 KB  0 blks
Holding blocks:  1024 KB  4 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 139 KB
Total in use:3332 KB 136%
Total free:   139 KB 6%



Is this a misconfiguration?

Here is the full dump sequence:


Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:2448 KB
Ordinary blocks: 2308 KB 14 blks
Small blocks:   0 KB  0 blks
Holding blocks:  1024 KB  4 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 139 KB
Total in use:3332 KB 136%
Total free:   139 KB 6%
2011/11/22 17:25:28 kid1| Starting Squid Cache version
3.2.0.13-20111027-r11388 for i686-pc-linux-gnu...
2011/11/22 17:25:28 kid1| Process ID 31315
2011/11/22 17:25:28 kid1| Process Roles: worker
2011/11/22 17:25:28 kid1| With 1024 file descriptors available
2011/11/22 17:25:28 kid1| Initializing IP Cache...
2011/11/22 17:25:28 kid1| DNS Socket created at [::], FD 9
2011/11/22 17:25:28 kid1| DNS Socket created at 0.0.0.0, FD 10
2011/11/22 17:25:28 kid1| Adding nameserver 192.168.1.105
from /etc/resolv.conf
2011/11/22 17:25:28 kid1| Adding nameserver 192.168.1.1
from /etc/resolv.conf
2011/11/22 17:25:28 kid1| Adding domain touzeau.com
from /etc/resolv.conf
2011/11/22 17:25:28 kid1| Adding domain touzeau.com
from /etc/resolv.conf
2011/11/22 17:25:28 kid1| Logfile: opening log
daemon:/var/log/squid/access.log
2011/11/22 17:25:28 kid1| Logfile Daemon: opening
log /var/log/squid/access.log
2011/11/22 17:25:28 kid1| Logfile: opening log tcp:127.0.0.1:54424
2011/11/22 17:25:28 kid1| Logfile: opening log
daemon:/var/log/squid/sarg.log
2011/11/22 17:25:28 kid1| Logfile Daemon: opening
log /var/log/squid/sarg.log
2011/11/22 17:25:28 kid1| Unlinkd pipe opened on FD 19
2011/11/22 17:25:28 kid1| Local cache digest enabled; rebuild/rewrite
every 3600/3600 sec
2011/11/22 17:25:28 kid1| Logfile: opening log
stdio:/var/log/squid/store.log
2011/11/22 17:25:28 kid1| Swap maxSize 2048000 + 8192 KB, estimated
158168 objects
2011/11/22 17:25:28 kid1| Target number of buckets: 7908
2011/11/22 17:25:28 kid1| Using 8192 Store buckets
2011/11/22 17:25:28 kid1| Max Mem  size: 8192 KB [shared]
2011/11/22 17:25:28 kid1| Max Swap size: 2048000 KB
2011/11/22 17:25:28 kid1| Version 1 of swap file with LFS support
detected... 
2011/11/22 17:25:28 kid1| Rebuilding storage in /var/cache/squid (DIRTY)
2011/11/22 17:25:28 kid1| Using Least Load store dir selection
2011/11/22 17:25:28 kid1| Set Current Directory to /var/squid/cache
2011/11/22 17:25:28 kid1| Loaded Icons.
2011/11/22 17:25:28 kid1| HTCP Disabled.
2011/11/22 17:25:28 kid1| Squid plugin modules loaded: 0
2011/11/22 17:25:28 kid1| Adaptation support is off.
2011/11/22 17:25:28 kid1| Ready to serve requests.
2011/11/22 17:25:28 kid1| Done reading /var/cache/squid swaplog (39
entries)
2011/11/22 17:25:31 kid2| DiskIO/IpcIo/IpcIoFile.cc(132) openCompleted:
error: timeout
FATAL: Rock cache_dir at /var/cache/RockStore-0/rock failed to open db
file: (2) No such file or directory
Squid Cache (Version 3.2.0.13-20111027-r11388): Terminated abnormally.
CPU Usage: 0.036 seconds = 0.032 user + 0.004 sys
Maximum Resident Size: 31328 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:2448 KB
Ordinary blocks: 2308 KB 14 blks
Small blocks:   0 KB  0 blks
Holding blocks:  1024 KB  4 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 139 KB
Total in use:3332 KB 136%
  

[squid-users] How to configure squid so it serves stale web pages when Internet Down

2011-11-22 Thread Doug Karl
We are trying to configure Squid for installation in school labs in 
Belize, Central America, where the Internet routinely goes down for 
several minutes and sometimes an hour at a time.  We are very happy to 
serve up stale pages to the children for their classroom session. So we 
need to either: (1) configure Squid to handle such situations, where 
cached pages are simply served stale when the Internet is down (i.e. we 
don't have Internet access to verify freshness); or (2) have Squid 
respond to a script that detects the Internet to be down, telling it to 
serve up stale pages while it is.  As configured, our Squid 
implementation will not serve stale pages: it tries to access the 
original web site, and the cached pages are not served at all.


NOTE: We have tried Squid's offline mode and, as several others have 
reported, it does not work as you would expect. So are there config 
parameters that can make caching work in the presence of a bad Internet 
connection?


Thank you,
Doug Karl & Mary Willette


[squid-users] New user - few questions

2011-11-22 Thread Sw@g

Hi all,

First of all, I would like to thank you for your time and effort for 
providing such a great tool.


I am a new user on archlinux, using Squid locally.  I have a few 
questions, regarding the setup most of all.


- Is it possible to change the information logged into access.log? I 
would like it like this:


= date +%#F_%T address_visited (I would like to replace the timestamps 
with a human readable time/date and just the website visited)


= Is it possible to limit the size of the logs from within the 
squid.conf file?


And the last question: I have this error coming up in cache.log:

IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 29: (92) Protocol not available


And browsing becomes really slow; pages aren't even opening anymore. 
Any advice?


Again, many thanks for your time and effort

Looking forward for your replies,

Kind regards,


Re: [squid-users] %login in ACL without authentication configured

2011-11-22 Thread Amos Jeffries

On 23/11/2011 3:04 a.m., Luis Enrique Sanchez Arce wrote:

I am trying to configure an external ACL without authentication configured:

external_acl_type redirprogram children=30 concurrency=10 ttl=300 %URI %SRC 
%LOGIN %METHOD redir

If I use the redirprogram ACL and authentication is not configured, the
logged user is "-".

How can I do that with an external ACL? I need to use the external ACL to
modify the log entry via the %ea variable.

Best regard,
   Luis



%LOGIN is for passing the authentication helper credentials to the 
external ACL helper, performing a full login challenge if needed.


For an external ACL to produce credentials itself, it needs to do whatever 
is required to locate them in the background and pass the username back to 
Squid like so:


OK user=username
or
ERR user=username
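A concurrency-aware helper along those lines can be sketched as follows (the field order matches the external_acl_type line from the post; the IP-to-user table is a hypothetical stand-in for whatever background lookup you use):

```python
import sys

# Hypothetical background source of credentials, e.g. an IP-to-user table.
USER_TABLE = {'10.0.0.5': 'alice'}

def handle(line):
    """Answer one request of the form '<channel> <URI> <SRC> <LOGIN> <METHOD>'."""
    channel, uri, src, login, method = line.split()[:5]
    user = USER_TABLE.get(src)
    if user:
        return '%s OK user=%s' % (channel, user)
    return '%s ERR' % channel

def main():
    # With concurrency=N, Squid prefixes each request with a channel ID
    # and the reply must echo the same channel ID back.
    for line in sys.stdin:
        print(handle(line.rstrip('\n')), flush=True)
```

The user= value returned here is what later becomes available to logformat as the external-ACL annotation.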

Amos



Re: [squid-users] Kerberos auth and users in another AD domain

2011-11-22 Thread Amos Jeffries

On Tue, 22 Nov 2011 15:34:53 +0100, Emmanuel Lacour wrote:

I enabled kerberos auth on an AD domain with a fallback to ldap basic
auth.

It seems that if someone uses the proxy from another LAN in another AD
domain over which I have no control, Basic auth is not used.

Is this understandable? Any way to work around this?



Yes this is common. The client application is in complete control over 
which authentication methods it uses. All Squid does is offer a set of 
possibilities.


Also, Basic auth is sent to the client with a realm= parameter stating 
which domain/realm Squid supports that method for. NTLM and Kerberos 
were built around SSO principles, in which a client has only one set of 
credentials which are globally accepted or not. The validating process 
(Squid) needs access to the DC (AD server) holding that user's credentials.
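The usual squid.conf shape for that setup is to offer Negotiate/Kerberos first and Basic as the fallback (a sketch only; helper names, paths, the keytab principal and the LDAP details vary by Squid version and distro):

```
# Offered first; SSO-capable clients in the local realm will pick this.
auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -s HTTP/proxy.example.com@EXAMPLE.COM
auth_param negotiate children 20

# Fallback offered to all clients, including those from foreign domains.
auth_param basic program /usr/lib/squid/basic_ldap_auth -b "dc=example,dc=com" ldap.example.com
auth_param basic children 5
auth_param basic realm EXAMPLE.COM proxy
```

Whether the foreign client actually falls back to Basic remains the client's decision, as described above.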


Marcus has updated the Kerberos wiki pages with a great overview of how 
both of those work.

http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos


Amos



Re: [squid-users] How to configure squid so it serves stale web pages when Internet Down

2011-11-22 Thread Amos Jeffries

On Tue, 22 Nov 2011 11:57:15 -0500, Doug Karl wrote:

We are trying to configure Squid for installation in school labs in
Belize, Central America, where the Internet routinely goes down for
several minutes and sometimes an hour at a time.  We are very happy to
serve up stale pages to the children for their classroom session. So
we need to either: (1) configure Squid to handle such situations,
where cached pages are simply served stale when the Internet is down
(i.e. we don't have Internet access to verify freshness); or (2) have
Squid respond to a script that detects the Internet to be down,
telling it to serve up stale pages while it is.  As configured, our
Squid implementation will not serve stale pages: it tries to access
the original web site, and the cached pages are not served at all.

NOTE: We have tried Squid's offline mode and, as several others have
reported, it does not work as you would expect. So are there config
parameters that can make caching work in the presence of a bad
Internet connection?


Yes and no.

The key directive _is_ offline_mode. The confusing bit is that for 
situations like yours the mode must always be set ON; don't toggle it 
on/off. All it does is expand the types of things Squid caches to 
include some which would normally be discarded immediately. It prepares 
the cache content as well as possible for the second directive...


max_stale: once items are already in cache (via offline_mode), this 
controls how long they may be served after the Internet connection 
starts failing.
  There are also refresh_pattern max-stale=N options in Squid 2.7 and 
3.2 to provide per-URL staleness control.
  HTTP responses from websites can also contain max-stale controls 
telling your Squid it's safe to cache and serve them while stale.
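Put together, a minimal squid.conf sketch for this scenario might look like the following (the times and the file-extension pattern are examples to tune, not recommendations):

```
# Cache extra object types that would normally be discarded.
offline_mode on

# Serve cached objects up to a week past expiry when revalidation fails.
max_stale 1 week

# Per-URL control (Squid 2.7 / 3.2): let static files go very stale.
refresh_pattern -i \.(gif|png|jpg|css|js)$ 1440 80% 10080 max-stale=10080
```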



Note that all of this is determined by the cacheability of the site's 
objects in the first place. If an object is not safe to cache and 
re-use, the page which depends on it will break in some way while 
offline. A lot of site webmasters do not send cache-friendly headers 
and so create sites which break very easily.



Amos



Re: [squid-users] New user - few questions

2011-11-22 Thread Amos Jeffries

On Tue, 22 Nov 2011 18:11:26 +, Sw@g wrote:

Hi all,

First of all, I would like to thank you for your time and effort for
providing such a great tool.

I am a new user on archlinux, using Squid locally.  I have a few
questions, regarding the setup most of all.

- Is it possible to change the information logged into access.log? I
would like it like that

= date +%#F_%T address_visited (I would like to replace the
timestamps with a human readable time/date and just the website
visited)


http://wiki.squid-cache.org/SquidFaq/SquidLogs
http://www.squid-cache.org/Doc/config/access_log
http://www.squid-cache.org/Doc/config/logformat



= Is it possible to limit the size of the logs from within the
squid.conf file?



No. You need to integrate log management tools like logrotate.d or cron 
jobs to control when log rotation occurs.
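For example, with logrotate the usual approach is a snippet like this (paths and schedule are assumptions; squid -k rotate tells Squid to close and reopen its log files):

```
# /etc/logrotate.d/squid
/var/log/squid/*.log {
    daily
    rotate 7
    compress
    missingok
    postrotate
        /usr/sbin/squid -k rotate
    endscript
}
```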



And the last question, I have that error coming up from the 
cache.log


IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 29: (92) Protocol not
available

And browsing becomes really slow; pages aren't even opening
anymore. Any advice?


Squid is unable to locate the client details in the kernel NAT table. 
NAT *must* be done on the Squid box.


Also ensure that you have separate http_port lines for the different 
types of traffic arriving at your Squid.
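For example (a sketch; "intercept" is the Squid 3.1+ spelling, older releases used "transparent", and the port numbers are arbitrary):

```
# Browsers explicitly configured to use the proxy:
http_port 3128

# NAT-intercepted traffic; the NAT rules must run on this same box:
http_port 3129 intercept
```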


Amos


[squid-users] compilation error

2011-11-22 Thread benjamin fernandis
Hi,

I am trying to compile the Squid code on Linux. Compilation completes
properly, but afterwards, when I checked config.log, I could see some
errors and warnings. So I wonder: are they related to the OS side or
to the Squid side?


My configure parameters for the compilation:

Squid Cache: Version 3.1.16
configure options:  '--prefix=/opt/squid/'
'--with-logdir=/var/log/squid/' '--with-pidfile=/var/run/squid.pid'
'--enable-icmp' '--enable-cache-digest' '--enable-forward-log'
'--enable-follow-x-forwarded-for' '--enable-snmp'
'--enable-linux-netfilter' '--enable-wccp2' '--enable-http-violations'
'--enable-storeio=aufs,ufs' '--with-large-files'
'--with-filedescriptors=22400' '--enable-async-io=128'
'--enable-removal-policies=lru,heap' '--enable-useragent-log'
'--enable-referer-log' '--enable-err-languages=English'
'--enable-default-err-language=English' '--enable-zph-qos'
'--enable-icap-client' --with-squid=/opt/squid-3.1.16
--enable-ltdl-convenience


cat config.log | grep -i warning

cc1: warning: command line option -fno-rtti is valid for C++/ObjC++
but not for C
configure:20134: WARNING: cppunit does not appear to be installed.
squid does not require this, but code testing with 'make check' will
fail.
conftest.c:246: warning: conflicting types for built-in function 'rint'
conftest.c:246: warning: conflicting types for built-in function 'rint'
conftest.c:246: warning: conflicting types for built-in function 'log'
/opt/squid-3.1.16/conftest.cpp:334: warning: the use of `tempnam' is
dangerous, better use `mkstemp'


cat config.log | grep -i error

conftest.c:12:28: error: ac_nonexistent.h: No such file or directory
conftest.c:12:28: error: ac_nonexistent.h: No such file or directory
| /* Override any GCC internal prototype to avoid an error.
| /* Override any GCC internal prototype to avoid an error.
| /* Override any GCC internal prototype to avoid an error.
conftest.cpp:24:28: error: ac_nonexistent.h: No such file or directory
conftest.cpp:24:28: error: ac_nonexistent.h: No such file or directory
| /* Override any GCC internal prototype to avoid an error.
configure:15646: checking for dlerror
| #define HAVE_DLERROR 1
| /* Override any GCC internal prototype to avoid an error.
configure:16099: checking for error_t
conftest.cpp:38: error: expected primary-expression before ')' token
| #define HAVE_DLERROR 1
| if (sizeof ((error_t)))
conftest.cpp:76:18: error: ltdl.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
conftest.cpp:76:16: error: dl.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
conftest.cpp:76:20: error: sys/dl.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
conftest.cpp:76:17: error: dld.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
conftest.cpp:76:25: error: mach-o/dyld.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
| /* Override any GCC internal prototype to avoid an error.
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
| /* Override any GCC internal prototype to avoid an error.
configure:16646: checking for dlerror
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
| #define HAVE_DLERROR 1
| /* Override any GCC internal prototype to avoid an error.
conftest.cpp:110:25: error: sys/devpoll.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
| #define HAVE_DLERROR 1
conftest.cpp:77:25: error: sys/devpoll.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
| #define HAVE_DLERROR 1
conftest.cpp:78:25: error: sys/devpoll.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
| #define HAVE_DLERROR 1
|perror(devpoll_create:);
conftest.c:88:28: error: ac_nonexistent.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
| #define HAVE_DLERROR 1
conftest.cpp:126:21: error: bstring.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
| #define HAVE_DLERROR 1
conftest.cpp:93:21: error: bstring.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
| #define HAVE_DLERROR 1
conftest.cpp:137:23: error: gnumalloc.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
| #define HAVE_DLERROR 1
conftest.cpp:104:23: error: gnumalloc.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
| #define HAVE_DLERROR 1
conftest.cpp:141:23: error: ip_compat.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
| #define HAVE_DLERROR 1
conftest.cpp:108:23: error: ip_compat.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
| #define HAVE_DLERROR 1
conftest.cpp:141:27: error: ip_fil_compat.h: No such file or directory
| #define HAVE_DLERROR 1
| #define HAVE_ERROR_T 1
| #define HAVE_DLERROR 1
conftest.cpp:108:27: error: ip_fil_compat.h: No such file or 

[squid-users] Squid losing connectivity for 30 seconds

2011-11-22 Thread Elie Merhej

Hi,

I am currently facing a problem that I wasn't able to find a solution 
for in the mailing list or on the Internet.
My squid dies for 30 seconds every hour at exactly the same time; the 
squid process will still be running.
I lose my WCCP connectivity, the cache peers detect the squid as a dead 
sibling, and the squid cannot serve any requests.
The network connectivity of the server is not affected (a ping to the 
squid's IP doesn't time out).


The problem doesn't start immediately after squid is installed on the 
server (the server is dedicated to squid).

It starts when the cache directories begin to fill up.
I started my setup with 10 cache directories; squid starts having the 
problem when the cache directories are more than 50% full.
When I change the number of cache directories (9, 8, ...), squid works 
for a while and then hits the same problem.

cache_dir aufs /cache1/squid 9 140 256
cache_dir aufs /cache2/squid 9 140 256
cache_dir aufs /cache3/squid 9 140 256
cache_dir aufs /cache4/squid 9 140 256
cache_dir aufs /cache5/squid 9 140 256
cache_dir aufs /cache6/squid 9 140 256
cache_dir aufs /cache7/squid 9 140 256
cache_dir aufs /cache8/squid 9 140 256
cache_dir aufs /cache9/squid 9 140 256
cache_dir aufs /cache10/squid 8 140 256

I have 1 terabyte of storage.
Finally I created two cache directories (one on each HDD), but the 
problem persisted.


Can any one help me?

Thank you
Elie Merhej