Hello list,
I'm using Squid 2.5 and squidGuard with a transparent proxy setup.
Internet access is granted to users via lists of client IPs in squidGuard. These lists are generated automatically by Samba logon scripts, which gives real per-user access control with a transparent proxy and without authentication.
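For reference, the relevant part of the squidGuard.conf looks roughly like this (simplified; the paths and names here are illustrative rather than my exact config):

    src logged_on_users {
        # one client IP per line, rewritten by the Samba logon scripts
        iplist /var/lib/squidguard/logged_on_ips
    }

    acl {
        logged_on_users {
            pass all
        }
        default {
            pass none
            redirect http://intranet/denied.html
        }
    }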
squidGuard is told to reload the ip_src lists with the command # killall -HUP squidGuard
This setup works really well, except for one bug:
Sometimes, when # killall -HUP squidGuard is run, downloads in progress (HTTP or FTP) are aborted, but no error message is sent to the client, so the download is reported as complete, resulting in corrupted files. Even worse, the truncated object in the cache is considered valid, so a new download of the same file results in a cache hit on the corrupt copy.
The problem occurs with all the clients I have tried (MSIE 5-6, Mozilla*, wget), on both HTTP and FTP downloads.
The problem can be viewed from two angles:
- a squidGuard problem: squidGuard's behaviour on a HUP signal is not really clean
- a Squid problem: interrupted downloads are not correctly purged from the cache when the redirector fails
Any ideas? Maybe upgrading to Squid 3.0?
Or implementing the source-IP ACLs directly in Squid instead of squidGuard, using an external_acl helper to pick the source IP from simple text files? Roughly what I have in mind is sketched below.
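(An untested sketch; the helper name, list path and ttl value are made up.) In squid.conf:

    external_acl_type src_ip_check ttl=60 children=5 %SRC /usr/local/bin/check_src_ip.py
    acl logged_on external src_ip_check
    http_access allow logged_on
    http_access deny all

and a trivial helper that reads one %SRC value per line on stdin and answers OK or ERR:

    #!/usr/bin/env python
    # hypothetical external_acl helper: squid hands us one source IP per
    # line, we answer OK if it is in the allow-list file, ERR otherwise
    import sys

    IP_LIST = "/etc/squid/logged_on_ips"   # rewritten by the samba logon scripts

    while 1:
        line = sys.stdin.readline()
        if not line:
            break
        ip = line.strip()
        try:
            # re-read the list on every lookup, so updates from the logon
            # scripts are picked up without any reload or signal
            allowed = [l.strip() for l in open(IP_LIST).readlines()]
        except IOError:
            allowed = []
        if ip in allowed:
            sys.stdout.write("OK\n")
        else:
            sys.stdout.write("ERR\n")
        sys.stdout.flush()

Since the helper re-reads the file itself and Squid caches the answers for ttl seconds, no HUP would be needed at all.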
Has anyone already experimented with this kind of setup?
Thanks, Denis
