Re: [squid-users] Add a prefix/suffix if a domain is not resolved?

2009-08-10 Thread Olivier Sannier

Henrik Nordstrom wrote:

lör 2009-08-08 klockan 15:54 +0200 skrev Kinkie:
  

Many browsers already do this.

With Squid you can use a redirector script, but you'll have to write
your own; such functionality is not bundled with Squid.



Indeed.. and didn't I write such a script many years ago? Or was it
someone else who posted one.. I don't remember.
  
Well, I must not have used the proper search terms then. Would you have 
any clues as to what I should look for to find this script?


  




Re: [squid-users] When will SSLBump be included on the production version?

2009-08-10 Thread Amos Jeffries

SSCR Internet Admin wrote:

Hello,

I would like to ask: when will SSLBump be included in the mainstream
stable version of Squid? It seems that SSLBump can help us admins, especially
in schools where UltraSurf is used in some laboratories (from USB memory
sticks), mostly by WiFi users.

Regards
 


SSLBump is already included in the Squid-3.1 releases.

That version will be mainstream and stable when all the bugs are closed. 
 The bugs that do remain are fairly limited to certain uses and 
environments now. So try it out and see if it works for you.

If it does not work we would like to know why so we can fix those problems.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE18
  Current Beta Squid 3.1.0.13


Re: [squid-users] refresh_pattern configuration

2009-08-10 Thread Amos Jeffries

Muhammad Sharfuddin wrote:

On Sun, 2009-08-09 at 14:19 +1200, Amos Jeffries wrote:

Muhammad Sharfuddin wrote:

Squid Cache: Version 2.5.STABLE12
and
Squid Cache: Version 2.7.STABLE5

I am using the following refresh_patterns and have never encountered any
problem. E.g. once I visit a website, on the next visit Squid usually serves
it from cache, and TCP_HIT, TCP_MEM_HIT, TCP_REFRESH_HIT etc. are common
in '/var/log/squid/access'

But a person (who I believe is a Linux/Squid guru) criticized the
refresh_pattern rules I am using.

(One of my posts or someone else?).


So please pass your comments and corrections on the following configs

#Suggested default:
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320

refresh_pattern -i \.ico$       43200   100%    43200 override-lastmod override-expire ignore-reload
The problem with these commonly used patterns is that websites now 
obfuscate the URL with query strings more and more often. Not always 
intentionally.


Example: the above .ico pattern will not match any website with:
   http://example.com/some.ico?sid=user-session-id&track=fukn-cookie-id

Changing the hard $ to the softer (\?.*)?$ catches all of those websites 
and keeps Squid doing what you meant to configure.
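Applied to the .ico rule quoted above, the softened pattern would look like this (a sketch of the suggestion, not a tested rule):

```
refresh_pattern -i \.ico(\?.*)?$  43200  100%  43200 override-lastmod override-expire ignore-reload
```

The optional (\?.*)? group matches both a bare /some.ico and /some.ico?sid=... requests.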



Other than that, the only thing to draw real criticism is the use of 
the non-compliant override options. It's not nice netizen behaviour, ... but 
... everyone else does it.



[warning rant ahead: (not your fault I know)]

Personally, as a webmaster I set realistic expiry info on every website I 
touch in order to maximize speed and cacheability, particularly since 
getting to know Squid. It really annoys me that admins like yourself are 
forced to do this by a horribly large number of clueless websites and 
CMS software developers.
Such rules will in fact _decrease_ the cacheability times and benefits 
for many of the websites that I and other clued-up people set up. We are 
forced to cope by changing filenames and sometimes URL links on every 
single edit, no matter how trivial.
I'm sick of people complaining "why can Y see their user icon in forum X 
but I can't?" ... What?! I can't fix it till next month just because I live 
in country/ISP X? Always the webmaster gets the blame, never the browser 
author or the transparent proxy admin.

/rant

Amos


So in other words it's not a healthy practice to use refresh_pattern rules
other than the defaults (the 'Suggested default' in squid.conf)?


The patterns and min/pct/max settings are fine. They do help.
It's the optional extensions ignore-* and override-* which break HTTP 
and cause issues.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE18
  Current Beta Squid 3.1.0.13


Re: [squid-users] blocking gtalk from gmail (https)

2009-08-10 Thread Leonardo Carneiro

It is possible, but you'll need squid + firewall (iptables in my case)

Rules i used in iptables:

   iptables -A FORWARD -p tcp -m tcp --dport 5222 -j DROP
   iptables -A FORWARD -p tcp -m tcp -d 72.14.217.189 -j DROP


ACL I blocked in Squid:

   acl gtalk url_regex -i .*talk.google.com .*chatenabled.mail.google.com

For some reason, some users can still connect using the standalone 
client. I'll try to debug this later.


Amos Jeffries escreveu:

Yatin Shah wrote:

I searched alot to find the solution but could not get any working
answer. How to block gtalk from google mail without blocking gmail
itself. We are able to do it for http://gmail.com but not successful
for https://gtalk.com.


May or may not be possible. With HTTPS the domain name is all Squid has 
to decide with.

From the above you should be able to block CONNECT to .gtalk.com

If that does not work please show what you have tried, etc.
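A minimal sketch of such a CONNECT block (the exact domain list is an assumption, based on the ACL quoted earlier in the thread plus Amos's suggestion):

```
acl gtalk_domains dstdomain .gtalk.com .talk.google.com .chatenabled.mail.google.com
http_access deny CONNECT gtalk_domains
```

This relies on the standard CONNECT method ACL that ships in the default squid.conf, and must appear before the rule that allows CONNECT for your users.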

Amos


--

*Leonardo de Souza Carneiro*
*Veltrac - Tecnologia em Logística.*
lscarne...@veltrac.com.br mailto:lscarne...@veltrac.com.br
http://www.veltrac.com.br http://www.veltrac.com.br/
/Fone Com.: (43)2105-5601/
/Av. Higienópolis 1601 Ed. Eurocenter Sl. 803/
/Londrina- PR/
/Cep: 86015-010/





[squid-users] [squid-2.7.STABLE6-1] Problem with RPC via HTTPS

2009-08-10 Thread hdyugoplastika hdyugoplastika

Hi all,
I have a problem with RPC-over-HTTPS authentication with
squid-2.7.STABLE6-1 (RPM downloaded from squid-cache.org).
I have a Squid server (version 2.5.STABLE14-1 + OWA patch) where RPC-over-HTTPS
authentication works fine. With both versions there is now a problem via OWA.
These are the logs:

access.log
10.223.0.71 - - [10/Aug/2009:11:03:56 +0200] RPC_IN_DATA 
https://webmail.XXXx.it/rpc/rpcproxy.dll?EXPROMO1.nf.xXXX.it:6002 
HTTP/1.1 401 509 TCP_MISS:SOURCEHASH_PARENT
10.223.0.71 - - [10/Aug/2009:11:03:56 +0200] RPC_OUT_DATA 
https://webmail.XXXx.it/rpc/rpcproxy.dll?EXPROMO1.nf.xXXX.it:6002 
HTTP/1.1 401 509 TCP_MISS:SOURCEHASH_PARENT


cache.log (I include only what seems relevant to me):
2009/08/10 11:03:52| httpAppendBody: Request not yet fully sent RPC_IN_DATA
https://webmail.XXXx.it/rpc/rpcproxy.dll?EXPROMO1.nf.xXXX.it:6002;
2009/08/10 11:03:52| fwdComplete:
https://webmail.XXXx.it/rpc/rpcproxy.dll?EXPROMO1.nf.xXXX.it:6002
2009/08/10 11:03:52| fwdReforward:
https://webmail.XXXx.it/rpc/rpcproxy.dll?EXPROMO1.nf.xXXX.it:6002?
2009/08/10 11:03:52| fwdReforward: No, ENTRY_FWD_HDR_WAIT isn't set
2009/08/10 11:03:52| fwdComplete: not re-forwarding status 401

and useful(?) exchange log:
2009-08-10 09:00:07 W3SVC1 MI1EXPROM1 10.223.247.61 GET
/exchweb/bin/auth/owalogon.asp
url=https://webmail.XXXx.it/exchange/reason=0 443 - 192.168.21.245
HTTP/1.1 libwww-perl/5.823 - - webmail.XXXx.it 200 0 0 9070 205 0
2009-08-10 09:00:38 W3SVC1 MI1EXPROM1 10.223.247.61 RPC_OUT_DATA
/rpc/rpcproxy.dll EXPROMO1.nf.xXXX.it:6002 443 - 10.223.247.201 HTTP/1.0
MSRPC - - webmail.XXXx.it 401 2 2148074254 375 451 46
2009-08-10 09:00:38 W3SVC1 MI1EXPROM1 10.223.247.61 RPC_IN_DATA
/rpc/rpcproxy.dll EXPROMO1.nf.xXXX.it:6002 443 - 10.223.247.201 HTTP/1.0
MSRPC - - webmail.XXXx.it 401 2 2148074254 375 448 124
2009-08-10 09:02:08 W3SVC1 MI1EXPROM1 10.223.247.61 GET
/exchweb/bin/auth/owalogon.asp
url=https://webmail.XXXx.it/exchange/reason=0 443 - 192.168.21.245
HTTP/1.1 libwww-perl/5.823 - - webmail.XXXx.it 200 0 0 9070 205 15
2009-08-10 09:03:52 W3SVC1 MI1EXPROM1 10.223.247.61 RPC_IN_DATA
/rpc/rpcproxy.dll EXPROMO1.nf.xXXX.it:6002 443 - 10.223.247.201 HTTP/1.0
MSRPC - - webmail.XXXx.it 401 2 2148074254 375 344 0
2009-08-10 09:03:52 W3SVC1 MI1EXPROM1 10.223.247.61 RPC_OUT_DATA
/rpc/rpcproxy.dll EXPROMO1.nf.xXXX.it:6002 443 - 10.223.247.201 HTTP/1.0
MSRPC - - webmail.XXXx.it 401 2 2148074254 375 451 0
2009-08-10 09:03:56 W3SVC1 MI1EXPROM1 10.223.247.61 RPC_IN_DATA
/rpc/rpcproxy.dll EXPROMO1.nf.xXXX.it:6002 443 - 10.223.247.201 HTTP/1.0
MSRPC - - webmail.XXXx.it 401 2 2148074254 375 344 0
2009-08-10 09:03:56 W3SVC1 MI1EXPROM1 10.223.247.61 RPC_OUT_DATA
/rpc/rpcproxy.dll EXPROMO1.nf.xXXX.it:6002 443 - 10.223.247.201 HTTP/1.0
MSRPC - - webmail.XXXx.it 401 2 2148074254 375 451 0
2009-08-10 09:04:07 W3SVC1 MI1EXPROM1 10.223.247.61 GET
/exchweb/bin/auth/owalogon.asp
url=https://webmail.XXXx.it/exchange/reason=0 443 - 192.168.21.245
HTTP/1.1 libwww-perl/5.823 - - webmail.XXXx.it 200 0 0 9070 205 0

Below the configuration:
squid 2.5STABLE14-1 + owa patch

http_port  80
extension_methods RPC_IN_DATA RPC_OUT_DATA
https_port 10.223.243.26:443 cert=/etc/squid/cert/wm.XXXx.it.cert
key=/etc/squid/cert/wm.XXXx.it.private.key
cafile=/etc/squid/cert/cafile.cert
 ssl_unclean_shutdown on
cache_peer mail.XXXx.it parent 443 0 ssl
sslcert=/etc/squid/cert/mi1exprom1.cert sslflags=DONT_VERIFY_PEER proxy-only
no-query no-digest front-end-https=on
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
 emulate_httpd_log on
 log_ip_on_direct on
 debug_options ALL,1,83,2
hosts_file /etc/hosts
 redirect_rewrites_host_header on
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
 shutdown_lifetime 0 seconds
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
acl x src 192.168.55.0/24
acl easy_bb src xxx.xxx.64.0/19
acl easy_bb src xxx.xxx.224.0/19
acl easy_bb src xxx.xxx.16.0/20
acl easy_bb src xxx.xxx.81.0/24
acl easy_bb src xxx.xxx.87.0/24
acl easy_bb src xxx.xxx.26.0/24
acl easy_bb src xxx.xxx.144.0/20
acl easy_bb src xxx.xxx.240.0/20
acl destination dst 10.223.243.24/32
acl access_mail urlpath_regex -i /etc/squid/users/access_mail.txt
acl access_url url_regex -i /etc/squid/url_valid.txt
acl acl_pfa dstdomain webmail.XXXx.it
http_access deny easy_bb
http_access allow x
http_access allow access_mail
http_access allow access_url
http_access allow localhost
http_access deny all
http_reply_access allow all
icp_access allow all
cache_peer_access mail.XXXx.it 

[squid-users] Squid cache

2009-08-10 Thread Никоноров Григорий
Hi squid users!

I'm running Squid 2.5 on FreeBSD 6.4.
I have a problem with caching .doc files, please help!

Squid downloads a file and puts it in the cache. The file is then
updated on the server and downloaded again.
File size: old - 2.5 MB, new - 7 MB.
There are no restrictions on file size in squid.conf.
Squid keeps serving the old file from the cache.
Emptying the cache in the browser and restarting did not help.
There are also no such problems with images, Excel, txt and html files.

access.log:
(manual.doc)
1249892125.696 22 192.168.164.111 TCP_MISS/404 520 GET 
http://qzar.spb.ru/test/manua.doc grigoryn DIRECT/213.182.169.10 text/html
1249892125.839  7 192.168.164.111 TCP_HIT/200 700 GET 
http://qzar.spb.ru/favicon.ico grigoryn NONE/- image/x-icon
1249892134.036   3659 192.168.164.111 TCP_MISS/200 521599 GET 
http://qzar.spb.ru/test/manual.doc grigoryn DIRECT/213.182.169.10 
application/msword
1249892376.653 78 192.168.164.111 TCP_HIT/200 521608 GET 
http://qzar.spb.ru/test/manual.doc grigoryn NONE/- application/msword
1249892422.479 44 192.168.164.111 TCP_HIT/200 521608 GET 
http://qzar.spb.ru/test/manual.doc grigoryn NONE/- application/msword
1249892434.849 99 192.168.164.111 TCP_HIT/200 521608 GET 
http://qzar.spb.ru/test/manual.doc grigoryn NONE/- application/msword
1249892439.664105 192.168.164.111 TCP_HIT/200 521608 GET 
http://qzar.spb.ru/test/manual.doc grigoryn NONE/- application/msword
1249892459.612  7 192.168.164.111 TCP_DENIED/407 1715 GET 
http://qzar.spb.ru/test/manual.doc - NONE/- text/html
1249892461.341 88 192.168.164.111 TCP_HIT/200 521608 GET 
http://qzar.spb.ru/test/manual.doc grigoryn NONE/- application/msword
1249892487.566 64 192.168.164.111 TCP_HIT/200 521608 GET 
http://qzar.spb.ru/test/manual.doc grigoryn NONE/- application/msword
1249892549.111  1 192.168.164.111 TCP_DENIED/407 1715 GET 
http://qzar.spb.ru/test/manual.doc - NONE/- text/html
1249892550.482 83 192.168.164.111 TCP_HIT/200 521608 GET 
http://qzar.spb.ru/test/manual.doc grigoryn NONE/- application/msword
1249892789.651  60523 192.168.164.111 TCP_REFRESH_MISS/200 7398273 GET 
http://qzar.spb.ru/test/manual.doc grigoryn DIRECT/213.182.169.10 
application/msword

(saxalin.doc)
1249895451.176  24616 192.168.164.111 TCP_MISS/200 2456961 GET 
http://qzar.spb.ru/test/saxalin.doc grigoryn DIRECT/213.182.169.10 
application/msword
1249896904.336 19 213.182.169.4 TCP_NEGATIVE_HIT/404 682 GET 
http://www.ru/favicon.ico grigoryn NONE/- text/html
1249896911.785   1718 213.182.169.4 TCP_HIT/200 11925525 GET 
http://qzar.spb.ru/test/saxalin.doc grigoryn NONE/- application/msword
1249896927.657 12 213.182.169.4 TCP_DENIED/407 1834 GET 
http://qzar.spb.ru/test/saxalin.doc - NONE/- text/html
1249896932.773   3938 213.182.169.4 TCP_HIT/200 5730838 GET 
http://qzar.spb.ru/test/saxalin.doc grigoryn NONE/- application/msword
1249896962.922 28 213.182.169.4 TCP_DENIED/407 1834 GET 
http://qzar.spb.ru/test/saxalin.doc - NONE/- text/html
1249896965.842   1474 213.182.169.4 TCP_HIT/200 11925526 GET 
http://qzar.spb.ru/test/saxalin.doc grigoryn NONE/- application/msword
1249897008.540   1036 213.182.169.4 TCP_HIT/200 11925526 GET 
http://qzar.spb.ru/test/saxalin.doc grigoryn NONE/- application/msword
1249897031.110 20 213.182.169.4 TCP_DENIED/407 1834 GET 
http://qzar.spb.ru/test/saxalin.doc - NONE/- text/html
1249897034.312   1300 213.182.169.4 TCP_HIT/200 11925526 GET 
http://qzar.spb.ru/test/saxalin.doc grigoryn NONE/- application/msword

There is no access control and there are no passwords for these doc files.
Without Squid everything works fine and there is no problem downloading the files.

I tried the same with Squid Cache 2.7.STABLE3 and 3.0.STABLE18 on
Debian Lenny, with the same negative result.










Re: [squid-users] [squid-2.7.STABLE6-1] Problem with RPC via HTTPS

2009-08-10 Thread Amos Jeffries

hdyugoplastika hdyugoplastika wrote:

Hi all,
I have a problem with RPC-over-HTTPS authentication with
squid-2.7.STABLE6-1 (RPM downloaded from squid-cache.org).
I have a Squid server (version 2.5.STABLE14-1 + OWA patch) where RPC-over-HTTPS
authentication works fine. With both versions there is now a problem via OWA.
These are the logs:

access.log
10.223.0.71 - - [10/Aug/2009:11:03:56 +0200] RPC_IN_DATA 
https://webmail.XXXx.it/rpc/rpcproxy.dll?EXPROMO1.nf.xXXX.it:6002 HTTP/1.1 
401 509 TCP_MISS:SOURCEHASH_PARENT
10.223.0.71 - - [10/Aug/2009:11:03:56 +0200] RPC_OUT_DATA 
https://webmail.XXXx.it/rpc/rpcproxy.dll?EXPROMO1.nf.xXXX.it:6002 HTTP/1.1 
401 509 TCP_MISS:SOURCEHASH_PARENT


cache.log (I include only what seems relevant to me):
2009/08/10 11:03:52| httpAppendBody: Request not yet fully sent RPC_IN_DATA
https://webmail.XXXx.it/rpc/rpcproxy.dll?EXPROMO1.nf.xXXX.it:6002;
2009/08/10 11:03:52| fwdComplete:
https://webmail.XXXx.it/rpc/rpcproxy.dll?EXPROMO1.nf.xXXX.it:6002
2009/08/10 11:03:52| fwdReforward:
https://webmail.XXXx.it/rpc/rpcproxy.dll?EXPROMO1.nf.xXXX.it:6002?
2009/08/10 11:03:52| fwdReforward: No, ENTRY_FWD_HDR_WAIT isn't set
2009/08/10 11:03:52| fwdComplete: not re-forwarding status 401

and useful(?) exchange log:
2009-08-10 09:00:07 W3SVC1 MI1EXPROM1 10.223.247.61 GET
/exchweb/bin/auth/owalogon.asp
url=https://webmail.XXXx.it/exchange/reason=0 443 - 192.168.21.245
HTTP/1.1 libwww-perl/5.823 - - webmail.XXXx.it 200 0 0 9070 205 0
2009-08-10 09:00:38 W3SVC1 MI1EXPROM1 10.223.247.61 RPC_OUT_DATA
/rpc/rpcproxy.dll EXPROMO1.nf.xXXX.it:6002 443 - 10.223.247.201 HTTP/1.0
MSRPC - - webmail.XXXx.it 401 2 2148074254 375 451 46
2009-08-10 09:00:38 W3SVC1 MI1EXPROM1 10.223.247.61 RPC_IN_DATA
/rpc/rpcproxy.dll EXPROMO1.nf.xXXX.it:6002 443 - 10.223.247.201 HTTP/1.0
MSRPC - - webmail.XXXx.it 401 2 2148074254 375 448 124
2009-08-10 09:02:08 W3SVC1 MI1EXPROM1 10.223.247.61 GET
/exchweb/bin/auth/owalogon.asp
url=https://webmail.XXXx.it/exchange/reason=0 443 - 192.168.21.245
HTTP/1.1 libwww-perl/5.823 - - webmail.XXXx.it 200 0 0 9070 205 15
2009-08-10 09:03:52 W3SVC1 MI1EXPROM1 10.223.247.61 RPC_IN_DATA
/rpc/rpcproxy.dll EXPROMO1.nf.xXXX.it:6002 443 - 10.223.247.201 HTTP/1.0
MSRPC - - webmail.XXXx.it 401 2 2148074254 375 344 0
2009-08-10 09:03:52 W3SVC1 MI1EXPROM1 10.223.247.61 RPC_OUT_DATA
/rpc/rpcproxy.dll EXPROMO1.nf.xXXX.it:6002 443 - 10.223.247.201 HTTP/1.0
MSRPC - - webmail.XXXx.it 401 2 2148074254 375 451 0
2009-08-10 09:03:56 W3SVC1 MI1EXPROM1 10.223.247.61 RPC_IN_DATA
/rpc/rpcproxy.dll EXPROMO1.nf.xXXX.it:6002 443 - 10.223.247.201 HTTP/1.0
MSRPC - - webmail.XXXx.it 401 2 2148074254 375 344 0
2009-08-10 09:03:56 W3SVC1 MI1EXPROM1 10.223.247.61 RPC_OUT_DATA
/rpc/rpcproxy.dll EXPROMO1.nf.xXXX.it:6002 443 - 10.223.247.201 HTTP/1.0
MSRPC - - webmail.XXXx.it 401 2 2148074254 375 451 0
2009-08-10 09:04:07 W3SVC1 MI1EXPROM1 10.223.247.61 GET
/exchweb/bin/auth/owalogon.asp
url=https://webmail.XXXx.it/exchange/reason=0 443 - 192.168.21.245
HTTP/1.1 libwww-perl/5.823 - - webmail.XXXx.it 200 0 0 9070 205 0

Below the configuration:
squid 2.5STABLE14-1 + owa patch

http_port  80
extension_methods RPC_IN_DATA RPC_OUT_DATA
https_port 10.223.243.26:443 cert=/etc/squid/cert/wm.XXXx.it.cert
key=/etc/squid/cert/wm.XXXx.it.private.key
cafile=/etc/squid/cert/cafile.cert
 ssl_unclean_shutdown on
cache_peer mail.XXXx.it parent 443 0 ssl
sslcert=/etc/squid/cert/mi1exprom1.cert sslflags=DONT_VERIFY_PEER proxy-only
no-query no-digest front-end-https=on
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
 emulate_httpd_log on
 log_ip_on_direct on
 debug_options ALL,1,83,2
hosts_file /etc/hosts
 redirect_rewrites_host_header on
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
 shutdown_lifetime 0 seconds
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
acl x src 192.168.55.0/24
acl easy_bb src xxx.xxx.64.0/19
acl easy_bb src xxx.xxx.224.0/19
acl easy_bb src xxx.xxx.16.0/20
acl easy_bb src xxx.xxx.81.0/24
acl easy_bb src xxx.xxx.87.0/24
acl easy_bb src xxx.xxx.26.0/24
acl easy_bb src xxx.xxx.144.0/20
acl easy_bb src xxx.xxx.240.0/20
acl destination dst 10.223.243.24/32
acl access_mail urlpath_regex -i /etc/squid/users/access_mail.txt
acl access_url url_regex -i /etc/squid/url_valid.txt
acl acl_pfa dstdomain webmail.XXXx.it
http_access deny easy_bb
http_access allow x
http_access allow access_mail
http_access allow access_url
http_access allow localhost
http_access deny all
http_reply_access allow all
icp_access allow 

Re: [squid-users] Squid - Not replace source IP address

2009-08-10 Thread Matus UHLAR - fantomas
 Amos Jeffries-2 wrote:
  NP: Your word 'transparently redirected' appears to mean 'routed' in that
  paragraph. Please use the word 'transparent' less
  /rant.

On 04.08.09 17:24, casket88 wrote:
 The useage of the word transparent is in reference to the users, it is
 transparent to them. Transparent is a good word, I think I'll use it more.

Transparent is a good word but it means something completely different:

  A transparent proxy is a proxy that does not modify the request or
  response beyond what is required for proxy authentication and
  identification.

This is a citation from RFC 2616, which defines the HTTP protocol. Since we
are talking about the HTTP protocol, we should use words as they are defined
there; otherwise it could lead to misunderstandings.

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Support bacteria - they're the only culture some people have. 


Re: [squid-users] Squid - Not replace source IP address

2009-08-10 Thread Leonardo Rodrigues



Transparent is a good word but it means something completely different:

  A transparent proxy is a proxy that does not modify the request or
  response beyond what is required for proxy authentication and
  identification.

This is a citation from RFC2616 which defines the HTTP protocol. Since we
are talking about HTTP protocol, we should use words as they are defined
there, otherwise it could lead to misunderstandings.
  


   Proxies working in the so-called 'transparent' fashion do not alter 
requests and responses, thus achieving what RFC 2616 says. It's true 
they alter source IP addresses ... but that is not covered by RFC 2616.


   So it's a matter of interpretation. The highlighted quote, in my 
opinion, does not prohibit a proxy from altering the source IP address, 
as long as the request and the response are kept intact.


   That way, the so-called 'transparent' proxy setup is completely 
RFC 2616 compliant.





--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






[squid-users] various squid instances on same server

2009-08-10 Thread Enrique

Can I install multiple Squid instances on the same server?
For example: I want some users to get a Squid response on someip:8080 with
external IP A (with its own ACLs, directives, etc.), and another Squid
responding to other users on otherip:3128 with external IP B.
Sometimes when a user is downloading a file from megaupload or
rapidshare it fails. If I can configure several IP addresses on my Squid
proxy, then when downloading files from those sites megaupload will see
a different IP.






Re: [squid-users] various squid instances on same server

2009-08-10 Thread Leonardo Rodrigues

Enrique escreveu:

i can install various squid instances on same server?
for example: i wnat  to some users one  squid response by someip:8080 
port and external ip A

ACL, directives etc...
Other squid response to users  otherip:3128 and external ip B
somes times  happen when  some users is downloading a file from 
megaupload, rapidshare  ...  i  can't not
now i can  configure somes ip addres to my squid proxy and downloading 
files from thas sites them megaupload will see  other ip





   Yes, you can ... but that's apparently not necessary for your needs.

   If you want different external IPs for different requests, you can 
use tcp_outgoing_address statements.


   There's no need for N instances just for that.
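A minimal sketch of that suggestion (the subnets and outgoing addresses here are placeholders, not from the thread):

```
# select the outgoing address per client group instead of running extra instances
acl group_a src 192.168.1.0/24
acl group_b src 192.168.2.0/24
tcp_outgoing_address 203.0.113.10 group_a
tcp_outgoing_address 203.0.113.11 group_b
```

Each group's requests then leave the proxy from its own source IP, which is what the remote site sees.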


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] various squid instances on same server

2009-08-10 Thread Gontzal
Yes, you can; more info: http://wiki.squid-cache.org/MultipleInstances

I've been running 3 different instances on the same server with different
authentication modes and it's working fine. But I just change the
port; I don't use different IPs.

2009/8/10 Enrique enri...@banmet.cu:
 i can install various squid instances on same server?
 for example: i wnat  to some users one  squid response by someip:8080 port
 and external ip A
 ACL, directives etc...
 Other squid response to users  otherip:3128 and external ip B
 somes times  happen when  some users is downloading a file from megaupload,
 rapidshare  ...  i  can't not
 now i can  configure somes ip addres to my squid proxy and downloading files
 from thas sites them megaupload will see  other ip






Re: [squid-users] various squid instances on same server

2009-08-10 Thread Enrique
That's OK. Now, can I use the same cache (cache_dir aufs /var/cache/squid 10240 
256 256) for all Squid instances?

I will use the same authentication method.
Thanks
- Original Message - 
From: Gontzal gontz...@gmail.com

To: Enrique enri...@banmet.cu
Cc: squid-users@squid-cache.org
Sent: Monday, August 10, 2009 10:28 AM
Subject: Re: [squid-users] various squid instances on same server


Yes, you can, more info: http://wiki.squid-cache.org/MultipleInstances

I've running 3 different instances on the same server with different
authentication modes and it's working fine. But I just change the
port, I don't use different ips

2009/8/10 Enrique enri...@banmet.cu:

i can install various squid instances on same server?
for example: i wnat to some users one squid response by someip:8080 port
and external ip A
ACL, directives etc...
Other squid response to users otherip:3128 and external ip B
somes times happen when some users is downloading a file from megaupload,
rapidshare ... i can't not
now i can configure somes ip addres to my squid proxy and downloading 
files

from thas sites them megaupload will see other ip









Re: [squid-users] various squid instances on same server

2009-08-10 Thread Leonardo Rodrigues

Enrique escreveu:
thats ok, now i can use same cache ( cache_dir aufs /var/cache/squid 
10240 256 256 ) on all squid instances???
i will use with same authentication method 


   Different authentication methods are one of the few good reasons for 
running multiple instances on the same machine.


   If you're using the same authentication method, then you probably 
don't need to run multiple instances.


   No, you cannot DIRECTLY have different instances sharing the same 
cache_dir. You can choose one instance to be the 'parent' proxy and 
chain all the other instances to it, so you'd end up with all proxies 
using the same cache_dir. But DIRECTLY in squid.conf, you cannot do that.


   Please read up on tcp_outgoing_address, as it seems to be enough to 
meet your initial requirements. From your initial email, I really don't 
think you need multiple instances at all.
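A sketch of the chaining described here, for a secondary instance's squid.conf (the loopback address and port are placeholders, not from the thread):

```
# forward everything to the primary instance, which owns the shared cache_dir
cache_peer 127.0.0.1 parent 3128 0 no-query default
never_direct allow all
```

The primary instance keeps the cache_dir; the secondaries are configured without a cache_dir of their own.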


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






RE: [squid-users] Squid 3.1.0.13 Speed Test - Upload breaks?

2009-08-10 Thread Daniel
Kinkie,

I'm using the default settings, so I don't have any specific maximum 
request size specified. I guess I'll hold out until someone else running 3.1 
can test this.

Thanks!

-Original Message-
From: Kinkie [mailto:gkin...@gmail.com] 
Sent: Saturday, August 08, 2009 6:44 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3.1.0.13 Speed Test - Upload breaks?

Maybe the failure could depend on some specific settings, such as max
request size?

On 8/8/09, Heinz Diehl h...@fancy-poultry.org wrote:
 On 08.08.2009, Daniel wrote:

 Would anyone else using Squid mind doing this same bandwidth test and
 seeing
 if they have the same issue(s)?

 It works flawlessly using both 2.7-STABLE6 and 3.0-STABLE18 here.




-- 
/kinkie



Re: [squid-users] delay_access line

2009-08-10 Thread Dayo Adewunmi

Amos Jeffries wrote:

On Sun, 09 Aug 2009 15:03:10 +0100, Dayo Adewunmi contactd...@gmail.com
wrote:
  

Amos Jeffries wrote:


Dayo Adewunmi wrote:
  

Amos Jeffries wrote:


Dayo Adewunmi wrote:
  

Hi

Is this a valid config line?

delay_access 6 allow lan-students magic_words url_words



Maybe.
Are lan-students, magic_words and url_words the names of 
defined ACL?


  

Or do I need one for each acl?

You imply that they are, which makes the answer to the first 
question yes. And the second question:


   maybe yes, maybe no.

Since question 2 requires that we are psychic and can understand 
both what you intend to do with that single line and what the rest 
of your configuration looks like. There is no way we can do any 
better answers.


Amos
  
Sorry about that. Yes, the three are ACLs. lan-students is a /24 IP 
range


acl magic_words url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip 
.rar .avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav


acl url_words url_regex -i ictp

Um, yes, those really are words; with regex like that they can 
appear anywhere in the URL at all.


For example www.prettyavians.com will match magic_words, as will 
example.com/drawings/index.html and 
http://google.com/search?q=foots=asuhihvrpmsvsd


  

This is the complete delay pool definition for these ACLs:

delay_class 6 3
delay_parameters 6 800/4000 1000/1000 600/800
delay_access 6 allow lan-students magic_words url_words
delay_access 6 deny all

I want lan-students to never use more than 4000 bytes of my bandwidth, 
and the same restriction to apply to users (including those in a 
different delay pool) who download .mp3s or .zips, or who use FTP. 
This 4000-byte limit should also apply to those who 
access websites with 'ictp' in the URL.
So, basically, any user who downloads mp3s and such, uses FTP, or 
navigates to ictp domains should have their requests handled by the 
6th delay pool: 800/4000 1000/1000 600/800, i.e. 
actually 600 bytes refresh / 800 bytes max.

Dayo


Take what you just explained and write your access lines that way...

(delay lan-students)
delay_access 6 allow lan-student

(or anyone using FTP)
acl ftp proto FTP
delay_access 6 allow FTP

(or anyone downloading .mp3s etc)
acl bad_downloads url_regex -i \.mp3(\?.*)$
delay_access 6 allow bad_downloads

(or any URL with ictp in it)
delay_access 6 allow url_words

(but thats all)
delay_access 6 deny all


Note the regex I use above to match .mp3 file extensions. With the 
extra code characters it will only match at the end of a URL file name.


Amos
  

Would the below delay pool definition work?



No. The regex is not valid. see below.

  
Is there a 
difference/advantage of putting each

ACL in its own line, or is it all the same?



Yes there is a difference.
http://wiki.squid-cache.org/SquidFaq/SquidAcl#head-57610c67cac987182f6055118dd6d29e1ccd4445
All the items listed in an ACL name are OR'd together. (any _one_ may
match)
All items on the same *_access line are AND'd together. (_all_ must match)


  
acl bad_downloads url_regex -i [.mp3$|.exe$|.mp3$|.vqf$|.tar.gz$|.gz$|.rpm$|.zip$|.rar$|.avi$|.mpeg$|.mpe$|.mpg$|.qt$|.ram$|.rm$|.iso$|.raw$|.wav$]



[] means any single character between.
meaning your regex may as well be written [.mp3$exvqftarpimsow|] and
matches every URL possible.

What I think you meant is:
acl bad_downloads url_regex -i
\.(mp(3|g|eg?)|exe|vqf|gz|rpm|zip|rar|avi|qt|ra?m|iso|raw|wav)(\?.*)?$
  

acl ftp_downloads proto FTP

delay_class 6 3
delay_parameters 6 800/4000 1000/1000 600/800
delay_access 6 allow lan-students bad_downloads ftp_downloads


lan-students AND bad_downloads AND ftp_downloads:

This will delay the bad-extension files only if they are being downloaded 
via FTP by a student.

A student downloading via HTTP will be non-delayed, anyone who is not a 
student will be non-delayed, and any FTP access which is not a bad 
download will be non-delayed.

  

delay_access 6 deny all

Dayo



Amos

  

Thank you, Amos. You've been a huge help with this! :-)

Dayo


[squid-users] moved permanently loop detection

2009-08-10 Thread Mike Mitchell
Would it be possible to add simple loop detection for 'moved permanently' 
response codes?
We've been hit a couple of times with loops from URLs like 
http://wwwcache.localtechwire.com/favicon.png.
I know the fix should really go into the browser, but IE is broken and 
Microsoft won't fix it.
See http://connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=357905

I was thinking of a simple string compare of the requested URL and the contents of 
the 'Location:' header. If the two are the same, it's a loop.
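The check being proposed is small enough to sketch directly (function and argument names are mine, purely illustrative, not from Squid's source):

```python
# Sketch of the proposed loop detection: a 301 (Moved Permanently) whose
# Location header equals the URL that was just requested would redirect the
# client straight back to itself, i.e. a loop.
def is_redirect_loop(request_url, status, location):
    return status == 301 and location is not None and location == request_url

print(is_redirect_loop('http://wwwcache.localtechwire.com/favicon.png', 301,
                       'http://wwwcache.localtechwire.com/favicon.png'))  # True
print(is_redirect_loop('http://example.com/a', 301,
                       'http://example.com/b'))                           # False
```

A real implementation would likely also want to normalize the two URLs before comparing, since browsers and servers can differ on case and trailing slashes.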


Mike Mitchell
SAS Institute Inc.
mike.mitch...@sas.com
(919) 531-6793





[squid-users] Kerberos Authentication - Squid 3.1.0.13

2009-08-10 Thread Daniel
Good afternoon,

In my attempt to get Squid on our SLES 11 box authenticating with
Kerberos (negotiate), I used the following to re-configure:

./configure --prefix=/usr/local/squid --enable-cachemgr-hostname=sclthdq01w
--enable-auth=negotiate --enable-negotiate-auth-helpers=squid_kerb_auth

The configure appears to run without any issues. However, upon running
make all I receive the following errors:

squid_kerb_auth.c:507: error: implicit declaration of function
'gss_display_name'
make[5]: *** [squid_kerb_auth.o] Error 1
make[5]: Leaving directory
`/tmp/squid-3.1.0.13/helpers/negotiate_auth/squid_kerb_auth'
make[4]: *** [all-recursive] Error 1
make[4]: Leaving directory
`/tmp/squid-3.1.0.13/helpers/negotiate_auth/squid_kerb_auth'
make[3]: *** [all] Error 2
make[3]: Leaving directory
`/tmp/squid-3.1.0.13/helpers/negotiate_auth/squid_kerb_auth'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory
`/tmp/squid-3.1.0.13/helpers/negotiate_auth'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/tmp/squid-3.1.0.13/helpers'
make: *** [all-recursive] Error 1

Any ideas?? As always, T.I.A.

-Daniel



Re: [squid-users] various squid instances on same server

2009-08-10 Thread Amos Jeffries
On Mon, 10 Aug 2009 10:20:04 -0500, Enrique enri...@banmet.cu wrote:
 i can install various squid instances on same server?

NP: The word order in English i can indicates something _is_ possible,
with a side implication that it has already been proven so by doing it.
You meant to say can i, which means uncertain possibility, and is used to
start questions.  HTH.

 for example: I want some users to get one squid response via someip:8080
 and external IP A
 (ACLs, directives etc...)
 and other users to get a squid response via otherip:3128 and external IP B.
 Sometimes it happens that when some users are downloading a file from
 megaupload
 

The answer is yes. It is possible to run multiple squid on the same
machine.

However the first part of your example case does not need it to be done.
Only that squid be configured with multiple http_port lines and some myport
ACL which use tcp_outgoing_addr based on the receiving http_port.
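Sketched as a squid.conf fragment (the actual directive is tcp_outgoing_address; all addresses here are placeholders):

```
http_port 8080
http_port 3128
acl fromport8080 myport 8080
acl fromport3128 myport 3128
# choose the outbound (external) address by the port the request arrived on
tcp_outgoing_address 203.0.113.10 fromport8080
tcp_outgoing_address 203.0.113.11 fromport3128
```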

 or rapidshare ... the others can't.
 Now, can I configure some IP addresses on my squid proxy so that when
 downloading files from those sites, megaupload will see another IP?

The example sites still won't work well with either setup.  To bypass the
rapidshare security with a proxy you are best off with the tproxy feature,
which uses the client IP address on outbound links to the server.

Amos



Re: [squid-users] Problem with Squid + Tproxy and Rapdishare

2009-08-10 Thread Carlos Botejara
OK.

Ok. I did what you told me and modified the rule, but nothing happened ..
everything remains the same.
Amended rule:
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129

2009/8/9 Amos Jeffries squ...@treenet.co.nz:
 On Sun, 9 Aug 2009 10:58:23 -0300, Carlos Botejara cbotej...@gmail.com
 wrote:
 hi, this is my first post here.
 I have a problem, but first I describe the scenario
 I have clients with public IP
 Mikrotik router redirecting traffic to SQUID
 Squid 3.1 with support for TPROXY
 Iptables 1.4.4 with support for TPROXY
 Debian Lenny / Kernel 2.6.28 with support for TPROXY

 well.
 The proxy works well, and when I run some whatismyip test pages, they
 show that the IP is the CLIENT's.
 However, I can not get my clients with public IP addresses
 simultaneously downloading from RapidShare / Megaupload etc. The error
 shown on these pages is the typical "already downloading from that
 IP", so in reality RapidShare sees the SQUID IP and not the client's.
 How can I fix this?

 the relevant port configuration in squid.conf is:

 http_port 81 tproxy

 Iptables:

 iptables -t mangle -N DIVERT
 iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
 iptables -t mangle -A DIVERT -j MARK --set-mark 1
 iptables -t mangle -A DIVERT -j ACCEPT
 iptables -t mangle -A PREROUTING -p tcp --dport 3128 -j TPROXY
 --tproxy-mark 0x1/0x1 --on-port 81

 You have this rule ass-backwards.

 TPROXY is intended to intercept port 80 traffic, not port 3128 traffic,
 when the client is NOT configured to use the proxy. The HTTP request
 formats are noticeably different; it's trivially easy to detect those
 differences, and that is probably what rapidshare is doing.

 Please go back and use the http://wiki.squid-cache.org/Features/Tproxy4
 documentation and configuration example.


 ip rule add fwmark 1 lookup 100
 ip route add local 0.0.0.0/0 dev lo table 100

 echo 1  /proc/sys/net/ipv4/ip_forward


 Mikrotik:
 Have a rule in the firewall to redirect all traffic to port 80 of the
 SQUID to the IP, port 3128

 All clients create sessions PPPOE in Router Mikrotik

 May help?

 Regards

 Amos




-- 
Carlos Botejara
Area Sistemas
cbotej...@gmail.com
NEUQUEN - ARGENTINA
C: 0299-154060127
MSN:carlos.botej...@hotmail.com
http://www.linkedin.com/in/carlosbotejara

This email is addressed only to the person or entity named as the
recipient and may contain confidential and/or privileged information.
Copying, forwarding, or distributing this message by persons or
entities other than the intended recipient is prohibited.
If you have received this email in error, please contact the sender
immediately and delete the material from any computer.
This email may be monitored in compliance with this policy.


Re: [squid-users] moved permanently loop detection

2009-08-10 Thread Henrik Nordstrom
Mon 2009-08-10 at 16:58 -0400, Mike Mitchell wrote:

 I was thinking a simple string compare of the requested URL and the contents 
 of the 'Location:' field.  If the two are the same it's a loop.

If it's cacheable, yes.

Regards
Henrik



Re: [squid-users] Problem with Squid + Tproxy and Rapdishare

2009-08-10 Thread Amos Jeffries
On Mon, 10 Aug 2009 20:30:05 -0300, Carlos Botejara cbotej...@gmail.com
wrote:
 OK.
 
 Ok. I did what you told me and modified the rule, but nothing happened ..
 everything remains the same.
 Amended rule:
 iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129

Hm, okay. Then you need to find out exactly how the clients are connecting
to that site and why its not working.

Amos

 
 2009/8/9 Amos Jeffries squ...@treenet.co.nz:
 On Sun, 9 Aug 2009 10:58:23 -0300, Carlos Botejara cbotej...@gmail.com
 wrote:
 hi, this is my first post here.
 I have a problem, but first I describe the scenario
 I have clients with public IP
 Mikrotik router redirecting traffic to SQUID
 Squid 3.1 with support for TPROXY
 Iptables 1.4.4 with support for TPROXY
 Debian Lenny / Kernel 2.6.28 with support for TPROXY

 well.
 The proxy works well, and when I run some whatismyip test pages, they
 show that the IP is the CLIENT's.
 However, I can not get my clients with public IP addresses
 simultaneously downloading from RapidShare / Megaupload etc. The error
 shown on these pages is the typical "already downloading from that
 IP", so in reality RapidShare sees the SQUID IP and not the client's.
 How can I fix this?

 the relevant port configuration in squid.conf is:

 http_port 81 tproxy

 Iptables:

 iptables -t mangle -N DIVERT
 iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
 iptables -t mangle -A DIVERT -j MARK --set-mark 1
 iptables -t mangle -A DIVERT -j ACCEPT
 iptables -t mangle -A PREROUTING -p tcp --dport 3128 -j TPROXY
 --tproxy-mark 0x1/0x1 --on-port 81

 You have this rule ass-backwards.

 TPROXY is intended to intercept port 80 traffic, not port 3128 traffic,
 when the client is NOT configured to use the proxy. The HTTP request
 formats are noticeably different; it's trivially easy to detect those
 differences, and that is probably what rapidshare is doing.

 Please go back and use the http://wiki.squid-cache.org/Features/Tproxy4
 documentation and configuration example.


 ip rule add fwmark 1 lookup 100
 ip route add local 0.0.0.0/0 dev lo table 100

 echo 1  /proc/sys/net/ipv4/ip_forward


 Mikrotik:
 Have a rule in the firewall to redirect all traffic to port 80 of the
 SQUID to the IP, port 3128

 All clients create sessions PPPOE in Router Mikrotik

 May help?

 Regards

 Amos



Re: [squid-users] Squid 3.1.0.13 Speed Test - Upload breaks?

2009-08-10 Thread Henrik Lidström

Daniel skrev:

Kinkie,

I'm using the default settings, so I don't have any specific max 
request sizes specified. I guess I'll hold out until someone else running 3.1 
can test this.

Thanks!

-Original Message-
From: Kinkie [mailto:gkin...@gmail.com] 
Sent: Saturday, August 08, 2009 6:44 AM

To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3.1.0.13 Speed Test - Upload breaks?

Maybe the failure could depend on some specific settings, such as max
request size?

On 8/8/09, Heinz Diehl h...@fancy-poultry.org wrote:
  

On 08.08.2009, Daniel wrote:



Would anyone else using Squid mind doing this same bandwidth test and
seeing
if they have the same issue(s)?
  

It works flawlessly using both 2.7-STABLE6 and 3.0-STABLE18 here.






  

Squid Cache: Version 3.1.0.13

Working without a problem, tested multiple sites on the list.
Nothing special in the config except maybe pipeline_prefetch on

/Henrik


[squid-users] Squid and YahooMail

2009-08-10 Thread Rick Chisholm
what's up with Squid and https://login.yahoo.com?

Our marketing dept. at work needs access to Yahoo Analytics, but they
have to login via Yahoo! regular login.  Squid complains about a DNS
resolution issue but names the link as

http://443

It's quite odd.

-- 
Rick Chisholm
sysadmin
Parallel42
e. rchish...@parallel42.ca
m. 519-325-8630
w. www.parallel42.ca


Re: [squid-users] Squid and YahooMail

2009-08-10 Thread Amos Jeffries
On Mon, 10 Aug 2009 22:01:40 -0400, Rick Chisholm rchish...@parallel42.ca
wrote:
 what's up with Squid and https://login.yahoo.com?
 
 Our marketing dept. at work needs access to Yahoo Analytics, but they
 have to login via Yahoo! regular login.  Squid complains about a DNS
 resolution issue but names the link as
 
 http://443
 
 It's quite odd.

The strange link is due to some older Squid (3.x?) not generating the error
page link correctly for HTTPS.

The problem is still Squid being unable to perform DNS lookups, or
getting no results back for the domain.
Try a newer release if you can, and figure out why it's not getting any
DNS results back from the resolver.

Amos


Re: [squid-users] Script Check

2009-08-10 Thread michel

Henrik Nordstrom hen...@henriknordstrom.net ha escrito:


fre 2009-08-07 klockan 21:34 -0400 skrev mic...@casa.co.cu:


Using squid 2.6 at my work, I have a group of users who connect by
dial-up to a NAS, with a freeradius server for authentication. Each
time they log in, my users are assigned a dynamic IP address, making it
impossible to create permissions by IP address without authentication.


Ok.


I want it so that when squid gets a request from that block of IP
addresses, it runs a script that reads the username and IP address from
the freeradius server's radwho tool (which shows connected users and
their IP addresses), or from mysql, which can provide the same
information


The user= result interface of external acls is intended for exactly this
purpose.

What you need is a small script which reads IP addresses on stdin (one
at a time) and prints the following on stdout:

OK user=radiususername

if the user is authenticated via radius, or

ERR

if the user is not and should fall back on other authentication methods.

You can then plug this into Squid using external_acl_type, and bind an
acl to that using the external acl type. Remember to set ttl=nnn and
negative_ttl=nnn as suitable for your purpose.
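A minimal sketch of such a helper in Python (the session lookup is a placeholder; a real helper would shell out to radwho or query the radius accounting table in MySQL):

```python
#!/usr/bin/env python3
import sys

def lookup_user(ip):
    """Placeholder: return the radius username for this IP, or None.
    Replace with a radwho call or a MySQL query against radacct."""
    sessions = {'10.0.0.5': 'alice'}   # illustrative data only
    return sessions.get(ip)

def answer(ip):
    """Format one reply in the external_acl_type protocol."""
    user = lookup_user(ip)
    return 'OK user=%s' % user if user else 'ERR'

def main():
    # Squid writes one IP per line; replies must be flushed immediately.
    # Call main() when running this file as the squid helper.
    for line in sys.stdin:
        sys.stdout.write(answer(line.strip()) + '\n')
        sys.stdout.flush()
```

It would be wired up in squid.conf with something like `external_acl_type radius_ip ttl=60 negative_ttl=10 %SRC /usr/local/bin/radius_ip.py` (path and ttl values are examples), then bound to an acl via `acl radiususers external radius_ip`.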

Regards
Henrik





Hello

Could this script be in Perl?


Could you give me some example to guide me?

Sorry for the inconvenience

Thanks

--
Webmail, servicio de correo electronico
Casa de las Americas - La Habana, Cuba.



[squid-users] squid 3.1: How to setup a Squid SSL reverse proxy for a parent SSL Squid proxy?

2009-08-10 Thread fulan Peng
Hi,

I have a Squid reverse proxy running with SSL support.  People can
access it with https://domainA.com. No problem.
Now I want to set up another Squid proxy server to proxy it  with SSL support.
That means https://domainA -- https://domainB.

My configuration file for the parent is similar to this.
Please help to set up the child squid to proxy this parent.

https_port 443 cert=/usr/newrprgate/CertAuth/testcert.cert
key=/usr/newrprgate/CertAuth/testkey.pem
defaultsite=mywebsite.mydomain.com vhost

cache_peer 10.112.62.20 parent 80 0 no-query originserver login=PASS
name=websiteA

acl sites_server_1 dstdomain websiteA.mydomain.com
cache_peer_access websiteA allow sites_server_1
http_access allow sites_server_1

http_access deny all
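No answer appears in the thread, but a hedged sketch of what the child (domainB) side could look like, assuming the child terminates SSL itself and re-encrypts to the parent (all hostnames, ports and cert paths are placeholders, untested):

```
https_port 443 cert=/etc/squid/domainB.cert key=/etc/squid/domainB.key
  defaultsite=domainA.com vhost

# forward to the parent over SSL; 'ssl' encrypts the peer connection
cache_peer domainA.com parent 443 0 no-query originserver ssl login=PASS name=parentA

acl to_parent dstdomain domainA.com
cache_peer_access parentA allow to_parent
http_access allow to_parent
http_access deny all
```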


[squid-users] Squid crashes abnormally

2009-08-10 Thread Nyamul Hassan

Hi,

I've been using Squid 2.7.STABLE6 quite well till now.  However, for the 
past 4 - 5 days, it started crashing every time it finished loading the store 
directories (aufs + coss).  Looking through the cache.log, the following 
message is shown at the very end, right before the crash:


assertion failed: refresh.c:331: !sf.max

I tried to enable -k debug, but still nothing more than the message above 
looks suspicious.


I also downloaded the latest snapshot from the website (Aug/10), and still 
found the same error.  I tried googling this error, but the few results that 
I got were either too obscure for my understanding, or not conclusive.


Can someone please explain what is going wrong?

Regards
HASSAN