[squid-users] Transparent proxying and forwarding loop detected

2014-07-10 Thread Peter Smith
Hi list,

I'm running Squid 3.3 on Linux as part of a wireless hotspot solution.

The box has two network interfaces: one to the outside world, the
other a private LAN with IP 10.0.0.1. On the LAN I'm using CoovaChilli
as a captive portal.

I'd like to transparently intercept and cache web traffic from wifi
clients. Coova has a configuration option for the IP and port of an
optional proxy - all web traffic from wireless clients will be routed
through this. I've set it to 10.0.0.1:3128
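
For context (my own illustration, not taken from Coova's actual rule set):
hotspot gateways typically implement that proxy option with a NAT redirect
on the LAN side, roughly equivalent to the following, with eth1 standing in
for the 10.0.0.1 LAN interface:

iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
  -j DNAT --to-destination 10.0.0.1:3128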

Here's my squid config:

acl localnet src 10.0.0.0/255.0.0.0   # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow localnet
http_access deny all

http_port 10.0.0.1:3128 transparent
http_port 10.0.0.1:3127

coredump_dir /var/spool/squid3
refresh_pattern ^ftp:   1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern (Release|Packages(.gz)*)$  0   20% 2880
refresh_pattern .   0   20% 4320

Unfortunately this throws "WARNING: Forwarding loop detected" warnings
(and in the client's browser an "Access Denied" error from Squid) and
I can't figure out why.

Running Squid in debugging mode (level 2), here's what I see when one
of the clients generates some Windows-related traffic


2014/07/10 13:43:57.438| client_side.cc(2316) parseHttpRequest: HTTP
Client local=10.0.0.1:3128 remote=10.0.0.4:60976 FD 8 flags=33
2014/07/10 13:43:57.438| client_side.cc(2317) parseHttpRequest: HTTP
Client REQUEST:
-
GET /ncsi.txt HTTP/1.1
Connection: Close
User-Agent: Microsoft NCSI
Host: www.msftncsi.com


--
2014/07/10 13:43:57.449| client_side_request.cc(786)
clientAccessCheckDone: The request GET
http://www.msftncsi.com/ncsi.txt is ALLOWED, because it matched
'localnet'
2014/07/10 13:43:57.449| client_side_request.cc(760)
clientAccessCheck2: No adapted_http_access configuration. default:
ALLOW
2014/07/10 13:43:57.449| client_side_request.cc(786)
clientAccessCheckDone: The request GET
http://www.msftncsi.com/ncsi.txt is ALLOWED, because it matched
'localnet'
2014/07/10 13:43:57.450| forward.cc(121) FwdState: Forwarding client
request local=10.0.0.1:3128 remote=10.0.0.4:60976 FD 8 flags=33,
url=http://www.msftncsi.com/ncsi.txt
2014/07/10 13:43:57.451| peer_select.cc(289) peerSelectDnsPaths: Found
sources for 'http://www.msftncsi.com/ncsi.txt'
2014/07/10 13:43:57.451| peer_select.cc(290) peerSelectDnsPaths:  
always_direct = DENIED
2014/07/10 13:43:57.451| peer_select.cc(291) peerSelectDnsPaths:   
never_direct = DENIED
2014/07/10 13:43:57.451| peer_select.cc(295) peerSelectDnsPaths:  
   DIRECT = local=0.0.0.0 remote=10.0.0.1:3128 flags=1
2014/07/10 13:43:57.451| peer_select.cc(304) peerSelectDnsPaths:  
 timedout = 0
2014/07/10 13:43:57.454| http.cc(2204) sendRequest: HTTP Server
local=10.0.0.1:35439 remote=10.0.0.1:3128 FD 11 flags=1
2014/07/10 13:43:57.455| http.cc(2205) sendRequest: HTTP Server REQUEST:
-
GET /ncsi.txt HTTP/1.1
User-Agent: Microsoft NCSI
Host: www.msftncsi.com
Via: 1.1 c3me-pete (squid/3.3.8)
X-Forwarded-For: 10.0.0.4
Cache-Control: max-age=259200
Connection: keep-alive


--
2014/07/10 13:43:57.456| client_side.cc(2316) parseHttpRequest: HTTP
Client local=10.0.0.1:3128 remote=10.0.0.1:35439 FD 13 flags=33
2014/07/10 13:43:57.456| client_side.cc(2317) parseHttpRequest: HTTP
Client REQUEST:
-
GET /ncsi.txt HTTP/1.1
User-Agent: Microsoft NCSI
Host: www.msftncsi.com
Via: 1.1 c3me-pete (squid/3.3.8)
X-Forwarded-For: 10.0.0.4
Cache-Control: max-age=259200
Connection: keep-alive


--
2014/07/10 13:43:57.459| client_side_request.cc(786)
clientAccessCheckDone: The request GET
http://www.msftncsi.com/ncsi.txt is ALLOWED, because it matched
'localnet'
2014/07/10 13:43:57.459| client_side_request.cc(760)
clientAccessCheck2: No adapted_http_access configuration. default:
ALLOW
2014/07/10 13:43:57.459| client_side_request.cc(786)
clientAccessCheckDone: The request GET
http://www.msftncsi.com/ncsi.txt is ALLOWED, because it matched
'localnet'
2014/07/10 13:43:57.459| WARNING: Forwarding loop detected for:
GET /ncsi.txt HTTP/1.1
User-Agent: Microsoft NCSI
Host: www.msftncsi.com
Via: 1.1 c3me-pete (squid/3.3.8)
X-Forwarded-For: 10.0.0.4
Cache-Control: max-age=259200
Connection: keep-alive


2014/07/10 13:43:57.460| errorpage.cc(1281) BuildContent: No existing
error page language negotiated for ERR_ACCESS_DENIED. Using default
error file.
2014/07/10 13:43:57.463| client_side_reply.cc(1974)

[squid-users] Configuring Squid to pass through HTTP authentication

2008-09-23 Thread Peter Smith
Hi,

I manage a Squid installation that acts as a proxy (but not cache) for
a set of HTTP servers.  I recently encountered a need to password
protect a directory full of content using HTTP authentication
(preferably digest, but open to basic if necessary).  For various
reasons, it's more convenient to perform the authentication on the
proxied servers rather than in Squid.  Is there a way to configure
Squid to pass authentication headers through to the proxied servers?
Right now, it's stripping authentication-related headers set by
clients.  I tried using the "cache deny" directive with a
urlpath_regex ACL matching the protected directory, but it made no
difference.
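
(One approach I've seen mentioned, sketched here only as a guess and not
verified on this setup: if the proxied servers are defined as cache_peer
entries in a Squid 2.6+ style reverse proxy, the login=PASS option is meant
to pass the client's credentials through to the peer.  The hostname below
is a placeholder.)

cache_peer backend.example.com parent 80 0 no-query originserver login=PASS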

Thanks.


[squid-users] access.log no longer functioning, after --with-large-files

2006-10-16 Thread Peter Smith
I tested this before pushing to production, on a Redhat Linux v7.2 
server.  It worked fine.  I recompiled on the Redhat AS 3 (update 8) 
servers and installed.  Now Squid doesn't even open its access.log at 
all.  It does, however, open its cache.log .  This is a fully updated 
Redhat AS 3 config, simply i386, SMP, nothing really weird.  But 
currently, no logging.  I'm using a modified Redhat AS RPM spec file for 
compilation.  What follows is some of the configuration.  The only thing 
different is the addition of the --with-large-files and a recompile.  
Only other thing is I'm going from a functional Squid-2.5.STABLE6 to a 
(non-logging) Squid-2.6.STABLE4 .  Like I said, this same RPM is working 
fine on a test (Redhat v7.2) server.  There does not appear to be 
anything odd in the cache.log, other than naughty clients making some 
bogus requests.  The Squid(s) appear functional other than the logging.


%configure \
--program-prefix= \
--prefix=/usr \
--exec-prefix=/usr \
--bindir=/usr/bin \
--sbindir=/usr/sbin \
--sysconfdir=/etc \
--datadir=/usr/lib/squid \
--includedir=/usr/include \
--libdir=/usr/lib \
--libexecdir=/usr/libexec \
--localstatedir=/var \
--sharedstatedir=/usr/com \
--mandir=/usr/share/man \
--infodir=/usr/share/info \
--exec_prefix=/usr \
--bindir=/usr/sbin \
--libexecdir=/usr/lib/squid \
--localstatedir=/var \
--sysconfdir=/etc/squid \
--enable-poll \
--enable-snmp \
--enable-removal-policies=heap,lru \
--enable-storeio=aufs,coss,diskd,ufs,null \
--enable-delay-pools \
--enable-linux-netfilter \
--enable-carp \
--with-pthreads \
--enable-cache-digests \
--enable-underscores \
--enable-basic-auth-helpers=LDAP,NCSA,PAM,SMB,MSNT \
--with-large-files

Thank you,
Peter


[squid-users] Fixed--Re: [squid-users] access.log no longer functioning, after --with-large-files

2006-10-16 Thread Peter Smith

Fixed.  I added the following line to my /etc/squid/squid.conf file.

access_log /var/log/squid/access.log squid

This was after looking at the default squid.conf file for, on a hunch, 
any access.log config entries.  I don't think this was mentioned in the 
updates info on the website--I did look to see if anything new had 
happened between v2.5 and v2.6.  Perhaps I just missed it.  Since I did 
not re-write the squid.conf based on any updated, default, squid.conf, 
it seems strange that if there is *no* access_log line, squid 
defaults to *no access.log*.  This definitely bit me.  I hope it makes 
sense, but at least it is functioning now.  After lunch I'll do research 
to see if there is yet anything else I've missed.


Peter

Peter Smith wrote:

I tested this before pushing to production, on a Redhat Linux v7.2 
server.  It worked fine.  I recompiled on the Redhat AS 3 (update 8) 
servers and installed.  Now Squid doesn't even open its access.log at 
all.  It does, however, open its cache.log .  This is a fully updated 
Redhat AS 3 config, simply i386, SMP, nothing really weird.  But 
currently, no logging.  I'm using a modified Redhat AS RPM spec file 
for compilation.  What follows is some of the configuration.  The only 
thing different is the addition of the --with-large-files and a 
recompile.  Only other thing is I'm going from a functional 
Squid-2.5.STABLE6 to a (non-logging) Squid-2.6.STABLE4 .  Like I said, 
this same RPM is working fine on a test (Redhat v7.2) server.  There 
does not appear to be anything odd in the cache.log, other than 
naughty clients making some bogus requests.  The Squid(s) appear 
functional other than the logging.


%configure \
--program-prefix= \
--prefix=/usr \
--exec-prefix=/usr \
--bindir=/usr/bin \
--sbindir=/usr/sbin \
--sysconfdir=/etc \
--datadir=/usr/lib/squid \
--includedir=/usr/include \
--libdir=/usr/lib \
--libexecdir=/usr/libexec \
--localstatedir=/var \
--sharedstatedir=/usr/com \
--mandir=/usr/share/man \
--infodir=/usr/share/info \
--exec_prefix=/usr \
--bindir=/usr/sbin \
--libexecdir=/usr/lib/squid \
--localstatedir=/var \
--sysconfdir=/etc/squid \
--enable-poll \
--enable-snmp \
--enable-removal-policies=heap,lru \
--enable-storeio=aufs,coss,diskd,ufs,null \
--enable-delay-pools \
--enable-linux-netfilter \
--enable-carp \
--with-pthreads \
--enable-cache-digests \
--enable-underscores \
--enable-basic-auth-helpers=LDAP,NCSA,PAM,SMB,MSNT \
--with-large-files

Thank you,
Peter




Re: [squid-users] wpad.dat (proxy.pac) and testing

2006-10-16 Thread Peter Smith
Simply telnet to your webserver's port 80 and make a request like the 
following.


GET /proxy.pac HTTP/1.0<enter>
<enter>

Be sure to actually hit your ENTER key there--twice--where I have 
<enter>.  You can also request /wpad.dat in place of /proxy.pac.
You should anticipate serving both files (as you've mentioned already.)


The response, on my systems, is similar to the following.


GET /proxy.pac HTTP/1.0

HTTP/1.1 200 OK
Date: Mon, 16 Oct 2006 20:38:35 GMT
Server: Apache/2.0.46 (Red Hat)
Connection: close
Content-Type: application/x-ns-proxy-autoconfig


function FindProxyForURL(url, host) {
if (
 isPlainHostName( host ) ||
 dnsDomainIs( host, ".utsouthwestern.edu" ) ||
 host == "localhost" ||
 host == "127.0.0.1" ) {
  return "DIRECT"; }
else {
 return "PROXY proxy:3128; DIRECT"; }
}
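
If you'd rather not type it interactively, the same check can be scripted;
a rough equivalent (replace webserver with your host):

printf 'GET /proxy.pac HTTP/1.0\r\n\r\n' | nc webserver 80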


Hope that helps,
Peter

Simone Rossato wrote:


Hi everybody,
I know that is a theme that is not strictly related to squid, but I 
know that you're very expert in this arguments, so I hope that you can 
help.
We have made an our own wpad.dat... but I can't find a real way to 
test it! How can I check?
The only way is to check the squid log? Isn't There a way to test the 
file and see if the syntax is correct or no?

Someone can help me?

Many thanks,

Bye,

SR




Re: [squid-users] Question about access log write speed and a possible DOS-attack (client-side)

2006-10-11 Thread Peter Smith
Wow.  What a rookie I've turned out to be.  I hadn't even considered 
that being the reason for Squid shutting off.  Thankfully I haven't hit 
that in a very long time.  You are correct, both Squids were affected by 
their access.log file being 2147483647 (2^31 -1) bytes in size.


Now I'm off to look at the FAQs, etc, on perhaps compiling Squid w/ 
large-file-access..
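
In the meantime, rotating access.log well before it reaches 2GB avoids the
hard stop.  A rough sketch (the rotation count and cron schedule are just
examples, not what I actually run):

# squid.conf: keep 10 rotated generations
logfile_rotate 10
# cron: rotate nightly
0 0 * * * /usr/sbin/squid -k rotate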


Thanks, Henrik, you always come through.

Peter

Henrik Nordstrom wrote:


tis 2006-10-10 klockan 10:30 -0500 skrev Peter Smith:
 

Recently I had two of our four Squid [1] proxy servers die.  What 
appears to have happened is a user was making requests to the proxy so 
quickly, that it died with the following message.


FATAL: logfileWrite: /var/log/squid/access.log: (11) Resource 
temporarily unavailable
   



A quite common cause is access.log reaching the magic 2GB barrier of
32-bit applications..

Regards
Henrik
 



[squid-users] Question about access log write speed and a possible DOS-attack (client-side)

2006-10-10 Thread Peter Smith
Recently I had two of our four Squid [1] proxy servers die.  What 
appears to have happened is a user was making requests to the proxy so 
quickly, that it died with the following message.


FATAL: logfileWrite: /var/log/squid/access.log: (11) Resource 
temporarily unavailable

Squid Cache (Version 2.5.STABLE12): Terminated abnormally.

I'm thinking that if a client is able to, possibly, overload Squid's 
main disks (or whatever drives the access log is being written to), it 
may just simply shut down.  Am I correct?  At the moment Squid shut down, 
it looks like this client was making many requests, very fast.  So my 
question is, is there a finite point at which there are too many 
requests/sec for Squid to log and thus cause Squid to die?  And finally, 
how can you keep this from happening?  Perhaps there needs to be a 
feedback loop between the logging mechanism and the polling 
interface--so that Squid will be held in check by its logging speed?


Thanks,
Peter Smith

[1]  Squid Cache (Version 2.5.STABLE12)


Re: [squid-users] throughput limitation from cache

2006-01-19 Thread Peter Smith
Richard, I was wondering if you've gotten anywhere with this?  I did 
some testing on my fairly busy squid cache..  Here are the results, from 
Squid's perspective (access.log)..


stimeA  47639 clientA TCP_MISS/200 49075472 GET 
http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.14.tar.gz - 
DIRECT/204.152.191.5 application/x-gzip
stimeB  50438 clientA TCP_HIT/200 49075479 GET 
http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.14.tar.gz - NONE/- 
application/x-gzip
stimeC  44111 clientA TCP_HIT/200 49075480 GET 
http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.14.tar.gz - NONE/- 
application/x-gzip
stimeD  39758 clientA TCP_HIT/200 49075480 GET 
http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.14.tar.gz - NONE/- 
application/x-gzip


stime is basically the Squid server's system time.  If I take the 2nd 
value, the reply length (490754XX bytes), and divide it by the first 
value, the service time in milliseconds (converted to seconds), I get 
the following rates.


stimeA 1030153 B/s
stimeB  972986 B/s
stimeC  1112545 B/s
stimeD  1234354 B/s
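
To run the same arithmetic over a batch of log lines, something like this
awk one-liner should do it (assuming the native access.log format above,
where field 2 is the elapsed time in ms and field 5 is the byte count):

awk '$2 > 0 { printf "%s %.0f B/s\n", $7, $5 / ($2 / 1000) }' access.log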

What do you see on your system(s) ?

I was using the following client command as a test...

http_proxy=http://proxy:3128 wget 
http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.14.tar.gz -O- > 
/dev/null


After you have it cached on the proxy, you might try this *on* the proxy 
itself.


http_proxy=http://localhost:3128 wget 
http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.14.tar.gz -O- > 
/dev/null


Peter


Re: [squid-users] Re:

2006-01-19 Thread Peter Smith

Try changing the line

{return "PROXY proxy1:3128; PROXY proxy2:3128";}

to

{return "PROXY proxy1:3128; PROXY proxy2:3128;";}

HTH,
Peter



Re: [squid-users] How big should be cache_dir ?

2006-01-12 Thread Peter Smith

Martin Sevigny wrote:


Hello all,
snip
- is it better to have 10 cache-dir of 100GB or 1 cache-dir of 1TB 
(for instance)?


- expected average size of entries in the cache will be around 100KB, 
which means that a 100GB cache will hold around 1 million entries... 
are there any issues (memory?) with these numbers? I've tested 100 000 
entries with success (and very good speed) but 1 million or more?




I could be wrong, but I think the number of directories is configurable 
so that you as an administrator can have greater control over the 
layout of the cache-dir..  It would depend on how often you access the 
cache-dir, what type of filesystem you are running on it, and also how 
many files you expect to have in each directory.  Some filesystems 
prefer different structures, and some even have limits (for instance the 
maximum number of files in a directory.)  All of these things can 
determine how quickly Squid can rebuild or shutdown the cache, and open, 
create, or remove cache files..  For your setup I'd imagine you probably 
don't need to worry too much.
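
For what it's worth, the knobs in question are the last two numbers on the
cache_dir line, the L1 and L2 directory counts; for example (the size and
path here are just placeholders):

cache_dir aufs /cache1 10240 16 256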


HTH,
Peter


Re: [squid-users] throughput limitation from cache

2006-01-12 Thread Peter Smith

Richard Mittendorfer wrote:


Hello *,

When downloading a cached file from the local squid, I just get about
250 - 280kB/s. Even on localhost. Is this a limitation with diskd
serving files from cache or some internal limit? I also tried aufs, but
didn't get a better rate. I found a thread here about this, but it got
more into a diskd/aufs discussion :) and didn't provide a solution or
explain it. 
It doesn't look like it's disk/system related, vmstat/top doesn't show

high load.

linux 2.6 celeron 600MHz
squid-2.5.STABLE12 Debian GNU/Linux
2 x cache_dir diskd 1G 8 128 on 2 x SCSI reiserfs
32M cache_mem, acls, .. need more info?

TIA, ritch
 



I too have experienced this. I believe this is because Squid has built-in 
safeguards to keep users from flooding the disk and to even the load 
across all disks for all users. Another reason is Squid handles all 
requests in sequential order--client requests, server requests, and 
memory/disk cache reads/writes.. It is like a virtualized network 
handler or VM. It really doesn't want one client to kill all others..


HTH,
Peter


Re: [squid-users] throughput limitation from cache

2006-01-12 Thread Peter Smith

Peter Smith wrote:


Richard Mittendorfer wrote:


Hello *,

When downloading a cached file from the local squid, I just get about
250 - 280kB/s. Even on localhost. Is this a limitation with diskd
serving files from cache or some internal limit? I also tried aufs, but
didn't get a better rate. I found a thread here about this, but it got
more into a diskd/aufs discussion :) and didn't provide a solution or
explain it. It doesn't look like it's disk/system related, 
vmstat/top doesn't show

high load.

linux 2.6 celeron 600MHz
squid-2.5.STABLE12 Debian GNU/Linux
2 x cache_dir diskd 1G 8 128 on 2 x SCSI reiserfs
32M cache_mem, acls, .. need more info?

TIA, ritch
 



I too have experienced this. I believe this is because Squid has 
built-in safeguards to keep users from flooding the disk and to even 
the load across all disks for all users. Another reason is Squid 
handles all requests in sequential order--client requests, server 
requests, and memory/disk cache reads/writes.. It is like a 
virtualized network handler or VM. It really doesn't want one client 
to kill all others..


HTH,
Peter



Btw, it still does have a lot to do with diskd/aufs...  Read here  
http://squid-docs.sourceforge.net/latest/html/x220.html#AEN318


Peter


Re: [squid-users] Squid Woes

2006-01-12 Thread Peter Smith

Douglas Sterner wrote:

Using Squid 2.5 Stable 9 on Suse 9.3 and NTLM auth we are having 
trouble rendering larger pdf files when using automatically detect on 
IE 6. If we hard code the proxy it seems to work fine. Could someone 
look over my new wpad.dat and point out my errors. IP's were changed 
to protect the innocent.


Thanks

Douglas Sterner


function FindProxyForURL(url, host)
{
//if name is in our domain, or starts with 192.168.999. don't use proxy
if(isPlainHostName(host) ||
dnsDomainIs(host,".mydomain.lcl") ||
host.substring(0,12)=="192.168.999." ))
return "DIRECT";
else if (isInNet(myIpAddress(), "192.168.998.0", "255.255.255.0"))//A
return "PROXY 192.168.998.3:3128";
else if (isInNet(myIpAddress(), "192.168.997.0", "255.255.255.0"))//B
return "PROXY 192.168.997.3:3128";
else if (isInNet(myIpAddress(), "192.168.996.0", "255.255.255.0"))//C
return "PROXY 192.168.996.3:3128";
else if (isInNet(myIpAddress(), "192.168.995.0", "255.255.255.0"))//D
return "PROXY 192.168.996.3:3128";
else if (isInNet(myIpAddress(), "192.168.994.0", "255.255.255.0"))//E
return "PROXY 192.168.996.3:3128";
else
return "DIRECT";
}

Have you tried encapsulating each if/else if block within '{' and '}' ?  
Like follows.


if(isPlainHostName(host) ||
dnsDomainIs(host,".mydomain.lcl") ||
host.substring(0,12)=="192.168.99." )) {
 return "DIRECT"; }

etc etc.

Peter


[squid-users] acl to combat Sober-Z or Sober.X

2006-01-05 Thread Peter Smith
My attempt at potentially controlling Sober-Z or Sober.X on my network 
(based off of the Symantec data [1] ):


squid.conf:
acl virus url_regex -i "/etc/squid/virus"
http_access deny virus

/etc/squid/virus:
http://home.pages.at/Gruppfelhuber(/.*)?$
http://people.freenet.de(/.*)?$
http://scifi.pages.at(/.*)?$
http://home.pages.at(/.*)?$
http://free.pages.at(/.*)?$
http://home.arcor.de(/.*)?$
http://free.pages.at/jexsxjpsccpjz(/.*)?$
http://home.arcor.de/sjgdqverwom(/.*)?$
http://home.arcor.de/chjzfoay(/.*)?$
http://home.arcor.de/gtvqgphqsgpjk(/.*)?$
http://home.arcor.de/waxshhzsdwsz(/.*)?$
http://home.arcor.de/zhowmxwozay(/.*)?$
http://people.freenet.de/mookflolfctm(/.*)?$
http://people.freenet.de/aohobygi(/.*)?$
http://people.freenet.de/wlpgskmv(/.*)?$
http://people.freenet.de/svclxatmlhavj(/.*)?$
http://people.freenet.de/jpjpoptwql(/.*)?$
http://people.freenet.de/iohgdhkzfhdzo(/.*)?$
http://people.freenet.de/eetbuviaebe(/.*)?$
http://scifi.pages.at/vvvjkhmbgnbbw(/.*)?$
http://home.pages.at/twfofrfzlugq(/.*)?$
http://free.pages.at/sfhfksjzsfu(/.*)?$
http://home.arcor.de/qlqqlbojvii(/.*)?$
http://home.arcor.de/fulmxct(/.*)?$
http://home.arcor.de/fowclxccdxn(/.*)?$
http://home.arcor.de/lnzzlnbk(/.*)?$
http://home.arcor.de/rprpgbnrppb(/.*)?$
http://people.freenet.de/iufilfwulmfi(/.*)?$
http://people.freenet.de/xbqyosoe(/.*)?$
http://people.freenet.de/nkxlvcob(/.*)?$
http://people.freenet.de/svclxatmlhavj(/.*)?$
http://people.freenet.de/bnymomspyo(/.*)?$
http://people.freenet.de/jbevgezfmegwy(/.*)?$
http://people.freenet.de/gdvsotuqwsg(/.*)?$
http://scifi.pages.at/eveocczmthmmq(/.*)?$
http://home.pages.at/doarauzeraqf(/.*)?$
http://free.pages.at/hsdszhmoshh(/.*)?$
http://home.arcor.de/dyddznydqir(/.*)?$
http://home.arcor.de/iyxegtd(/.*)?$
http://home.arcor.de/oakmanympnw(/.*)?$
http://home.arcor.de/riggiymd(/.*)?$
http://home.arcor.de/jhjhgquqssq(/.*)?$
http://people.freenet.de/zmnjgmomgbdz(/.*)?$
http://people.freenet.de/smtmeihf(/.*)?$
http://people.freenet.de/qisezhin(/.*)?$
http://people.freenet.de/fseqepagqfphv(/.*)?$
http://people.freenet.de/urfiqileuq(/.*)?$
http://people.freenet.de/wjpropqmlpohj(/.*)?$
http://people.freenet.de/mclvompycem(/.*)?$
http://scifi.pages.at/zzzvmkituktgr(/.*)?$
http://home.pages.at/npgwtjgxwthx(/.*)?$
http://free.pages.at/emcndvwoemn(/.*)?$
http://home.arcor.de/ocllceclbhs(/.*)?$
http://home.arcor.de/dixqshv(/.*)?$
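
A quick way to check the ACL behaves as expected (my own test command,
assuming the proxy answers on proxy:3128) is to request one of the listed
URLs through it and look for a 403:

http_proxy=http://proxy:3128 curl -sI http://home.arcor.de/dixqshv | head -1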

Comments?

HTH,
Peter

[1]  http://www.sarc.com/avcenter/venc/data/[EMAIL PROTECTED]


Re: [squid-users] acl to combat Sober-Z or Sober.X

2006-01-05 Thread Peter Smith
I don't know if I read the Symantec data correctly as it was a tiny bit 
ambiguous, but it is possible that only the 5 domains listed are needed, 
rather than any pseudo-specific URLs..


Peter

Peter Smith wrote:

My attempt at potentially controlling Sober-Z or Sober.X on my network 
(based off of the Symantec data [1] ):


squid.conf:
acl virus url_regex -i "/etc/squid/virus"
http_access deny virus

/etc/squid/virus:
http://home.pages.at/Gruppfelhuber(/.*)?$
http://people.freenet.de(/.*)?$
http://scifi.pages.at(/.*)?$
http://home.pages.at(/.*)?$
http://free.pages.at(/.*)?$
http://home.arcor.de(/.*)?$
http://free.pages.at/jexsxjpsccpjz(/.*)?$
http://home.arcor.de/sjgdqverwom(/.*)?$
http://home.arcor.de/chjzfoay(/.*)?$
http://home.arcor.de/gtvqgphqsgpjk(/.*)?$
http://home.arcor.de/waxshhzsdwsz(/.*)?$
http://home.arcor.de/zhowmxwozay(/.*)?$
http://people.freenet.de/mookflolfctm(/.*)?$
http://people.freenet.de/aohobygi(/.*)?$
http://people.freenet.de/wlpgskmv(/.*)?$
http://people.freenet.de/svclxatmlhavj(/.*)?$
http://people.freenet.de/jpjpoptwql(/.*)?$
http://people.freenet.de/iohgdhkzfhdzo(/.*)?$
http://people.freenet.de/eetbuviaebe(/.*)?$
http://scifi.pages.at/vvvjkhmbgnbbw(/.*)?$
http://home.pages.at/twfofrfzlugq(/.*)?$
http://free.pages.at/sfhfksjzsfu(/.*)?$
http://home.arcor.de/qlqqlbojvii(/.*)?$
http://home.arcor.de/fulmxct(/.*)?$
http://home.arcor.de/fowclxccdxn(/.*)?$
http://home.arcor.de/lnzzlnbk(/.*)?$
http://home.arcor.de/rprpgbnrppb(/.*)?$
http://people.freenet.de/iufilfwulmfi(/.*)?$
http://people.freenet.de/xbqyosoe(/.*)?$
http://people.freenet.de/nkxlvcob(/.*)?$
http://people.freenet.de/svclxatmlhavj(/.*)?$
http://people.freenet.de/bnymomspyo(/.*)?$
http://people.freenet.de/jbevgezfmegwy(/.*)?$
http://people.freenet.de/gdvsotuqwsg(/.*)?$
http://scifi.pages.at/eveocczmthmmq(/.*)?$
http://home.pages.at/doarauzeraqf(/.*)?$
http://free.pages.at/hsdszhmoshh(/.*)?$
http://home.arcor.de/dyddznydqir(/.*)?$
http://home.arcor.de/iyxegtd(/.*)?$
http://home.arcor.de/oakmanympnw(/.*)?$
http://home.arcor.de/riggiymd(/.*)?$
http://home.arcor.de/jhjhgquqssq(/.*)?$
http://people.freenet.de/zmnjgmomgbdz(/.*)?$
http://people.freenet.de/smtmeihf(/.*)?$
http://people.freenet.de/qisezhin(/.*)?$
http://people.freenet.de/fseqepagqfphv(/.*)?$
http://people.freenet.de/urfiqileuq(/.*)?$
http://people.freenet.de/wjpropqmlpohj(/.*)?$
http://people.freenet.de/mclvompycem(/.*)?$
http://scifi.pages.at/zzzvmkituktgr(/.*)?$
http://home.pages.at/npgwtjgxwthx(/.*)?$
http://free.pages.at/emcndvwoemn(/.*)?$
http://home.arcor.de/ocllceclbhs(/.*)?$
http://home.arcor.de/dixqshv(/.*)?$
snip




[squid-users] Weighted redundant proxy auto-config script.

2004-08-02 Thread Peter Smith
For those who are interested, I have crafted a dynamic 
Auto-Configuration script for use in Apache using Perl.  Based on a 
random number, it will provide a user with a listing of proxies which 
may be weighted to determine their allocation.

Please see http://www.squid-cache.org/Doc/FAQ/FAQ-5.html#ss5.4
My script consists, basically, of two files--proxy.pac.pl and proxy; 
the names are mostly arbitrary.

proxy.pac.pl:
#!/usr/bin/perl
# Emit the PAC content type when called as *.pac, plain HTML otherwise.
if( $ENV{ SCRIPT_NAME } =~ /\.pac$/ ) {
print "Content-type: application/x-ns-proxy-autoconfig\n";
} else {
print "Content-type: text/html\n";
}
print "\n";
my %proxy = (), $total = 0;
# Read proxy/weight pairs from the 'proxy' file.
open A, "proxy" or die "Can not access proxy list.\n";
while( <A> ) {
chomp;
/^(\S+)\s+(\S+)$/;
$proxy{ $1 } = $2;
}
foreach $p ( sort keys %proxy ) {
$total += $proxy{ $p };
}
# Pick a random point and start the proxy list with the slice it lands in.
$val = rand() * $total;
$last = 0;
foreach $p ( sort keys %proxy ) {
if( $val < ( $proxy{ $p } + $last ) && $val >= $last ) {
 $primary=$p;
 push @list, $p;
} elsif( $val < ( $proxy{ $p } + $last ) ) {
 push @list, $p;
} elsif( $val >= $last ) {
 push @tail, $p;
}
$last += $proxy{ $p };
}
for( $i=0; $i<=$#tail; $i++ ) {
push @list, $tail[ $i ];
}
print '
function FindProxyForURL(url, host)
   {
   if (isPlainHostName(host) ||
dnsDomainIs(host, ".domain.com") &&
!localHostOrDomainIs(host, "www.domain.com"))
   return "DIRECT";
   else
';
print '   return "';
for( $i=0; $i<=$#list; $i++ ) {
print "PROXY ".$list[ $i ].":3128; ";
}
print 'DIRECT";
}
';
proxy:
proxy1.domain.com:8080 25
proxy2.domain.com:8081 75
The 'proxy' file simply contains proxy entries with a weight separated 
by a space. The weight value is arbitrary, the weights are added 
together to determine the value to be randomized against.
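
A simple sanity check (the URL is a placeholder, assuming the script is
published as /proxy.pac on your web server): fetch it repeatedly and count
which proxy comes back first.  If I've got the weighting right, with the
25/75 weights above the second proxy should lead roughly three times out
of four.

for i in $(seq 1 20); do
  curl -s http://yourserver/proxy.pac | grep -o 'PROXY [^;]*' | head -1
done | sort | uniq -c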

I hope this proves useful to someone.  As has been mentioned before 
there are many other scripts for doing proxy selection.  Please see 
http://www.squid-cache.org/Doc/FAQ/FAQ-5.html#ss5.5

Peter




[squid-users] CONNECTs limited to 15 minutes (or 900000 milliseconds)? (Squid 2.5 Stable 4)

2004-05-20 Thread Peter Smith
I am currently seeing this with Squid 2.5.STABLE4 and also Squid 
3.0-PRE3-20040519 .  Basically I am attempting to rsync a copy of a 
Fedora Linux mirror.  After 15 minutes, the connection dies.  I can not 
find any occurrence of 900 in the source or squid.conf.default.  Why 
is this?

References:
http://www.squid-cache.org/mail-archive/squid-users/200402/0674.html
http://www.squid-cache.org/Doc/FAQ/
Thanks in advance,
Peter Smith




[squid-users] Never mind... Re: [squid-users] CONNECTs limited to 15 minutes (or 900000 milliseconds)? (Squid 2.5 Stable 4)

2004-05-20 Thread Peter Smith
Seems like this might be an OS or Squid conf issue.  I have a Redhat 
Enterprise Linux 3 machine running Squid 2.5.STABLE4 which has a 
different config than the other S 2.5.STABLE4 and S 3.0 that I was using 
which were 15-minute limited.  It is having NO PROBLEMS and has an 
rsync running for > 30 minutes.  I'll figure it out now that I have one 
that works.
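
If it helps anyone searching the archive later, the first things I plan to
diff between the working and non-working configs are the timeout
directives; read_timeout in particular defaults to 15 minutes, which
matches the symptom.  Roughly (the path is a guess at where each box keeps
its conf):

grep -E '^(read_timeout|request_timeout|client_lifetime|pconn_timeout)' /etc/squid/squid.conf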

Peter Smith
Peter Smith wrote:
I am currently seeing this with Squid 2.5.STABLE4 and also Squid 
3.0-PRE3-20040519 .  Basically I am attempting to rsync a copy of a 
Fedora Linux mirror.  After 15 minutes, the connection dies.  I can 
not find any occurrence of 900 in the source or squid.conf.default 
.  Why is this?

References:
http://www.squid-cache.org/mail-archive/squid-users/200402/0674.html
http://www.squid-cache.org/Doc/FAQ/
Thanks in advance,
Peter Smith





Re: [squid-users] ftp file sizes, squid, large file sizes, and listings--minor problem

2004-04-05 Thread Peter Smith
So are you suggesting that perhaps the ncftp I am using in the example 
is compiled on a 64 bit platform?  I've seen plenty of cases where 
32-bit apps are able to at least stat files larger than 2GB.  When I get a free minute 
I'll compare the code bases.  I imagine you have squid using standard 
file structs as far as ftp file info goes.  However, ncftp seems to 
handle it happily.  And besides, it looks like squid would, but the 
number goes negative.  Any possibility of there being a _signed_ value 
in there?  Or maybe that is on output only?

Peter

Henrik Nordstrom wrote:

On Fri, 2 Apr 2004, Peter Smith wrote:

 

I just noticed that it looks like Squid might not handle large file 
sizes correctly in its FTP interface:
   

Actually not limited to the FTP interface..

If Squid is compiled on a 32 bit platform with 32-bit file I/O then only 
filesizes up to 2GB are supported.

Regards
Henrik
 





[squid-users] ftp file sizes, squid, large file sizes, and listings--minor problem

2004-04-02 Thread Peter Smith
I just noticed that it looks like Squid might not handle large file 
sizes correctly in its FTP interface:

** Via a Linux client:

[EMAIL PROTECTED] ~]# ncftp 
ftp://download.fedora.redhat.com/pub/fedora/linux/core/test/1.91/i386/iso/
NcFTP 3.1.6 (Aug 25, 2003) by Mike Gleason (http://www.NcFTP.com/contact/).
Connecting to 209.132.176.20...
Fedora FTP server ready. All transfers are logged.
Logging in...
Login successful. Have fun.
Sorry, I don't have help.
Logged in to download.fedora.redhat.com.
Current remote directory is /pub/fedora/linux/core/test/1.91/i386/iso.
ncftp ...ore/test/1.91/i386/iso > dir
-rw-r--r--   1 ftp   ftp    514031616   Mar 25 21:33   FC2-test2-SRPMS-disc1.iso
-rw-r--r--   1 ftp   ftp    514031616   Mar 25 21:36   FC2-test2-SRPMS-disc2.iso
-rw-r--r--   1 ftp   ftp    514031616   Mar 25 21:38   FC2-test2-SRPMS-disc3.iso
-rw-r--r--   1 ftp   ftp    514031616   Mar 25 21:40   FC2-test2-SRPMS-disc4.iso
-rw-r--r--   1 ftp   ftp   4306010112   Mar 25 21:56   FC2-test2-i386-DVD.iso
-rw-r--r--   1 ftp   ftp    665059328   Mar 25 21:23   FC2-test2-i386-disc1.iso
-rw-r--r--   1 ftp   ftp    665550848   Mar 25 21:26   FC2-test2-i386-disc2.iso
-rw-r--r--   1 ftp   ftp    665747456   Mar 25 21:30   FC2-test2-i386-disc3.iso
-rw-r--r--   1 ftp   ftp    180813824   Mar 25 21:31   FC2-test2-i386-disc4.iso
-rw-r--r--   1 ftp   ftp          769   Mar 26 01:37   MD5SUM
ncftp ...ore/test/1.91/i386/iso > quit
[EMAIL PROTECTED] ~]#

** Via Squid:

FTP Directory: 
ftp://download.fedora.redhat.com/pub/fedora/linux/core/test/1.91/i386/iso/
Parent Directory
FC2-test2-SRPMS-disc1.iso. . . . Mar 25 21:33 501984k
FC2-test2-SRPMS-disc2.iso. . . . Mar 25 21:36 501984k
FC2-test2-SRPMS-disc3.iso. . . . Mar 25 21:38 501984k
FC2-test2-SRPMS-disc4.iso. . . . Mar 25 21:40 501984k
FC2-test2-i386-DVD.iso . . . . . Mar 25 21:56 -2097152k
FC2-test2-i386-disc1.iso . . . . Mar 25 21:23 649472k
FC2-test2-i386-disc2.iso . . . . Mar 25 21:26 649952k
FC2-test2-i386-disc3.iso . . . . Mar 25 21:30 650144k
FC2-test2-i386-disc4.iso . . . . Mar 25 21:31 176576k
MD5SUM . . . . . . . . . . . . . Mar 26 01:37 1k

Generated Fri, 02 Apr 2004 21:50:32 GMT by squid.swmed.edu 
(squid/2.5.STABLE4)

Peter




[squid-users] resend: squidGuard and/or authentication?

2004-03-09 Thread Peter Smith
I understand this question has been asked before, and without any response.

http://www.squid-cache.org/mail-archive/squid-users/200310/0699.html

Any pointers?  I'm thinking of tearing up Squid 2.5.STABLE4 code to get this 
to work.  Probably alter the basic authentication code: if the user is 
authenticated to the NOBLOCK group, then do not use the redirector 
(squidGuard); if the user is not authenticated, then route traffic to the 
redirector; if the redirector indeed redirects it, then process the 
authentication; if the user authenticates correctly, then go back to the top 
and let them through; if the user fails authentication, send them to the 
redirected page.

Peter

Original message:
Hello all.  Just a quick question--I have a need to funnel proxy traffic 
through squidGuard and if a site is blacklisted, allow access to the 
site if the user can provide appropriate authentication.  If the site is 
not blacklisted it will go through anyways regardless of authentication.

I'm currently using redirector_access, redirect_program, and 
redirect_children to implement squidGuard.  I'm looking at external 
acl as a possibility for handling this--perhaps a Perl app that runs 
the URL through squidGuard and if it is blacklisted then consider 
whether or not the user is currently authenticated.  If they are not 
then return the 407 to get authentication, if they are then pass them 
through.

Any ideas are greatly appreciated.

Peter Smith


Re: [squid-users] resend: squidGuard and/or authentication?

2004-03-09 Thread Peter Smith
Would the access controls you are talking of include using the 
external_acl_type?  (http://squid.sourceforge.net/external_acl)

I've toyed with the idea of using a script in an external acl which 
would consider URLs using squidGuard and provide ERR/OK based on that, 
with a combination of the auth_param basic and its associated ACL with 
poor results.

Thanks,
Peter
Henrik Nordstrom wrote:

On Tue, 9 Mar 2004, Peter Smith wrote:

 

I'm thinking of tearing up Squid 2.5.STABLE4 code to get this to work.  
Probably alter the basic authentication code: if the user is
authenticated to the NOBLOCK group, then do not use the redirector
(squidGuard); if the user is not authenticated, then route traffic to
the redirector; if the redirector indeed redirects it then process the
authentication; if the user authenticates correctly then go back to the
top and let them through; if the user fails authentication send them to
the redirected page.
   

Use Squid access controls instead of SquidGuard and you get this
capability to selectively request authentication for free.
Regards
Henrik
 



Re: [squid-users] Tuning Squid for large user base

2004-03-08 Thread Peter Smith
Henrik, I am again (as always) impressed by your ability to provide 
good and accurate support for Squid.  I applaud your efforts.  Thank 
you!  Btw, what else do you do?

Peter Smith

Henrik Nordstrom wrote:

On Fri, 5 Mar 2004, Peter Smith wrote:

 

Another thing that might be driving up the # of file desc usage is the 
"half_closed_clients off" you have.  On my squids I run with this as in 
the default config, on.
   

Setting half_closed_clients off considerably reduces the file descriptor 
usage by Squid.

Leaving it in the default on setting may cause excessively high 
file descriptor usage if a lot of clients access non-responsive sites etc.

 

Also, on my systems we are running named instead of using squid's fqdn 
cache, this may help things out a bit--ymmv..
   

Having a DNS server nearby definitely helps. There is not a big difference 
in having the DNS server on the LAN or on the Squid server itself.

 

I notice you have a redirect_children 30--are you running a 
redirector?  This can significantly alter your numbers.
   

Indeed. How much depends on the redirector used.

 

I'll give you my settings.  Again I am very conservative on the 
cache_dir size as I think this will eat up quite a bit of RAM to keep 
in-memory-indices.
   

The guideline for memory usage found in the Squid FAQ chapter on memory 
usage is quite good.

Regards
Henrik
 



[squid-users] squidGuard and/or authentication?

2004-03-05 Thread Peter Smith
Hello all.  Just a quick question--I have a need to funnel proxy traffic 
through squidGuard and if a site is blacklisted, allow access to the 
site if the user can provide appropriate authentication.  If the site is 
not blacklisted it will go through anyways regardless of authentication.

I'm currently using redirector_access, redirect_program, and 
redirect_children to implement squidGuard.  I'm looking at external 
acl as a possibility for handling this--perhaps a Perl app that runs 
the URL through squidGuard and if it is blacklisted then consider 
whether or not the user is currently authenticated.  If they are not 
then return the 407 to get authentication, if they are then pass them 
through.

Any ideas are greatly appreciated.

Peter Smith


Re: [squid-users] Tuning Squid for large user base

2004-03-05 Thread Peter Smith
   Largest file desc currently in use:   946
   Number of file desc currently in use:  352
   Files queued for open:   1
   Available number of file descriptors: 3743
   Reserved number of file descriptors:   100
   Store Disk files open:   1
Internal Data Structures:
   343410 StoreEntries
   217719 StoreEntries with MemObjects
   217703 Hot Object Cache Items
   337566 on-disk objects
HTH,
Peter Smith
James MacLean wrote:

Hi Folks,

We have tried many suggestions that were found in mail-archives and on 
different sites, but are having a difficult time getting Squid to handle 
our workload. The obvious example of this to me is if we set 
no_cache deny all, we can start going to sites and notice that they keep 
getting slower.

The Squid server is a 2xP4 Xeon with hyperthreading 2.4Ghz with 1G of RAM.  
They have SCSI drives which we have tried various cache sizes on. There is
no load on the server before squid begins, but it of course does drive up
the CPU as it starts to churn.

diskd seems to work the best, but once we start using it, the page load 
delays increase to a point that users are noticing. This setup is used in 
a transparent proxy via the typical Linux redirect.

In this Educational setting we have approx. 12,000 active student/staff if
I accept the Largest file desc currently in use: param, so we realize we
need some power to deal with the actual disk cache effectiveness. There
are actually more clients in total, but this seems to give a good guess of
current activity.
Our Internet feed is a 6Mbs max. 

Would like to ask if there are any tuning suggestions we can try to boost 
Squid in this environment?

2.5-STABLE5 cachemgr output follows for a short run :

Connection information for squid:
Number of clients accessing cache:  799
Number of HTTP requests received:   261742
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   5769.8
Average ICP messages per minute since start:0.0
Select loop called: 8599 times, 316.530 ms avg
Cache information for squid:
Request Hit Ratios: 5min: 17.9%, 60min: 13.7%
Byte Hit Ratios:5min: 3.8%, 60min: 1.9%
Request Memory Hit Ratios:  5min: 25.1%, 60min: 30.8%
Request Disk Hit Ratios:5min: 32.0%, 60min: 24.9%
Storage Swap size:  854948 KB
Storage Mem size:   128036 KB
Mean Object Size:   10.22 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   2.37608  1.71839
Cache Misses:  2.79397  1.91442
Cache Hits:0.68577  0.61549
Near Hits: 2.37608  1.91442
Not-Modified Replies:  0.64968  0.49576
DNS Lookups:   0.04048  0.03374
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:2721.839 seconds
CPU Time:   506.162 seconds
CPU Usage:  18.60%
CPU Usage, 5 minute avg:78.16%
CPU Usage, 60 minute avg:   18.46%
Process Data Segment Size via sbrk(): 219948 KB
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
Total space in arena:  219948 KB
Ordinary blocks:   219855 KB375 blks
Small blocks:   0 KB  0 blks
Holding blocks:  6788 KB  3 blks
Free Small blocks:  0 KB
Free Ordinary blocks:  92 KB
Total in use:  226643 KB 100%
Total free:92 KB 0%
Total size:226736 KB
Memory accounted for:
Total accounted:   208789 KB
memPoolAlloc calls: 38921604
memPoolFree calls: 38222649
File descriptor usage for squid:
Maximum number of file descriptors:   32768
Largest file desc currently in use:   1
Number of file desc currently in use: 3641
Files queued for open:   1
Available number of file descriptors: 29126
Reserved number of file descriptors:   100
Store Disk files open:  46
Internal Data Structures:
 88190 StoreEntries
 17725 StoreEntries with MemObjects
 17534 Hot Object Cache Items
 83648 on-disk objects
Lines from sample squid.conf:

request_body_max_size 1000 MB
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
icp_port 0
cache_mem  128 MB
cache_swap_low  90
cache_swap_high 95
cache_dir diskd /var/cache/squid/ 2000 64 64
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
pid_filename /var/run/squid.pid
debug_options ALL,1
log_fqdn

[squid-users] squid peer ID and Forwarding loop detected warnings

2004-02-27 Thread Peter Smith
I have 4 load-balanced Round-Robin DNS'd squid caches.  From time to 
time I have to IP alias one of the squids onto another squid (let's say 
the hardware is aging and it goes offline.)  The problem is that when 
one or more peer IPs exist on a particular squid machine while its conf 
defines one of those IPs as a cache peer (until I get in and change it 
manually,) I start to get WARNING: Forwarding loop detected warnings.  
It seems that if squid had some sort of peer ID that it would be able 
to tell that one of its peers is in fact itself this problem could be 
avoided.  I don't believe this functionality exists currently, but I 
don't doubt the possibility that it could exist in patches somewhere.  
Can anyone help?

Peter Smith


Re: [squid-users] Zero Sized Reply [attn: long post]

2003-12-09 Thread Peter Smith
I would like to say that I am using ~4 Squid-2.5.STABLE4's which have 
about 190-250 users connected to each on average and haven't had any 
problems with "Zero Sized Reply" errors.  I would probably suspect my 
connection if that were the case.  I'll post my squid.conf, however, so 
you can look at it: (Hope it helps, sorry for the censoreds)

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
acl virus1 urlpath_regex /readme\.exe$
http_access deny virus1
acl virus2a dstdomain us.f1.yahoofs.com
acl virus2b urlpath_regex /timeupdate\.exe$
http_access deny virus2a virus2b
acl virus3 urlpath_regex /explorer\.exe$
http_access deny virus3
acl virus4 dstdomain mars.raketti.net
http_access deny virus4
acl all src 0.0.0.0/0.0.0.0
acl CONNECT method CONNECT
http_access allow all
# ICP Peer:
acl peer1 src censored
cache_peer censored sibling 3128 3130 proxy-only weight=3
cache_peer_access censored deny QUERY
cache_peer_access censored allow all
acl peer2 src censored
cache_peer censored sibling 3128 3130 proxy-only weight=2
cache_peer_access censored deny QUERY
cache_peer_access censored allow all
acl peer3 src censored
cache_peer censored sibling 3128 3130 proxy-only weight=1
cache_peer_access censored deny QUERY
cache_peer_access censored allow all
icp_access allow all
# SNMP:
acl snmppublic censored
snmp_access allow snmppublic
snmp_access deny all
# Timeouts:
read_timeout 5 minutes
request_timeout 30 seconds
pconn_timeout 60 seconds
half_closed_clients on
shutdown_lifetime 0 seconds
negative_ttl 30 seconds
icp_query_timeout 1000 milliseconds
cache_effective_user squid
cache_effective_group squid
cache_dir aufs /cache1 4096 64 64
cache_dir aufs /cache2 4096 64 64
cache_dir aufs /cache3 4096 64 64
cache_mem 256 MB
maximum_object_size 256 MB
minimum_object_size 0
request_body_max_size 0
request_header_max_size 64 KB
# Just in case, skew the cache registration to go NOWHERE:
announce_host localhost
visible_hostname censored

http_port 3128

cache_mgr censored

append_domain censored

cache_store_log none
strip_query_terms off
ftp_user censored

acl admin src censored

fqdncache_size 0

extension_methods SEARCH BPROPFIND
acl PURGE method PURGE
acl localhost src 127.0.0.1
http_access allow PURGE localhost
http_access deny PURGE
coredump_dir /
log_icp_queries off
Peter

Trevor wrote:

Hello,

We use squid 2.5-STABLE-3 (port 3128) to connect to the Internet via
traditional browser proxy configuration.  Everything works great except for
specific sites (yahoo mail, aol mail, hotmail, and sometimes mapquest).
These sites return a Zero Sized Reply error message.  Disabling squid
allows traffic through.
Was this problem ever addressed?  HotMail fails with that message every
time.  Sometimes mapquest works, but other times it fails as well.
I have looked at http://www.squid-cache.org/Doc/FAQ/FAQ-11.html#ss11.51 to
see if there is any information that can solve my Zero Sized Reply
problem.  It looks like nobody knows what is going on, as it's full of
speculation.  I have deleted cookies, disabled persistent connections, and
ensured that ECN is set to 0.  Nothing.  The problem is still here.
I think I have read every single post on Zero Sized Reply on google.  Why
is this problem all over the place and why can't anybody figure it out?
It's a very common problem.  Hopefully, someone here can spot what's wrong.
I really would appreciate solving this issue.  I'd be happy do document and
post any additional information to the FAQ so that other people in the
future can fix their configurations.
Below is my squid.conf configuration:
SNIP
Regards,
Trevor.


 




Re: [squid-users] Zero Sized Reply [attn: long post]

2003-12-09 Thread Peter Smith
[EMAIL PROTECTED] root]# *telnet 171.67.89.148 8080*
Trying 171.67.89.148...
Connected to 171.67.89.148.
Escape character is '^]'.
*GET / HTTP/1.0*
Connection closed by foreign host.
[EMAIL PROTECTED] root]#
That server is not working correctly.  It really does give a zero-sized 
reply.  It is not a blank page.  That would be <html></html>.
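
The same thing is visible without squid in the picture; for example (my
own check, not from Trevor's report):

curl -v http://171.67.89.148:8080/
# curl typically exits with error 52, "Empty reply from server"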

Peter

Trevor wrote:

This is an example of a link that does not connect (same message squid gives
me when trying to connect to hotmail):
http://171.67.89.148:8080/

I get:

ERROR
The requested URL could not be retrieved


While trying to retrieve the URL: http://171.67.89.148:8080/

The following error was encountered:

Zero Sized Reply
Squid did not receive any data for this request.
Your cache administrator is [EMAIL PROTECTED]


Generated Tue, 09 Dec 2003 00:11:35 GMT by squid.xx.org
(squid/2.5.STABLE3)
You should get a blank page if squid is disabled (or is working).

Regards,
Trevor.
 




Re: [squid-users] Zero Sized Reply [attn: long post]

2003-12-09 Thread Peter Smith
I would second the need for it.  After this came up I immediately 
started looking in the default squid.conf for an "allow zero sized reply 
message" (or similar) option.  Obviously, I didn't find one.

Peter

Henrik Nordstrom wrote:

On Tue, 9 Dec 2003, Peter Smith wrote:

 

[EMAIL PROTECTED] root]# *telnet 171.67.89.148 8080*
Trying 171.67.89.148...
Connected to 171.67.89.148.
Escape character is '^]'.
*GET / HTTP/1.0*
Connection closed by foreign host.
[EMAIL PROTECTED] root]#
That server is not working correctly.  It really does give a zero-sized 
reply.  It is not a blank page.  That would be <html></html>.
   

As this error is causing a lot of confusion I am considering removing this 
error from Squid, and instead just do the same to the user as was done to 
Squid. I.e. if a server won't give us a reply then don't give one to the 
client.

Is there any opinions on this?

Doing so however collides with one of my Squid applications where 
zero-sized replies are caught, so if this is done it will be a 
configurable option.

Regards
Henrik
 




[squid-users] Squid overloading when RAID drive cache in use?

2003-07-10 Thread Peter Smith
I am wondering if having cache_dir drives on a RAID controller that has 
Read/Write cache turned on might cause problems?  I'm fairly sure that 
Squid manages the latency, etc of its cache_dir drives.  The drives that 
my Squids use are all on RAID controllers as single volumes.  However I 
recently found that if I enable Read/Write cache on the cache_dir drives 
that load on the processor goes off the scale.  Could it be that Squid 
gets such a quick response from the drive that it thinks the drive is 
super fast and thus slams it, causing it to run out of Read/Write cache 
and then gets overloaded as the requests backlog?  I'm hoping that this 
would explain a number of my Squid servers that can be very unstable.  
The cache I'm talking about is in the order of only about 128MB worth, 
and from 3-4 cache_dir drives.

Peter Smith





[squid-users] Squid-2.5.STABLE2 compile

2003-03-24 Thread Peter Smith
I copied and altered a squid-2.5.STABLE1-2.src.rpm which I'd cobbled 
together to be a squid-2.5.STABLE2-1.src.rpm.  However, upon building, 
I now get an 'aufs/aiops.c:36:2: #error _REENTRANT MUST be defined to 
build squid async io support. ' error.  Any ideas as to why I would get 
this with Squid-2.5.STABLE2 and not with Squid-2.5.STABLE1?

Btw, here is my %configure line for the SRPM...  (note that pthreads is 
enabled.)

%configure \
  --exec_prefix=/usr --bindir=/usr/sbin --libexecdir=/usr/lib/squid \
  --localstatedir=/var --sysconfdir=/etc/squid --datadir=/usr/lib/squid \
  --enable-poll --enable-snmp --enable-removal-policies=heap,lru \
  --enable-delay-pools --enable-linux-netfilter \
  --enable-carp --with-pthreads \
  --enable-basic-auth-helpers=LDAP,NCSA,PAM,SMB,MSNT \
  --enable-storeio=aufs,coss,diskd,ufs,null
This doesn't make sense as I've read 
'http://www.squid-cache.org/Versions/v2/2.5/bugs/#squid-2.5.STABLE1-aufs_reentrant' 
and already have pthreads enabled.

Peter Smith



Re: [squid-users] Squid-2.5.STABLE2 compile

2003-03-24 Thread Peter Smith
Yes, this is most likely the case.  Thanks for the tip!  I am working on 
making the CFLAGS more transparent..
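
One way this is commonly handled in spec files (a sketch only, not the
spec I'm actually using) is to avoid overriding CFLAGS on the make command
line in %build, so the flags configure detects survive:

%build
%configure [options as in the original spec]
# run plain make so configure's detected flags (including -D_REENTRANT) stand
make %{?_smp_mflags}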

Peter

Henrik Nordstrom wrote:

Maybe the spec file overrides CFLAGS when running make, inhibiting the
settings done by configure..
Regards
Henrik




Re: [squid-users] Strange problem ! Some sites doesn't work...

2003-03-12 Thread Peter Smith
I'm not sure what type of network your Squid is talking through to get 
to the internet, but I had similar problems with my web browsers 
connecting via a less-than-1500-mtu link of encapsulated 
PPP-over-Ethernet.  I found I was breaking the frame by using 1490-byte 
packets and what I experienced were similar problems..  I can elaborate 
more if you think that could be your problem.
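
For anyone hitting the same symptoms: the usual workaround on a PPPoE-sized
link is to clamp the TCP MSS on the gateway (a general note, not something
specific to this poster's setup), e.g.:

iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
  -j TCPMSS --clamp-mss-to-pmtu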

Peter

O-Zone wrote:

On Wednesday 12 March 2003 14:27, Marc Elsen wrote:
 

O-Zone wrote:
   

Hi all,
i've installed Squid v2.5STABLE to some servers. All works perfectly
 

 

On which platform are you using squid ?
OS/version ?
   

Linux x86 Kernel 2.4.20 all compiled ;-)

Oz


O-Zone - TDSiena System Administrator 
Home @ www.zerozone.it

This email was checked by RAV AntiVirus by TDSiena Srl [siena.tdsiena.it].