Re: [squid-users] CARP setup

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 09:42 +0530, Paras Fadte wrote:
 Hi Henrik,
 
 In a CARP setup, if one uses the same weightage for all the parent caches,
 how would the requests be handled? Will the requests be equally
 forwarded to all the parent caches? If the weightages differ, won't
 all the requests be forwarded only to the particular parent cache
 which has the highest weightage?

CARP is a hash algorithm. For each given URL there is one CARP parent
that is the designated one.

The weights control how large a portion of the URL space is assigned to
each member.
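
For illustration, a toy sketch of the idea in Perl (this is NOT Squid's
actual CARP hash function; the peer names and weights are made up):

#!/usr/bin/perl
# Toy weighted rendezvous-style hash illustrating CARP-like selection.
# Not Squid's real CARP function, just the shape of the algorithm.
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

my %weight = (cache1 => 1.0, cache2 => 1.0, cache3 => 2.0);  # hypothetical peers

sub designated_parent {
    my ($url) = @_;
    my ($best, $best_score) = (undef, -1);
    for my $peer (sort keys %weight) {
        # hash peer+URL together, normalize to [0,1), scale by weight
        my $h = hex(substr(md5_hex("$peer $url"), 0, 8)) / 2**32;
        my $score = $weight{$peer} * $h;
        ($best, $best_score) = ($peer, $score) if $score > $best_score;
    }
    return $best;
}

print designated_parent("http://www.example.com/index.html"), "\n";

With equal weights each parent ends up designated for roughly an equal
share of the URL space; a higher weight enlarges a member's share, it
does not attract all requests.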

 Also, if I do not use the proxy-only option in the squid which
 forwards the requests to parent caches, won't fewer requests
 be forwarded to the parent caches, since objects will already be cached
 by the squid in front of the parent caches?

Correct. And it's completely orthogonal to the use of CARP. As I said
most setups do not want to use proxy-only. proxy-only is only useful in
some very specific setups. These setups MAY be using CARP or some other
peering method, the choice of peering method is unrelated to proxy-only.

Regards
Henrik




Re: [squid-users] Complicate ACL affect performance?

2008-10-16 Thread Henrik Nordstrom
On ons, 2008-10-15 at 17:14 +0300, Henrik K wrote:
  Avoid using regex based acls.
 
 It's fine if you use Perl + Regexp::Assemble to optimize them. And link
 Squid with PCRE. Sometimes you just need to block more specific URLs.

No, it's not. Even optimized regexes are several orders of magnitude more
expensive to evaluate than the structured acls.

The lookup time of dstdomain is logarithmic in the number of entries.

The lookup time of regex acls is linear in the number of entries.
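
A toy Perl benchmark of the difference (the domain list is made up, and a
plain hash stands in for Squid's splay tree; the point is only that the
regex list must be walked entry by entry):

#!/usr/bin/perl
# One keyed lookup vs scanning a list of compiled regexes.
use strict;
use warnings;
use Benchmark qw(timethese);

my @domains = map { "host$_.example.com" } 1 .. 5000;
my %exact   = map { $_ => 1 } @domains;          # stand-in for dstdomain
my @regexes = map { qr/\Q$_\E$/ } @domains;      # stand-in for url_regex

my $probe = "host4999.example.com";              # near the end of the list
timethese(500, {
    dstdomain_like => sub { my $hit = $exact{$probe}; },
    regex_like     => sub {
        for my $re (@regexes) { last if $probe =~ $re; }
    },
});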

Regards
Henrik




Re: [squid-users] Using Squid as a reverse-proxy to SSL origin?

2008-10-16 Thread Henrik Nordstrom
On ons, 2008-10-15 at 16:42 -0400, Todd Lainhart wrote:
 I've looked in the archives, site, and Squid book, but I can't find
 the answer to what I'm looking to do.  I suspect that it's not
 supported.

It is.

 My origin server accepts Basic auth over SSL (non-negotiable).  I'd
 like to stick a reverse proxy/surrogate in front of it for
 caching/acceleration, and have it accept non-SSL connections w/ Basic
 auth, directing those requests as https to the origin.  The origin's
 responses will be cached, to be used in subsequent GETs to the proxy.
 Both machines are in a closed IP environment.  Both use the same
 authentication mechanism.

The basic setup is a plain reverse proxy.
http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-7fa129a6528d9a5c914f8dd5671668173e39e341

As the backend runs https you need to adjust the cache_peer line a bit
to enable ssl (port 443, and the ssl option).

When authentication is used you also need to tell Squid to trust the web
server with auth credentials

http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-c59962b21bb8e2a437beb149bcce3190ee1c03fd
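
Put together, the squid.conf looks roughly like this (hostnames are
placeholders, and option spellings vary a little between Squid versions;
login=PASS is the credentials-forwarding piece from the FAQ section
above):

http_port 80 accel defaultsite=www.example.com
cache_peer origin.example.com parent 443 0 no-query originserver ssl login=PASS name=sslorigin
acl our_site dstdomain www.example.com
http_access allow our_site
cache_peer_access sslorigin allow our_site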

 I see that Squid 3.0 has an ssl-bump option, but I don't think that
 does what I described.  If it does, that's cool - I can change the
 requirement of the proxy to accept Basic/SSL.

sslbump is a different thing. Not needed for what you describe.


But you may need to use https:// to the reverse proxy as well. This is
done by using https_port instead of http_port (and requires a suitable
certificate). 

Regards
Henrik




Re: [squid-users] How to block teamviewer in squid

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 09:01 +0530, Tharanga wrote:
 I need to block team viewer (remote access software) on squid. I analysed
 the connection establishment. It goes through port 80 to the teamviewer
 server (the ip is dynamic).
 
 Team viewer client --port 80--> Team viewer main server (dynamic
 ip's) ---> (port 80) team viewer server
 
 Did anyone successfully block team viewer access with a squid acl?

What does access.log say with log_mime_hdrs on?

Regards
Henrik





Re: [squid-users] Unable to have certain site to be non-cacheable and ignore already cached data

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 13:49 +0500, Anton wrote:
 Hello!
 
 was trying for a few hours to make a certain site
 (http://www.nix.ru) not cacheable - but squid always
 gives me an object which is in the cache!
 
 My steps:
 
 acl DIRECTNIX url_regex ^http://www.nix.ru/$
 no_cache deny DIRECTNIX
 always_direct allow DIRECTNIX

This only matches the exact URL of the root page of the server, not any
other objects on that web server (including inlined objects and
stylesheets).

What you probably want is:

acl DIRECTNIX dstdomain www.nix.ru
no_cache deny DIRECTNIX

which matches the whole site.

always_direct is unrelated to caching. But if you want Squid to bypass
any peers you may have (cache_peer) then it's the right directive.
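
Putting the pieces together (the leading dot makes the acl cover
subdomains too; "cache deny" is the newer spelling of "no_cache deny"):

acl DIRECTNIX dstdomain .nix.ru
cache deny DIRECTNIX
always_direct allow DIRECTNIX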

Regards
Henrik




Re: [squid-users] Unable to have certain site to be non-cacheable and ignore already cached data

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 14:34 +0500, Anton wrote:
 Just realized that I have
 
 reload_into_ims on
 
 This was making me unable to refresh the given page
 or site, since the refresh request was changed - but anyway -
 it should not affect no_cache?

It doesn't.

Regards
Henrik




Re: [squid-users] Disabling error pages

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 13:02 +0100, Robert Morrison wrote:

 I've found lots of references online (in this list's archives, other
 sites and the FAQ) to customising error pages in squid, but haven't
 yet found reference to removing error pages completely.

You can't. Once the request has reached the proxy, the proxy must respond
with something. If it fails retrieving the requested object, the polite
thing is to respond with an error message explaining what happened and
what the user can do to fix the problem.

If you do not want to be polite to the users then you MAY change the
error pages to just a blank page with no visible content, but there
still needs to be some kind of response.

 Is this possible without editing source code? I think I saw reference
 to setting font color in error messages to the same as background, but
 I'd prefer something a little less hackish ;)

Yes. Just replace the error pages with a file containing just the
following line:

<!-- %s -->

Regards
Henrik




Re: [squid-users] Re-distributing the cache between multiple servers

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 14:39 +0100, James Cohen wrote:
 I have two reverse proxy servers using each other as neighbours. The
 proxy servers are load balanced (using a least connections
 algorithm) by a Netscaler upstream of them.

Ok.

 A small amount of URLs account for around 50% or so of the requests.

Ok.

 At the moment there's some imbalance in the hit rates on the two
 caches because I brought up server A before server B and it's holding
 the majority of the objects which make that 50% of request traffic.

This should even out very quickly, unless you are using proxy-only in
the peering relation.

If you are using proxy-only then it will take longer, as it then
takes much longer for the active content to get replicated across the
servers.

Regards
Henrik




Re: [squid-users] recovering an object from the cache -- trimming off the squid header

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 19:06 +0200, lartc wrote:
 hi all,
 
 i've googled, but have been unable to find a simple sed command or
 other way to recover an object sitting in the web cache.
 
 i know the filename(s) in the cache, however, there's a squid header
 on top of a binary file(s), and I don't know how to recover just the
 binary portion.

See the purge tool. It knows how to do this.

Found in the related software section.

Regards
Henrik



Re: [squid-users] squidnt.com, warning

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 17:01 +0100, Mr Lyphifco wrote:
 It seems that the site http://squidnt.com/ is trying to masquerade as an
 official website for Mr Serassio's Windows port of Squid. It doesn't
 explicitly state this, but the wording of the site contents strongly
 implies such a thing.
 
 Also it was entered into a new Wikipedia article on SquidNT as the
 homepage:
 
   http://en.wikipedia.org/w/index.php?title=SquidNT&action=history
 
 I suspect blog-spam of some sort.

I would agree. The site is completely anonymous about who is behind the
content, and I have never heard of the name registered as owner
of the domain (additionally, the domain owner is registered with a UK
address but a US phone number, which is a bit odd imho).

I do suspect the wikipedia user who created the wikipedia article is
the same person. The wikipedia article was created before the first blog
post (Wikipedia article created 19 July, first blog post is from 26
July).

I have added a warning comment on their download page.

Regards
Henrik




Re: [squid-users] wbinfo_group.pl ?? return a error cannot run ..

2008-10-16 Thread Henrik Nordstrom

On tor, 2008-10-16 at 22:26 +0200, Phibee Network Operation Center
wrote:
 Hi
 
 We have a problem with our new squid server:
 when we want to add wbinfo_group.pl, it can't start it:
 

 2008/10/14 06:07:39| WARNING: Cannot run 
 '/usr/lib/squid/wbinfo_group.pl' process.

Is wbinfo_group.pl executable from a shell running as your
cache_effective_user? (not the same as testing from the root account..)

Do you have SELINUX enabled? Check your system logs in case it's SELINUX
denying Squid from running wbinfo_group.pl.

Regards
Henrik





Re: [squid-users] newbie: configuring squid to always check w/origin server

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 16:12 -0700, dukehoops wrote:
 1. With what headers should the origin server respond in 3a) and 3b)? In
 latter case, it seems like something like Cache-Control: must-revalidate,
 not sure whether to use s-maxage=0 and/or maxage=0

You probably do not need or want must-revalidate; it's quite a harsh
directive. max-age is sufficient, I think.

You only need must-revalidate (in addition to max-age) if it's
absolutely forbidden to use the last known version when/if revalidation
fails to contact the web server for some reason.

You only need s-maxage if you want to assign different cache criteria
to shared caches such as Squid than to browsers, for example enabling
browsers to cache the image longer than Squid.

 2. What params should be used in squid config?

Preferably nothing specific for this, since you have control over the web
server.

Regards
Henrik




Re: [squid-users] squidnt.com, warning

2008-10-16 Thread Henrik Nordstrom
On tor, 2008-10-16 at 21:16 +0200, Guido Serassio wrote:

 Please, can you update the Wikipedia page again?

Done.

Regards
Henrik




Re: [squid-users] Header Stripping of Header type other

2008-10-17 Thread Henrik Nordstrom
On fre, 2008-10-17 at 06:09 +0200, WRIGHT Alan [UK] wrote:
 I could use an ACL with request_header_access other deny, but this will
 strip some other headers too, which is not acceptable.

You should be able to use any header name in request_header_access. If
not, file a bug report.

Regards
Henrik




Re: [squid-users] Update Accelerator, Squid and Windows Update Caching

2008-10-17 Thread Henrik Nordstrom
On fre, 2008-10-17 at 06:06 +0100, Richard Wall wrote:

 but I don't see anything evil in the server response headers
 today. I guess the client may be sending no-cache headers...I'll
 double check that later.
 
 Is there some other case that I'm missing?

I think the missing partial object cache is the main culprit for windows
update caching today.

Another minor culprit is that sometimes SSL is used. But I think this is
only for some metadata requests.

Regards
Henrik




Re: [squid-users] squidnt.com, warning

2008-10-17 Thread Henrik Nordstrom
On fre, 2008-10-17 at 14:40 +1300, Amos Jeffries wrote:

  I have added a warning comment on their download page.

 Which appears to have been moderated out of existence.
 At least the three comments now present are all by 'admin' advertising
 their downloads.

Suspected this would happen. Oh well. Now we at least know for sure they
are hostile. For all we know that Squid download may well be a trojan.

Regards
Henrik




Re: [squid-users] newbie: configuring squid to always check w/origin server

2008-10-17 Thread Henrik Nordstrom
On fre, 2008-10-17 at 10:01 -0700, dukehoops wrote:

 Thanks for a prompt response. Unfortunately, it seems like we're still missing
 something:
  
 The origin server is including
  
 Cache-Control: max-age=0
 ETag: etag-value
  
 in its response.
  
 The problems are 
  
 1) Squid is not sending 
  
 If-None-Match: etag-value

Which Squid version?
 
 2) When the origin server return 302, squid just passes 302 back to the
 browser rather than serving up its cached copy of the image.

302 is a redirect... did you mean 304?

Regards
Henrik




Re: [squid-users] Using Squid as a reverse-proxy to SSL origin?

2008-10-17 Thread Henrik Nordstrom
On tor, 2008-10-16 at 10:56 -0400, Todd Lainhart wrote:

 Could I do the same thing with SSL to the reverse proxy?  That is, the
 reverse proxy is the endpoint for the client, gets the creds, becomes
 the endpoint for the server, decrypts and caches the origin response,
 and then serves cached content encrypted back to the client?

Yes.

 I would
 guess this falls into man-in-the-middle style ugliness, is
 out-of-bounds for SSL and so wouldn't be supported.  But then again I
 was wrong about my original use-case not being supported :-) .

It's supported, and not a man-in-the-middle attack, as the reverse proxy
is the administrative endpoint and, as far as the user is concerned, is
the authoritative server. The fact that this web server happens to use
HTTP (or HTTP over SSL) to fetch its content is an implementation
detail.

You'll need a valid certificate on the reverse proxy. The certificate on
the actual web server may be self-signed or signed by an internal CA, not
visible to the end-user, only to the reverse proxy.


There is one notable limitation, however: the origin server cannot
request SSL client certificates from the end-user, because the SSL is
terminated at the reverse proxy and there is no SSL between web server
and end-user. The proxy can request client certificates, and may also
relay details about the user-provided certificate (not sure such
relaying is implemented by Squid yet). The proxy can also present its
own client certificate to the web server, proving that it's really a
trusted reverse proxy.

Regards
Henrik




Re: [squid-users] Complicate ACL affect performance?

2008-10-17 Thread Henrik Nordstrom
On tor, 2008-10-16 at 12:02 +0300, Henrik K wrote:

 Optimizing 1000 x www.foo.bar/randomstuff into a _single_
 www.foobar.com/(r(egex|and(om)?)|fuba[rz]) regex is nowhere near linear.
 Even if it's all random servers, there are only ~30 characters from which
 branches are created from.

Right. 

Would be interesting to see how 50K dstdomain compares to 50k host
patterns merged into a single dstdomain_regex pattern in terms of CPU
usage. Probably a little tweaking of Squid is needed to support such
large patterns, but that's trivial. (squid.conf parser is limited to
4096 characters per line, including folding)

Regards
Henrik




Re: [squid-users] Complicate ACL affect performance?

2008-10-18 Thread Henrik Nordstrom
On lör, 2008-10-18 at 12:58 +0300, Henrik K wrote:

 By doing it correctly, using ^hostname$ instead of plain hostname in regex
 results in 1.2 seconds, that's 8+ hosts/sec..

The interesting pattern match to compare with is

s/^www\.// on the hostnames before making patterns

Then for each hostname
(\.|^)hostname$

or expanded into two patterns, depending on how well Regexp::Assemble
handles this case.

   \.hostname$
   ^hostname$

Blacklists have a quite large proportion of domain matches, matching a
complete domain.

Quite likely regex will handle this much better if you reverse the
hostnames, resulting in patterns of the form

 ^emantsoh(\.|$)
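
A sketch of that reversal using Regexp::Assemble (the hostname list is
made up):

#!/usr/bin/perl
# Build one regex over reversed hostnames so the alternation branches
# on the distinctive end of each name instead of a common "www." head.
use strict;
use warnings;
use Regexp::Assemble;

my @hosts = qw(example.com www.example.org ads.tracker.net);  # hypothetical
my $ra = Regexp::Assemble->new;
for my $h (@hosts) {
    $h =~ s/^www\.//;                      # strip the leading www.
    my $rev = reverse $h;                  # example.com -> moc.elpmaxe
    $ra->add('^' . quotemeta($rev) . '(\.|$)');
}
my $re = $ra->re;

my $probe = reverse 'cdn.example.com';     # reverse the probe the same way
print "match\n" if $probe =~ $re;          # prints "match"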

Regards
Henrik




Re: [squid-users] Complicate ACL affect performance?

2008-10-18 Thread Henrik Nordstrom
On lör, 2008-10-18 at 14:26 +0300, Henrik K wrote:

 Fair test would be reversing the hostname, which is very cheap operation. ;)
 
 (^|\.)example\.com$  .. runtime 2.2 secs
 ^moc\.elpmaxe(\.|$)  .. runtime 1.3 secs

Heh, and I should learn to read the whole thread before responding ;-)

Regards
Henrik




Re: [squid-users] Time on squid

2008-10-18 Thread Henrik Nordstrom
On lör, 2008-10-18 at 04:50 -0700, Jeff Pang wrote:
 
 
 --- On Fri, 10/17/08, netmail [EMAIL PROTECTED] wrote:
 
  From: netmail [EMAIL PROTECTED]
  Subject: [squid-users] Time on squid
  To: squid-users@squid-cache.org
  Date: Friday, October 17, 2008, 10:37 AM
  Hi
  When squid generates the message when blocking a website, the
  time shown is
  different from the linux time
 
 I also met this same problem in my squid setup.
 How to fix it? Thanks.

See FAQ.

Regards
Henrik




Re: [squid-users] LFT_REQUEST_SIZE_TOTAL

2008-10-18 Thread Henrik Nordstrom
On lör, 2008-10-18 at 22:52 +0900, Mikio Kishi wrote:
 Hi,
 
 No, I'm using squid-3.0.STABLE9.
 I recorded the http request size in the access log (using %>st)
 But the value was always 0.
 
 In access_log.cc
 
785  case LFT_REQUEST_SIZE_TOTAL:
786      outint = al->cache.requestSize;
787  dooff = 1;
788  break;
 
 I think that
 
  outint = al->cache.requestSize;
 
 must be
 
  outoff = al->cache.requestSize;

Indeed. Congratulations on your first contribution to the Squid source
code! It will show up at
http://www.squid-cache.org/Versions/v3/HEAD/changesets/ shortly.


Regards
Henrik




Re: [squid-users] Why are cache_peer_access acls called 4 times in a row?

2008-10-19 Thread Henrik Nordstrom
On sön, 2008-10-19 at 17:25 +1300, Amos Jeffries wrote:
  The following trace shows up in the log 4 times one after the other, and in
  all of them it is a success (returning 1):

  What is the reason for that? Other acls in the file are invoked only once
  (seen in the trace), but acls on cache_peer_access are always called 3 or 4
  times.
 
 4 connection attempts were tried?

No. Peer acls are evaluated during peer selection, not during
re-forwarding.

More likely it's because the peer was selected by 4 different
algorithms. It's an accelerator where going direct is not allowed, so
Squid tries really hard to find all possible paths to forward the
request.


Regards
Henrik



[squid-users] 2.7.STABLE5 & 2.6.STABLE22 available

2008-10-19 Thread Henrik Nordstrom
2.7.STABLE5 & 2.6.STABLE22 bugfix releases have been released and are
available for download.

2.7.STABLE5 is now the recommended version for users of the Squid-2
series.

Please note that 2.6 is a legacy release and is no longer actively
maintained by the Squid project. We encourage all users of Squid-2.6 or
earlier to upgrade to 3.0 or 2.7, the currently actively maintained
releases. You should not expect to see any further 2.6 maintenance
releases from the Squid project.

Maintenance of Squid-2.7 is still active. The above only applies to 2.6.

Regards
Henrik




Re: [squid-users] Why are cache_peer_access acls called 4 times in a row?

2008-10-19 Thread Henrik Nordstrom
On sön, 2008-10-19 at 17:12 -0700, Elli Albek wrote:

 It makes sense since setting always direct to this acl evaluates the acl 
 once (and also once on the always direct rule, but this is expected).
 
 The four acl evaluations return success, so is it possible to configure squid 
 to stop at the first success or just evaluate one algorithm?

Not without changing the code. See the peerSelectFoo() function.

Regards
Henrik




Re: [squid-users] Squid conf for live video stream

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 19:13 +1300, Amos Jeffries wrote:
 You need to fix the VOD implementation to use cacheable URI. Or scream 
 at the vendors who wrote it so they fix it.

And most won't fix it, as they regard this cache unfriendliness as one of
the premium features of their system.

Regards
Henrik




Re: [squid-users] Strange entries in cache.log (3.0.STABLE10)

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 11:01 +0200, Malte Schröder wrote:
 Hello,
 I am seeing entries like below quite frequently. It looks to me as if the
 Content-Language and Content-Location headers are not parsed correctly,
 since I cannot see this stuff in the traffic going to the squid. In this
 config Squid has a WebWasher installation as parent and has an imagefilter
 as ICAP-respmod.

Have you inspected the ICAP responses from imageFilter, and paired this
with the error? I suspect the error may be from there.

Regards
Henrik




Re: [squid-users] squid and accept-encoding gzip,deflate

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 15:23 +0530, Aviral Pandey wrote:
 When my client asks for Accept-Encoding: gzip, deflate, squid is sending 
 it raw content. Shouldn't it gzip and then send?

No, Content-Encoding (just as Content-Language) is a task for webservers,
not semantically transparent proxies such as Squid.

 Is there a way in which 
 this can be achieved?

There is an addon for squid-3, but it apparently needs a bit of work to
apply to current Squid-3 sources.

Regards
Henrik




Re: [squid-users] Squid conf for live video stream

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 06:20 -0400, [EMAIL PROTECTED] wrote:

 All the videos are cacheable, because the videos are maintained by us.
 
 But the origin server is not near us. So I try to cache and serve them to
 customers quickly.

So fix the origin. Alternatively you can play games with a url rewriter
to canonicalize the requested URLs.

The easiest way to achieve what you want is to NOT use a video streaming
server for distributing the videos. Instead store the videos as plain
files on an HTTP server.

Sorry for being a bit dense in the response. If you want more precise
answers then provide more information on what requests & responses look
like, and why.

Regards
Henrik




Re: [squid-users] squid and accept-encoding gzip,deflate

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 16:08 +0530, Aviral Pandey wrote:
 Thanks Henrik...But I have seen vice-versa to be working i.e., server 
 sending a gzipped response and squid serving deflated one when client 
 asks for deflated content

This is not available in any Squid version.

But Squid does support servers doing this correctly, by caching both
gzip:ed and plain variants of the resource.

All servers I know of that support gzip also support serving plain
variants when the client does not support gzip.
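
For reference, the gzip variant of such a correct response carries
headers roughly like this (abridged; the Vary header is what lets a cache
store both variants side by side):

HTTP/1.1 200 OK
Date: Mon, 20 Oct 2008 11:00:00 GMT
Content-Type: text/html
Content-Encoding: gzip
Vary: Accept-Encoding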

Regards
Henrik




[squid-users] Re: acl deny in transparent cache

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 05:42 -0500, Jian Wang wrote:
 I'm not sure how to do this. Is it externally changing the
 configuration of the router? For example,
 in the Squid external_acl_helper code, telnet to the router and add an
 acl line to the configuration of router?

Yes that's one way.

 Isn't this way insecure? Furthermore, if I have thousands of client
 IPs, it sounds to me like I will have
 to add thousands of acl configuration lines to the router.

Yes.

 Or am I totally misunderstanding your suggestion?

No.

But it may be possible to do the same in the local firewall on the proxy
server instead of the router. Depends on your setup.

Regards
Henrik




Re: [squid-users] squid and accept-encoding gzip,deflate

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 11:21 +0100, Simon Waters wrote:

 Apache will do this as a reverse proxy, but Apache as a reverse proxy is 
 interesting - most places I've seen it done it is sitting on the webserver on 
 port 80 forwarding to less capable webservers on another port. Although 
 Apache can do quite a lot as a reverse proxy the first person I saw who tried 
 to set it up created an open proxy - so be careful.

Also, mod_deflate still works very sub-optimally when it comes to HTTP
caching. Cache validation is currently a bit broken, after it was fixed
to at least minimally comply with the HTTP specifications.

There is an open task in the Apache project for supporting mod_deflate
and similar filters that conditionally modify the response entity and
thereby create new variants of the requested resource. HTTP isn't
really designed for this and getting it right requires some care (ETag
needs to be remapped in a way that If-* conditional requests still do
the right thing).

Regards
Henrik




Re: [squid-users] Secondary Cache

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 19:57 +0300, Monah Baki wrote:

 Can I have my squid cache be a secondary cache to a bluecoat server?

Yes.

Regards
Henrik




Re: [squid-users] Objects Release from Cache Earlier Than Expected

2008-10-20 Thread Henrik Nordstrom
On mån, 2008-10-20 at 16:02 -0700, BUI18 wrote:
 Hi -
 
 I have been trying to track down an issue with Squid 2.6 STABLE18 and
 why users were getting TCP_REFRESH_MISS instead of TCP_REFRESH_HIT on
 files that were recently cached.  We first noticed that users were
 getting misses when we expected them to receive hits.

TCP_REFRESH_MISS is a cache validation which indicates that the object has
been updated on the origin server.

 I have set the min and max age to be 5 and 7 days respectively.  When
 I look in the store.log file, I do see objects which were known to
 have been cached today (based on the time/date stamp in the file name), yet
 they have a status code of RELEASE.

And you are sure it wasn't simply replaced with a newer copy of the same
URL?

Regards
Henrik




Re: [squid-users] Objects Release from Cache Earlier Than Expected

2008-10-21 Thread Henrik Nordstrom
On mån, 2008-10-20 at 17:45 -0700, BUI18 wrote:
 I'm not sure what you mean by a newer copy of the same URL? Can you elaborate
 on that a bit?

The cache (i.e. Squid) performed a conditional request to the origin web
server, and the web server returned a new 200 OK object with full
content instead of a small 304 Not Modified.

Regards
Henrik




Re: [squid-users] can I use Squid as a proxy of streaming protocol?

2008-10-21 Thread Henrik Nordstrom
On tis, 2008-10-21 at 11:30 +0900, [EMAIL PROTECTED] wrote:
 Hello, I have a question.
 
 Can I use Squid as a proxy of Streaming protocol such as WMV,RealMedia and 
 QuickTime?

Yes, by configuring the client to use an HTTP proxy. Works at least
for Real & QuickTime clients.

 Can I use Squid as a proxy of Instant Messsage such as AOL,Yahoo and MSN?

Only in tunneled mode where these run over http(s).

Regards
Henrik




Re: [squid-users] Re: How to increase the downloading capacity in my proxy server ?

2008-10-21 Thread Henrik Nordstrom
On tis, 2008-10-21 at 13:14 -0400, [EMAIL PROTECTED] wrote:
 
 Dear All Squid USERS,
 
 Of late it has been realised that many users of our present facility proxy
 server are complaining that the net accessibility (downloading speed) has
 become slow.
 
 A few weeks back no one felt so. What could be the possible reason for that ?

There are two common causes:


a) Running out of memory causing a lot of swap activity.

b) Disk performance insufficient. Gets noticeable when the cache has
been filled and Squid starts to recycle space. With the ufs family of
cache_dir stores (ufs, aufs, diskd) recycling space is often more costly
than storing...

 Please give me your ideas technically ?
 
 For your information, the downloading speed of the internet link is
 2MB ps and the downloading capacity for each faculty is 100MM per 24
 hours.


A 2Mbit link isn't much. So my guess is 'a' above.

 Is it due to the low bandwidth that we are getting form the ISP (Internet
 Service Provider) ?

If the link is oversaturated then performance will obviously be slow..
You should see this in the link statistics (ask your ISP if they have
any, many do).

Regards
Henrik




Re: [squid-users] Override the Accept-Encoding value

2008-10-21 Thread Henrik Nordstrom
On tis, 2008-10-21 at 14:55 +0200, Christian Tzolov wrote:
 Hi all,
 
 I would like to reduce the diversity of Accept-Encoding request header
 values by replacing the header with a hardcoded value like:
 gzip,deflated.
 
 In Squid 2.6 there are two directives that seem suitable for the job:
 header_access and header_replace. Will the following configuration do the
 job?
 
 header_access Accept-Encoding deny all
 header_replace Accept-Encoding gzip,deflated
 
 If yes, are they replaced before or after squid caches the entry?

header_access/header_replace are only applied to forwarded requests,
modifying the headers as sent by Squid. They do not modify Squid's own
view of received headers.

Regards
Henrik




Re: [squid-users] CARP setup

2008-10-21 Thread Henrik Nordstrom
Scrolling back to my first response in this thread:

http://marc.info/?l=squid-users&m=122366977412432&w=2

On tis, 2008-10-21 at 21:18 +0530, Paras Fadte wrote:
 Hi Henrik,
 
 Thanks for your reply. What would be your suggestion for a CARP setup
 which would provide an efficient caching system?
 
 Thanks in advance.
 
 -Paras
 


RE: [squid-users] Override the Accept-Encoding value

2008-10-21 Thread Henrik Nordstrom
On tis, 2008-10-21 at 20:09 +0200, Christian Tzolov wrote:
 Hi Henrik,
 
 Thank you for the clarification. 
 
 Do you know any other approach (or tool) that can help me to replace the
 accept-encoding header before it is processed by Squid?

Two Squids.

  or

An ICAP server (together with squid-3).

  or

A modified Squid

  or

An eCAP module (together with Squid-3.1)

Regards
Henrik




Re: [squid-users] Need help with Transparent Proxy configuration

2008-10-21 Thread Henrik Nordstrom
On tis, 2008-10-21 at 11:07 -0700, swb311 wrote:

 For our workstations, I am setting the gateway to 192.168.0.13, and I would
 like to figure out how to get iptables to forward everything besides the
 port 80 traffic directly to the Router on .1.

Just configure it as default gateway (probably already done) and enable
IP-forwarding.

Another option is to use WCCP. This way the workstations all have the
Cisco as default gateway, and the cache registers with the Cisco to have
port 80 forwarded to it.

Regards
Henrik




Re: [squid-users] Announcement: txforward (for php behind squid)

2008-10-22 Thread Henrik Nordstrom
Interesting, but it is missing a crucial piece. There is nothing which
establishes trust. If the same server can be reached directly without
using the reverse proxy then security is bypassed, and likewise if the
module is loaded on a server not using a reverse proxy.

This needs a configuration directive indicating which addresses (hosts
and/or networks) are trusted with X-Forwarded-For.

When you have this you can also unwind the chain of IP addresses
properly when the request passes via a chain of reverse proxies in
peering relation.
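
A sketch of that unwinding in Perl (the trusted proxy addresses are made
up):

#!/usr/bin/perl
# Walk the X-Forwarded-For chain from the right, stepping past hops
# that are trusted proxies; the first untrusted hop is the client.
use strict;
use warnings;

my %trusted = map { $_ => 1 } qw(192.0.2.10 192.0.2.11);  # hypothetical proxies

sub client_ip {
    my ($remote_addr, $xff) = @_;
    my @chain  = split /\s*,\s*/, (defined $xff ? $xff : '');
    my $client = $remote_addr;
    while ($trusted{$client} && @chain) {
        $client = pop @chain;         # one hop further from the server
    }
    return $client;
}

# Arrived from proxy .10, which got it from proxy .11, which got it
# from the real client 203.0.113.7:
print client_ip('192.0.2.10', '203.0.113.7, 192.0.2.11'), "\n";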


On ons, 2008-10-22 at 01:02 +0200, Francois Cartegnie wrote:
 Hello,
 
 Txforward is a php module providing a simple hack for deploying PHP
 applications behind squid in reverse proxy (accelerator) mode. You no
 longer need X-Forwarded-For header aware applications.
 http://fcartegnie.free.fr/patchs/txforward.html
 
 PS: but you'll still need to fix your webserver logs :)
 
 Greetings,
 
 Francois




Re: [squid-users] configuration question

2008-10-22 Thread Henrik Nordstrom
On tis, 2008-10-21 at 19:57 -0500, Lou Lohman wrote:

 I have been poking around the Internet and mailing lists and anything
 else I can think of, for DAYS, to try to answer what I thought would
 be a simple question, How can I configure Squid so that my authorized
 Windows users (Members of the proper security group in AD who are
 logged into the network) don't have to answer a challenge to get out
 to the Internet?

This consists of three pieces.

1. Configuring the clients to use the proxy, using a server name which
MSIE security classifies as Local LAN/Intranet. Usually a short
server name without a domain works, but Windows people can answer this
better than me.

2. Configuring the proxy with ntlm (and perhaps negotiate)
authentication scheme support. Using Samba ntlm_auth as helper is
recommended.

3. Limiting access to the given group. This can be done in two ways: either
restrict ntlm_auth to only accept members of the given group, or look up
the group membership using wbinfo_group.
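
Sketched in squid.conf, pieces 2 and 3 could look something like this
(paths and the group name are placeholders; check the helper options
against your Samba and Squid versions):

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10

external_acl_type ad_group %LOGIN /usr/lib/squid/wbinfo_group.pl
acl AuthUsers proxy_auth REQUIRED
acl InternetGroup external ad_group InternetUsers
http_access allow AuthUsers InternetGroup
http_access deny all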

Regards
Henrik




Re: [squid-users] Objects Release from Cache Earlier Than Expected

2008-10-22 Thread Henrik Nordstrom
On ons, 2008-10-22 at 14:35 -0700, BUI18 wrote:

 Object is initially cached.  Max age in squid.conf is set to 1 min.
 Before 1 min passes, I request the object and Squid returns TCP_HIT.
 After 1 min, I try to request for object again.  Squid returns
 TCP_REFRESH_HIT, which is what I expect.  I leave the entire system
 untouched.  A day or a day and a half later, I ask for the object
 again and Squid returns TCP_REFRESH_MISS/200.


TCP_HIT is a local hit on the Squid cache. Origin server was not asked.

TCP_REFRESH_HIT is a cache hit after the origin server was asked if the
object is still fresh.

TCP_REFRESH_MISS is when the origin server says the object is no longer
fresh and returns a new copy on the conditional query sent by the cache.
(same query as in TCP_REFRESH_HIT, different response from the web
server).

 What could possibly cause Squid to refetch the entire object again?

A better question is why your server responds with the entire object to
an If-Modified-Since type query if it hasn't been modified. It should
have responded with a 304 response as it did in the TCP_REFRESH_HIT
case.

Regards
Henrik




Re: [squid-users] squid3 keeps many idle connections

2008-10-22 Thread Henrik Nordstrom
On ons, 2008-10-22 at 11:31 +0200, Malte Schröder wrote:
 Hello,
 Squid3 seems to keep a LOT (over a thousand) idle connections to its
 parent proxy.

Not normal.

Squid version?

And how did you measure these? You are not counting TIME_WAIT sockets
are you?

Regards
Henrik





Re: [squid-users] Announcement: txforward (for php behind squid)

2008-10-22 Thread Henrik Nordstrom
On ons, 2008-10-22 at 15:02 +0200, Francois Cartegnie wrote:
 On Wednesday 22 October 2008, you wrote:
  Interesting, but is missing a crucial piece. There is nothign which
  establishes trust. If the same server can be reached directly without
  using the reverse proxy then security is bypassed, or if the module is
  loaded on a server not using a reverse proxy.

 That's what the README and the warning in the phpinfo output are for...

And everyone reads documentation... and remembers to uninstall modules
no longer used..

Adding the small trusted server acl check isn't much code, and would
make this module generic and suitable as a version 1.0.

Note: The support for chains of proxies is just an idea for future
improvement, not a criticism.

Regards
Henrik




Re: [squid-users] Diagnosing RPCviaHTTP setup?

2008-10-22 Thread Henrik Nordstrom
On ons, 2008-10-22 at 16:49 +0200, Jakob Curdes wrote:
 .. I am trying to set up an RPCviaHTTP reverse proxy scenario as described in
 
 http://wiki.squid-cache.org/ConfigExamples/SquidAndRPCOverHttp
 
 Squid starts with my configuration (like example plus some standard 
 ACLs) but connections with a browser to the SSL port on the outside take 
 eternally and eventually time out.
 If I connect with a telnet 443 I get some sort of connection, so I 
 suppose it's not a firewall issue. In the cache or access log I see 
 nothing even after turning up debugging.
 What else can I do to troubleshoot this ? What should I see when 
 telnetting into 443?

Is there anything in cache.log and/or access.log?

Also try connecting with openssl

openssl s_client -connect ip:443

This should show you the SSL details.

Regards
Henrik




Re: [squid-users] Objects Release from Cache Earlier Than Expected

2008-10-22 Thread Henrik Nordstrom
I am talking about If-Modified-Since between Squid and the web server,
not browser-squid.


On ons, 2008-10-22 at 17:57 -0700, BUI18 wrote:
 Henrik -  Thanks for taking time out to respond to my questions.  I'm 
 completely stumped on this one.
 
 In our production environment, we set min and max to 5 and 7 days, 
 respectively.
 
 As I understand it, if the request is made for the object in say 3 days or
 4 days (less than 5 days), I would always expect a TCP_HIT.
 
 But again, after 1 to 2 days, I see TCP_REFRESH_MISS and I get the whole 
 object.
 
 I thought that setting the min to 5 days would guarantee freshness up to 5
 days.
 
 Do you know of a problem that maybe causes Squid to ignore the rules on 
 determining whether an object is fresh?
 
 We used fiddler and actually removed the If-Modified-Since part of the 
 request and still we get TCP_REFRESH_MISS.
 
 Do you have any other ideas on areas we might want to check to see what could 
 possibly be causing this behavior?
 
 Thanks
 
 
 
 
 




Re: [squid-users] about refresh_pattern

2008-10-23 Thread Henrik Nordstrom
On tor, 2008-10-23 at 16:30 +0800, Sandy lone wrote:
 Hello,
 
 In what cases will squid use refresh_pattern?
 If the response objects have expire or age headers, squid will follow
 their values.

Yes. Unless overridden in refresh_pattern override options.

 If the response objects have neither expire nor age headers, squid
 will not cache them at all.

If there is no Expires then caches are allowed to guess pretty much as
they like.

Responses which should not be cached MUST have suitable Cache-Control
headers, or Expires: now (same as the Date header).

Age is a different header. You probably meant Last-Modified, from which
the document age can be calculated (document_age = Date - Last-Modified).

 So when will squid use the refresh_pattern settings?

In the default settings, with a min-age of 0, it is used when there is no
Expires but there is Last-Modified.

If a min-age > 0 is used then it is used even if there is no Last-Modified.
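
For reference, the stock catch-all rule that implements this heuristic
(age at most 20% of the document age, capped at 3 days):

refresh_pattern . 0 20% 4320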

Regards
Henrik




RE: [squid-users] problem with flash player behind NATed firewall

2008-10-23 Thread Henrik Nordstrom
On tor, 2008-10-23 at 11:19 +0100, Walton, Jason (Accenture) wrote:
 When we monitor the firewall, we can see port 80 talking via squid and
 port 1935 talking to our test server when it has a public IP. As soon as
 we take away the public IP, all port 1935 traffic stops but port 80
 still routes via squid.
 
 I'm going to look at a socks server (dante seems to be the SUSE
 recommendation) but in the meantime if anyone has any input into what it
 might be?

What?

The traffic as such is already explained (RTMP streaming traffic).

Many clients and content servers support tunneling of RTMP over HTTP,
but apparently not this site.

Regards
Henrik




Re: [squid-users] squid and vbulletin

2008-10-23 Thread Henrik Nordstrom
On tor, 2008-10-23 at 08:02 -0700, TheGash35 wrote:

 vBulletin already has code built in that looks for HTTP_X_FORWARDED_FOR ,
 but it looks like my squid is not configured to pass this because all
 activity is coming from the proxy server IP, not the user's IP address.

Squid sends X-Forwarded-For unless you actively configure it not to.

But most likely you need to tell vBulletin to trust the header from
your proxy. It's not something it can do automatically, as that may leave
it open to spoofing attacks.

Check the vBulletin manual on how to make use of X-Forwarded-For (or
HTTP_X_FORWARDED_FOR in CGI language...)

Regards
Henrik




Re: [squid-users] Trouble getting kerberos auth working with squid 3.0

2008-10-23 Thread Henrik Nordstrom
On tor, 2008-10-23 at 14:25 -0400, Steven Cardinal wrote:
 I see no sign on my DCs of any failed authentication. A tcpdump trace
 on my workstation shows no attempts from my Windows PC to perform any
 kerberos authentication. If I try running the command line specified
 in the squid.conf, I get:

Then your browsers do not trust the proxy with kerberos authentication.
Verify that you have configured the proxy by name and not IP in the
browser proxy settings. To be exact the proxy name needs to match both a
name that the browser trusts with Kerberos authentication AND a server
kerberos ticket (or whatever those are called, kept in the keytab,
kerberos is not a strong field of mine..)

 I'm guessing, however, that squid_kerb_auth can't be run just like
 that.

Correct. You need to speak base64-encoded GSSAPI, wrapped in the Microsoft
Negotiate SSP protocol format, wrapped in the Squid NTLM/Negotiate
helper protocol, to it.

 Any ideas where I should look? I set my keytab file to be
 world-readable as a test and that didn't help.

It seems you don't even get that far. The very first steps are not
dependent on the helper, only the browser. Only when the browser agrees to
send the initial negotiation packet is the helper called. Until then,
all that happens is that Squid says that authentication is required to
continue and that the Negotiate SSP authentication protocol is supported.

Regards
Henrik




Re: [squid-users] Problems with downloads

2008-10-23 Thread Henrik Nordstrom
On tor, 2008-10-23 at 14:34 -0500, Osmany Goderich wrote:
 Hi everyone,
 
 I have Squid3.0STABLE9 installed on a CentOS5.2_x86_64 system. I have
 problems with downloads, especially large files. Usually downloads are slow
 in my network because of the amount of users I have but I dealt with it
 using download accelerators like “FlashGET”. Now the downloads get
 interrupted and they never resume and I don’t know why.

Can you try downgrading to 2.7 to see if that makes any difference. If
it does please file a bug report.

Also check your cache.log for any errors.

  I can’t seem to find
 a pattern as to when or why the downloads get interrupted. I don’t know if I
 explained myself well enough. I’m suspecting that there is something wrong
 with all the configuration I did to tune the cache effectiveness.

There isn't much you can do wrong at this level.

Regards
Henrik




RE: [squid-users] Problems with downloads

2008-10-24 Thread Henrik Nordstrom
On tor, 2008-10-23 at 15:54 -0500, Osmany Goderich wrote:

 I had squid2.6STABLE6-5 before and I upgraded it thinking it was a bug in 
 that release. Should I still downgrade to 2.7?

Yes.

Regards
Henrik





RE: [squid-users] Problems with downloads

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 08:31 -0500, Osmany Goderich wrote:

 It was the range_offset_limit -1 KB line that was not letting squid
 resume downloads. I set it back to 0 KB, as it is by default, and
 voila!!! Everything is back to normal!!

Good.

range_offset_limit -1 says Squid should NEVER resume download, and
instead always download the complete file.

To use this you must also disable quick_abort, telling Squid to always
continue downloading the requested object when the client has
disconnected.

quick_abort_min -1 KB


But be warned that both these settings can cause Squid to waste
excessive amounts of bandwidth on data which will perhaps never be
requested by any client..

Also, depending on the Squid version, range_offset_limit -1 may result in
significant delays or even timeouts if the client requests a range far
into the requested file. Not sure what the status in Squid-3 is wrt
this.
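
So the combination under discussion amounts to (illustrative, per the
settings named above):

range_offset_limit -1 KB     # always fetch the whole object
quick_abort_min -1 KB        # keep fetching even if the client disconnects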

Regards
Henrik




Re: [squid-users] How do I configure Keepalive-Timeout?

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 16:52 +0900, [EMAIL PROTECTED] wrote:
 Hello, I have a question.
 
 I'd like to configure Keepalive-Timeout.
 But I can't find Keepalive section in the squid.conf file.
 
 Does the persistent_request_timeout TAG mean Keepalive-timeout?

Yes. It sets the timeout for idle client connections: how long Squid
waits after the last received request before it closes the connection.

 If so, can I choose KeepAlive on or KeepAlive off for each destination
 site?

No. It's global.

 And can I choose KeepAlive on or KeepAlive off on the client side and
 server side?

Yes. Both the on/off and the timeout are separate for client and server.

client-squid:

client_persistent_connections
persistent_request_timeout

squid-server:

server_persistent_connections
pconn_timeout


These set the upper limits as enforced by Squid. Clients and servers
also have their own settings which may further limit persistent
connection lifetime or use.
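
For example (the timeout values are illustrative, not recommendations):

client_persistent_connections on
persistent_request_timeout 1 minute

server_persistent_connections on
pconn_timeout 120 seconds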

Regards
Henrik




Re: [squid-users] Ignoring query string from url

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 13:40 +0530, nitesh naik wrote:

 Is there a way to ignore the query string in a url so that objects are cached
 without the query string? I am using an external perl program to strip the
 query string from the url, which is slowing down response time. I have
 started 1500 processes of the redirect program.

Then switch to the concurrent helper protocol with only one or two
helper processes. It requires a minimal change in the helper to support
the new request/response format. This significantly speeds things up, as
Squid then batches several requests to the helper, reducing the amount of
context switching.

See url_rewrite_concurrency. The protocol change is the same as for the
auth_param concurrency parameter:
request:

  channel url method ...[newline]

response:

  channel new-url[newline]
or
  channel[newline]

That is, responses need to echo back the same channel identifier as the
request had.

Requests may be answered out of order if one likes.
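
Hooked up in squid.conf this would look something like (the helper path
and counts are placeholders):

url_rewrite_program /usr/local/bin/strip_query.pl
url_rewrite_children 2
url_rewrite_concurrency 100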

Regards
Henrik




Re: [squid-users] Squid.Conf Needed For Proxy to Proxy Cache (Not Via ICP)

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 14:32 -0400, [EMAIL PROTECTED] wrote:

 I am looking to force all requests sent to an internal proxy to another 
 internal proxy.  The two proxies are separated via a WAN link and each one is 
 managed by different admins.  I am not able to use ICP.

 Does anyone have a squid.conf that would address this requirement?

http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid#head-c050a0a0382c01fbfb9da7e9c18d58bafd4eb027

Regards
Henrik




Re: [squid-users] HTTP status - in http_log file

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 14:46 -0400, Strauss, Christopher wrote:
  I am running Squid version 2.6.STABLE20 as a proxy server on
  2.2.20-gentoo-r3 Linux. I am seeing HTTP status code "-" in the http_log
  file:

It means the request was aborted before there was any form of response.

Regards
Henrik




Re: [squid-users] WARNING: Median response time is 57448 milliseconds: Why?

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 11:52 -0700, Linda W wrote:
 I see a lot of these messages in my squid warning log...
 
 Specifically, in filtering off the date, and sort+uniq+counting, I see:
 
 var/log# grep 'Median response' warn |cut -c36-90 |more|sort|uniq -c
 107  WARNING: Median response time is 57448 milliseconds
   1  WARNING: Median response time is 6996 milliseconds
   1  WARNING: Median response time is 7384 milliseconds

This can happen naturally if you at some time have only very few users
and those mostly perform downloads or other long running requests.

But if seen during normal load with mostly interactive browsing requests
then something is wrong.

So it depends on when you got these warnings and how Squid was being
used at the time.

Regards
Henrik




RE: [squid-users] HTTP status - in http_log file

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 15:51 -0400, Strauss, Christopher wrote:
 Thanks for your reply, Henrik.
 Has this always been the way squid handles these aborted requests?

As far as I can remember yes.

Regards
Henrik




[squid-users] Re: WARNING: Median response time is 57448 milliseconds: Why?

2008-10-24 Thread Henrik Nordstrom
On fre, 2008-10-24 at 13:32 -0700, Linda W wrote:

 BUT---this sure is misleading and confuses the heck out of poor
 ignorami like me, who think of a response time as something along the lines
 of (srchost) ping -> (remotehost echo: YO!) -> (srchost: YO!)

It's the median response time of all responses completed within a 5
minutes period, so if there is ANY traffic at around the time when the
download finished then Squid won't even notice the download response
time. The problem only arises if there is no other traffic at the time.

Note: The median of 1,2,2,3,4,5,10 is 3

Regards
Henrik




Re: [squid-users] headers say HIT, logs say MISS, payload is truncated...

2008-10-25 Thread Henrik Nordstrom
On fre, 2008-10-24 at 15:44 -0700, Neil Harkins wrote:

 We are using collapsed_forwarding here. I haven't tried disabling it yet.
 
 Unfortunately, since the problem appears to be load-related, I've been
 unable to reproduce for a tcpdump or running squid in debug thus far.

The mismatch in HIT/MISS is most likely related to collapsed forwarding.
Collapsed requests end up somewhere in between a hit and a miss, and may
well be reported a little inconsistently.

I have no idea about the timeout issue, unless there is a communication
issue between Squid and your web server.

Regards
Henrik




Re: [squid-users] Question about ACLs and http_access in Squid 3

2008-10-25 Thread Henrik Nordstrom
On fre, 2008-10-24 at 18:41 -0700, Tom Williams wrote:

 1224898553.333  2 www.xxx.yyy.zzz TCP_DENIED/403 2434 GET 
 http://aaa.bbb.ccc.ddd/ - NONE/- text/html
 
 yet I can't generate any debug info to provide more information as to 
 why the TCP_DENIED was issued.

Anything in cache.log?

Are you using cache_peer_access/cache_peer_domain? If so, is the IP
address included? (as far as these rules are concerned, the IP is just
another requested hostname)

Regards
Henrik




Re: [squid-users] Ignoring query string from url

2008-10-27 Thread Henrik Nordstrom
On mån, 2008-10-27 at 12:30 +0530, nitesh naik wrote:
 We use a query string in each url for busting the cache at the client end
 (browser), hence it's not important for us and it won't provide any
 incorrect results. We already use a similar configuration at CDN level.

Why do you do this?


 Henrik suggested a clever way to make changes to the
 url_rewrite_program to process requests in parallel, but unfortunately I
 am not sure how to incorporate it.

Write your own url rewriter helper. It's no more than a couple of lines
of perl.

Regards
Henrik




Re: [squid-users] Ignoring query string from url

2008-10-27 Thread Henrik Nordstrom
On mån, 2008-10-27 at 10:11 +0100, Matus UHLAR - fantomas wrote:
  Write your own url rewriter helper. It's no more than a couple of lines
  of Perl..
 
 shouldn't that be storeurl rewriter?

No. Since the backend server is not interested in this dummy query
string, a url rewriter is better.

Regards
Henrik



Re: [squid-users] Ignoring query string from url

2008-10-27 Thread Henrik Nordstrom
On mån, 2008-10-27 at 16:12 +0530, nitesh naik wrote:
 Henrik,
 
 Is this code capable for handling requests in parallel ?

It's capable of handling the concurrent helper mode, yes. It doesn't
process requests in parallel, but you don't need that.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Ignoring query string from url

2008-10-27 Thread Henrik Nordstrom
Sorry, I forgot the following important line in both scripts:

BEGIN { $|=1; }

It should be inserted as the second line in each script (just after the #! line).
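
Putting the pieces together, the first script then reads:

 #!/usr/bin/perl -an
 BEGIN { $|=1; }
 $id = $F[0];
 $url = $F[1];
 if ($url =~ m#\.ext\?#) {
     $url =~ s/\?.*//;
     print "$id $url\n";
     next;
 }
 print "$id\n";
 next;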


On mån, 2008-10-27 at 11:48 +0100, Henrik Nordstrom wrote:

 Example script removing query strings from any file ending in .ext:
 
 #!/usr/bin/perl -an
 $id = $F[0];
 $url = $F[1];
 if ($url =~ m#\.ext\?#) {
     $url =~ s/\?.*//;
     print "$id $url\n";
     next;
 }
 print "$id\n";
 next;
 
 
 Or if you want to keep it real simple:
 
 #!/usr/bin/perl -p
 s%\.ext\?.*%.ext%;
 
 but doesn't illustrate the principle that well, and causes a bit more
 work for Squid.. (but not much)
 
  I am still not clear on how to write a helper
  program which will process requests in parallel using perl? Do
  you think squirm with 1500 child processes works differently
  compared to the solution you are talking about?
 
 Yes.
 
 Regards
 Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Ignoring query string from url

2008-10-27 Thread Henrik Nordstrom
See earlier response.

On mån, 2008-10-27 at 16:59 +0530, nitesh naik wrote:
 Henrik,
 
 What if I use the following code? The logic is the same as in your program?
 
 
 #!/usr/bin/perl
 $|=1;
 while (<>) {
 s|(.*)\?(.*$)|$1|;
 print;
 next;
 }
 
 Regards
 Nitesh
 
 On Mon, Oct 27, 2008 at 4:25 PM, Henrik Nordstrom
 [EMAIL PROTECTED] wrote:
 
  Sorry, I forgot the following important line in both scripts:
 
  BEGIN { $|=1; }
 
  It should be inserted as the second line in each script (just after the #!
  line).
 
 
  On mån, 2008-10-27 at 11:48 +0100, Henrik Nordstrom wrote:
 
   Example script removing query strings from any file ending in .ext:
  
   #!/usr/bin/perl -an
   $id = $F[0];
   $url = $F[1];
   if ($url =~ m#\.ext\?#) {
       $url =~ s/\?.*//;
       print "$id $url\n";
       next;
   }
   print "$id\n";
   next;
  
  
   Or if you want to keep it real simple:
  
   #!/usr/bin/perl -p
   s%\.ext\?.*%.ext%;
  
   but doesn't illustrate the principle that well, and causes a bit more
   work for Squid.. (but not much)
  
I am still not clear on how to write a helper
program which will process requests in parallel using perl? Do
you think squirm with 1500 child processes works differently
compared to the solution you are talking about?
  
   Yes.
  
   Regards
   Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] headers say HIT, logs say MISS, payload is truncated...

2008-10-27 Thread Henrik Nordstrom
On mån, 2008-10-27 at 12:23 -0700, Neil Harkins wrote:

 The timeout is because the Content-Length header is bigger than the
 payload it sent.
 Every http client/server will hang in that situation. This isn't
 simply a misreported
 HIT-MISS in the log, this is absolutely a significant bug where
 collapsed forwarding is
 mixing up the metadata from the two branches of our Vary:
 Accept-Encoding (gzip and not),
 i.e. giving the headers and content as non-gzip, but the amount of
 payload it reads from
 the cache and sends is based on the gzip size. Disabling
 collapsed_forwarding fixed it.

Please file a bug report on this. Preferably including "squid -k debug"
cache.log output and "tcpdump -s 0 -w traffic.pcap" traces.

http://bugs.squid-cache.org/

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] 2.7 reverse proxy -- compression problems

2008-10-27 Thread Henrik Nordstrom
On mån, 2008-10-27 at 14:49 +0100, Ralf Hildebrandt wrote:
 I set up a reverse proxy in front to http://www.charite.de (typo3) since it's
 fucking slow. Now it's fast, but SOME (!) users are reporting the sites:
 
 http://www.charite.de/neurologie/
 http://www.charite.de/stoffwechsel-centrum/
 http://www.charite.de/ch/anaest/ards/
 http://www.charite.de/akademie/
 http://www.charite.de/biometrie/de/
 
 as broken. The pictures they sent me look like compressed data instead
 of a page.
 
 I distinctly remember a similar problem with HTTP/1.1 and compression
 and heise.de --- 

Apache mod_deflate is broken in many versions, hence the
broken_vary_encoding directive in squid.conf...

It could also be the case that the site doesn't announce Vary at all on
the compressed objects. This is another mod_deflate bug, but it can be
worked around easily by Apache configuration forcing the Vary:
accept-encoding header to be added on responses processed by
mod_deflate.
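
For reference, sketches of both workarounds (the acl name and the Server
pattern are illustrative, not taken from your setup). In squid.conf:

  acl apache rep_header Server ^Apache
  broken_vary_encoding allow apache

and on the Apache side, with mod_headers loaded:

  Header append Vary Accept-Encoding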

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] another config question

2008-10-27 Thread Henrik Nordstrom
On mån, 2008-10-27 at 11:58 -0500, Lou Lohman wrote:
 don't have a process that uses the network credentials already in
 place to authorize Internet Access.   The question is - is it possible
 to do that using ldap - or must I continue to beat this NTLM horse to
 death?


You need NTLM or Negotiate for that.

Note: MSIE6 only supports NTLM.


How far have you managed to beat the NTLM horse?

- Has Samba joined the domain successfully?

- Does a manual ntlm_auth test work when running as your
cache_effective_user (as defined in squid.conf)?
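
A quick manual test could look like this (domain and user are
placeholders; run it as the cache_effective_user):

  ntlm_auth --username=testuser --domain=MYDOMAIN

which prompts for the password and should report NT_STATUS_OK on success.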

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] headers say HIT, logs say MISS, payload is truncated...

2008-10-27 Thread Henrik Nordstrom
On mån, 2008-10-27 at 15:56 -0700, Neil Harkins wrote:

 I'd like to help and see this get fixed, but as I said earlier,
 it happens on about 16% of our test requests, only when
 there's 750~1050 reqs/second going through the box,
 and pretty much disappears under 500 reqs/s (off-peak).

Ouch..

 Is this excerpt significant?:

Hard to say, but probably not. It's just reading of the Vary/ETag index,
finding that the request matches the object with key
968D51EAA0C2BCF5688EAB92E8F56EE4.

Does your server support ETag on these objects? And does it properly
report different ETag values for the different variants? Or are you using
broken_vary_encoding to work around server brokenness?

 Note that I've since changed our load balancer to rewrite Accept-Encoding: 
 to Accept-Encoding: identity in case squid didn't like a null header,
 (although the example in the RFC implies that Accept-Encoding:  is valid:
 http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.3)
 but the timeouts still happened, although I didn't grab debugging then.

An Accept-Encoding header without a value is not really a valid HTTP header
(what you probably want is no Accept-Encoding header at all). But Squid
should work fine in both cases as it's just two different Vary request
fingerprints.

The blank example is an error in the specification's descriptive text and
has been corrected in the errata.

If you look closely at the above reference you'll notice the BNF says
1#(...), which means at least one. The BNF is the authoritative definition
of the syntax of this header; the rest of the text just tries to explain
it..

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] SSL Reuse behavior

2008-10-28 Thread Henrik Nordstrom
On tis, 2008-10-28 at 11:17 +0100, Andre E. wrote:

 The odd thing is the following. The time difference in ms between SSL
 Reuse enabled and disabled is considerably higher when using the
 rsa-cipher. With diffie-hellman the difference is about 40% and rsa
 about 20%.

How big are the keys? DH requires significantly larger keys to be
comparable with RSA, which makes it more expensive in terms of computation.

But it is worth noting that session reuse not only cuts down on the
computational demands, but also on network overhead, especially so if
non-persistent connections are used. By session reuse you save a
significant amount of bandwidth from the server thanks to avoiding
sending the server certificate chain, and, more noticeably for response
time, one round-trip exchange for the session establishment and key
exchange.

But the benefits are not very noticeable if you do use persistent
connections, which are an even more efficient optimization of SSL setup
costs, with both SSL and TCP setup completely eliminated by reusing an
already existing connection.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] slow response for cached objects

2008-10-29 Thread Henrik Nordstrom
On ons, 2008-10-29 at 15:08 +0530, nitesh naik wrote:
 Hi,
 
 Sometimes I see squid is taking time in delivering contents even if
 object is available in its cache. Any idea what could be the reason?
 I used external url rewrite program to strip the query string. Is it
 slowing down serving process ?
 
 The first line shows squid took 703 milliseconds to deliver the contents
 and the rest of the urls show 0 milliseconds
 
 1225272393.185    703 81.52.249.107 TCP_MEM_HIT/200 1547 GET
 http://s2.xyz.com/1699/563/i0.js?z=5002 - NONE/-
 application/x-javascript

I just discovered that there is a noticeable measurement error in the
response time in Squid-2 which may add up to a second.. this may be it.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY

2008-10-29 Thread Henrik Nordstrom
On ons, 2008-10-29 at 14:16 -0700, nairb rotsak wrote:

 http_access allow all NTLMUsers
 http_access allow our_networks

The our_networks line can not be reached; the first line either allows
the request or triggers an authentication challenge.

This should probably be

http_access allow our_networks NTLMUsers
http_access deny all


Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] NTLMv2 issue caused by Samba's Winbind helper

2008-10-29 Thread Henrik Nordstrom
On ons, 2008-10-29 at 17:23 +, Jamie Stallwood wrote:

 This is caused by Samba - does anyone know if this will ever be fixed
 properly?

Have you verified that it isn't fixed already?

Samba 2.0 is quite dated.. Current production Samba release is 3.2.4 and
the legacy version is 3.0.32.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] caching webdav traffic

2008-11-01 Thread Henrik Nordstrom
On tor, 2008-10-30 at 11:29 -0400, Seymen Ertas wrote:

 I am trying to cache webdav traffic through a squid proxy, I have the
 squid proxy configured in accel mode and have turned on the
 Cache-control: Public on my server for the reason that every request
 I send does contain an Authorization header, however I am unable to
 cache the data.

What do the response headers look like? (The request headers may also
be relevant, but strip out the authorization headers in that case, or
use a dummy account.)

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] MSNT authentication - login window

2008-11-01 Thread Henrik Nordstrom
On fre, 2008-10-31 at 08:43 -0200, Luciano Cassemiro wrote:

 Everything is OK but what bothers me is: the login window shows up when a
 user tries to connect to a forbidden site; he fills in his credentials, BUT
 after the OK button the login window appears again and again until the user
 clicks cancel.

This happens when the last acl on the http_access deny line denying access
is related to authentication.

Now I am a little confused, as the http_access rules you posted did not
have this.. are there other http_access deny lines in your squid.conf?


Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Questions on research into using digest auth against MS AD2003

2008-11-01 Thread Henrik Nordstrom
On fre, 2008-10-31 at 13:55 -0500, Richard wrote:
 * What specific piece of the puzzle on the client side is it about the
 NTLM or kerberos authentication methods that allows the authentication
 traffic to stay secure by sending only the credential hashes?

The client talks to the Microsoft SSP libraries and subsystem when
requested to provide authentication by a trusted proxy.

   (Am I correct in 
 understanding that it is the ntlm_auth program that speaks to the NTLM 
 client and negotiates for the credential hashes to be exchanged?)

No and yes, that's the server side that Squid uses for speaking to the
domain controllers to verify the provided credentials. The first thing
this does is to send a challenge which is relayed by Squid to the
client.

 * When squid is configured to use *digest* authentication, I understand 
 that the traffic between the squid server and the LDAP server is 
 encrypted.  Is the traffic between the browser and the squid server 
 also encrypted when using Digest?   If so, how is it the client browser 
 know to encrypt/hash the communications for the return trip to the server?

Digest authentication is a hashed authentication scheme, exchanging
one-time hashes instead of passwords on the wire. The actual password is
only known by the client; the server only knows how to verify that the
exchanged one-time hash corresponds to the password and current session.
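
For reference, a small sketch of the RFC 2617 calculation with qop=auth
(all values below are illustrative placeholders):

  #!/usr/bin/perl
  use Digest::MD5 qw(md5_hex);

  # H(A1) is the only secret the verifier needs to store
  my $ha1 = md5_hex("user:realm:password");
  # H(A2) depends on the specific request
  my $ha2 = md5_hex("GET:/index.html");
  # the response hash changes with every nonce/cnonce pair
  print md5_hex("$ha1:nonce:00000001:cnonce:auth:$ha2"), "\n";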

 **Short of loading a program on a client machine, are there any 
 proxy servers out there that can prompt for credentials while keeping 
 secure the communication between the workstation and the proxy server?

Using digest authentication will do this.

 ** What is it that has to happen to ensure that the authentication 
 traffic from any browser to any proxy server is encrypted?

Neither NTLM, Kerberos nor Digest is encrypted. But in all three the
exchanged proof is a one-time cryptographic hash of the password
and various session dependent details.

Modern Windows versions provide single-sign-on for all three, but also
support prompting for credentials if the proxy isn't trusted or (Digest
only) the realm is not the AD domain.

 * Considering the fact that I'm trying to use digest_ldap_auth against 
 an MS LDAP/AD 2003 server that should be storing several precomputed 
 digest hash versions of H(username:realm:password)

You can't use this helper to access the standard Active Directory
password details, but you can store an additional suitable Digest hash
in another attribute and tell the helper to use this.

Or you can use a separate Digest password file on the proxy, and only
verify group memberships etc. in the AD.
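
A minimal sketch of that variant (paths and realm are illustrative):

  auth_param digest program /usr/lib/squid/digest_pw_auth /etc/squid/digest_pw
  auth_param digest realm Squid proxy

with group membership still checked against AD through an external acl
helper such as squid_ldap_group.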


 A) Is it even possible to use digest_ldap_auth to do digest authenticate 
 against an Active Directory 2003's LDAP database server?

Yes, but not against the system password. At least not without writing an
AD extension.

 B) What would be a working example command line of a successful 
 digest_ldap_auth test against an AD 2003 server? (In my attempts, I have 
 been unable to identify the proper digest hash containing LDAP (-A) 
 attribute to use in a lookup.  I *THINK* this is because MS AD2003 
 expects the digest hash request to come via a SASL mechanism...which 
 begs the question...is there a  SASL mechanism that works with 
 squid+AD2003?)

The Microsoft AD Digest implementation expects to be fully responsible
for the Digest exchange itself, from what I understand, but I am not
sure. One way to find out is to read the Microsoft protocol
documentation, which is provided on request. I don't have access to these
documents.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Squid 3.1

2008-11-01 Thread Henrik Nordstrom
On lör, 2008-11-01 at 14:05 +0200, İsmail ÖZATAY wrote:
  I'm suspecting it may be gcc-3.3 related. Is there a more recent gcc 
  version you can upgrade to and try again?
 
  Amos
 Opps i am already using gcc version 3.3.5 .  ;) . I have just checked it...

Is there any newer GCC version than 3.3.X available for you?

GCC-3.3 was end-of-life some years ago.. 3.3.5 was released Sep 2004.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Pushing HTTP-Response into the cache

2008-11-01 Thread Henrik Nordstrom
On lör, 2008-11-01 at 19:48 +0100, Willem Stender wrote:

 So here is my question: How to push the data directly into squid's 
 cache? Is there any interfaces? Some port, so i can use sockets or 
 something like that?

cache_peer, cache_peer_access, never_direct and a suitable HTTP request
sent to Squid.
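
A minimal sketch of the combination (peer and acl names are made up):
make the content source a parent, restrict the peer to the pushed URLs
and force them through it, then simply issue a plain GET to Squid for
every URL you want cached.

  cache_peer feeder.example.com parent 80 0 no-query originserver
  acl pushed dstdomain www.example.com
  cache_peer_access feeder.example.com allow pushed
  never_direct allow pushed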


Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Clients running amok - what can one do?

2008-11-01 Thread Henrik Nordstrom
On tor, 2008-10-30 at 09:25 +0100, Ralf Hildebrandt wrote:
 Ever so often we have clients (browsers) that are somehow (?) caught
 in a tight loop, resulting in a LOT of queries - one example
 
 7996 10.39.108.198 
 http://cdn.media.zylom.com/images/site/whitelabel/promo/deluxefeature/button_up.gif
 
 (7996 requests per hour from 10.39.108.198 for
 http://cdn.media.zylom.com/images/site/whitelabel/promo/deluxefeature/button_up.gif)
 
 How can I automatically throttle such clients?
 I'm either looking for an iptables or squid solution.

Use iptables to blacklist the client until it behaves.
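
For example (assuming the standard proxy port 3128):

  iptables -I INPUT -s 10.39.108.198 -p tcp --dport 3128 -j DROP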

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Ignoring query string from url

2008-11-01 Thread Henrik Nordstrom
On tor, 2008-10-30 at 19:50 +0530, nitesh naik wrote:

 url rewrite helper script works fine for few requests ( 100 req/sec )
 but slows down response as number of requests increase and it takes
 10+ second to deliver the objects.

I've run setups like this at more than a thousand requests/s.

 Is there way to optimise it further ?
 
 url_rewrite_program  /home/zdn/bin/redirect_parallel.pl
 url_rewrite_children 2000
 url_rewrite_concurrency 5

Those two should be the other way around.

url_rewrite_concurrency 2000
url_rewrite_children 2

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Questions on research into using digest auth against MS AD2003

2008-11-02 Thread Henrik Nordstrom
On lör, 2008-11-01 at 19:49 -0700, Chuck Kollars wrote:

 One-time generally refers to the 'nonce' (and 'cnonce') used by
 challenge-response authentication protocols. But verifying the
 nonce-hashed-by-password would require using the actual original
 cleartext password, something proxies don't have (and can't obtain
 reliably yet securely). 

Digest authentication is one-time as it is dependent on the server's
unique nonce, which never repeats.

Verifying the Digest response requires access to H(A1), not necessarily
the plain-text password. The H(A1) hash is static until the user changes
his password, and is the secret keying material used by Digest
authentication.

 So proxies like Squid instead use the H{username:realm:password} field
 (which was originally intended for use mainly for identification).
 Most importantly this H(A1) field that Squid uses is the same every
 time (since Squid is always in the same 'realm'); it's *not*
 one-time in the sense of never ever repeating. 

Yes, but it's only exchanged between Squid and the user directory, not
between client and Squid. Between client and Squid there are one-time
hashes influenced by both server (squid) and client (browser) nonces
and the specific request (method and URI).

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Reverse - Apache - Syn Flood

2008-11-02 Thread Henrik Nordstrom
On sön, 2008-11-02 at 20:34 +0200, Mehmet CELIK wrote:

 I want to setup Squid reverse proxy for my apache servers. But.. Can
 Squid protect my apache servers from Syn flood and Bot-Net attack ? or
 Squid drop this connection, when apache is the syn_recv ? or Squid
 Reverse be enough to this as resource ? or Can it be resource problem?

SYN floods aren't really a big problem with correct OS tuning; they only
cost memory and a little bit of CPU to deal with. You need a sufficiently
large SYN backlog. This is independent of Squid, and the same for any TCP
service.
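
On Linux, for example, that tuning is a couple of sysctls (the values
are illustrative):

  sysctl -w net.ipv4.tcp_syncookies=1
  sysctl -w net.ipv4.tcp_max_syn_backlog=8192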

Connection flooding is worse.. and requires offending clients to be
blacklisted by firewalling once identified.

Hmm... we probably should do something about that in Squid as well..
there is a good beginner task for anyone interested in Squid
development. http://wiki.squid-cache.org/Features/TCPAccess

Regards
Henrik



signature.asc
Description: This is a digitally signed message part


Re: [squid-users] squid is dying

2008-11-03 Thread Henrik Nordstrom
On mån, 2008-11-03 at 11:26 +0545, Anuj Shrestha wrote:
 i m using squid in freebsd 7.0 below are the compile options,
 
 proxy01# squid -v
 Squid Cache: Version 3.0.STABLE9

 below are the cache.log errors
 
 FATAL: Received Segment Violation...dying.

You may want to try upgrading to 3.0.STABLE10.

Or at a minimum file a bug report including a stack backtrace.

http://wiki.squid-cache.org/SquidFaq/TroubleShooting#head-7067fc0034ce967e67911becaabb8c95a34d576d

 proxy01# tail -f /var/log/squid/cache.log
 2008/11/03 17:14:17| clientParseRequestMethod: Unsupported method in 
 request 'REGISTER sip:68.142.233.183:80;transport=tcp SIP/2.0__From: 
 sip:[EMAIL PROTECTED]:80;ta'

Hmm.. SIP requests sent to Squid? Why is that? SIP is not HTTP even if
it borrows much of the syntax from HTTP.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] WCCP load balancing and TPROXY fully transparent interception

2008-11-05 Thread Henrik Nordstrom
On mån, 2008-11-03 at 16:57 +0800, Bin Liu wrote:
 Hi,
 
 I'm going to deploy multiple squid servers in a ISP for HTTP traffic
 caching. I'm now considering using WCCP for load balancing and TPROXY
 for fully transparent interception.
 
 Here is the problem. As far as I know, Cisco WCCP module does not
 maintain connection status, it just redirect packets based on their IP
 addresses and ports. I'm just wondering if it's possible that one
 squid server(squid A, for example) sends a outbound request, but the
 router redirects the corresponding inbound response to another
 squid(squid B)? Then that's totally messed.

The redirection in both directions must match for this to work. See the
wiki for a configuration example

http://wiki.squid-cache.org/ConfigExamples/FullyTransparentWithTPROXY

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] MSNT authentication - login window

2008-11-05 Thread Henrik Nordstrom
On mån, 2008-11-03 at 09:25 -0200, Luciano Cassemiro wrote:


 http_access deny our_networks users forbidden_sites !directors

This line requests authentication as the last acl on the line is
authentication related (directors).

Rewrite it to

http_access deny our_networks !directors forbidden_sites

and it will show an access denied message instead. And it also makes
deny_info more natural if you want a custom error message based on
forbidden_sites.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] R: [squid-users] Connection to webmail sites problem using more than one parent proxy

2008-11-05 Thread Henrik Nordstrom
On tis, 2008-11-04 at 19:49 +0100, Sergio Marchi wrote:

 cache_peer myparentproxy1.dipvvf.it parent 3128 3130 sourcehash
 round-robin no-query

Don't mix round-robin and sourcehash. I am not sure what will happen in
such a confusing setup.

But you should indeed use no-query if you use sourcehash or round-robin.

 It seems to work , but the connection are established only on one
 parentproxy, even if  the clients ip addresses are different.

How many addresses did you try with? With 3 equally weighted sourcehash
parents each address hashes independently, so there is a 1/3 probability
of two addresses ending up on the same parent.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Timezone issue

2008-11-05 Thread Henrik Nordstrom
On tis, 2008-11-04 at 18:02 +1100, Rod Taylor wrote:

 My squid is running on a machine that is set to local time in both
 software and hardware. Squid shows GMT in all error messages and uses
 GMT in the ACLs. How do I set Squid to use local time not GMT. Squid is
 the only program to do this...

Squid FAQ "I want to use local time zone in error messages":
http://wiki.squid-cache.org/SquidFaq/SquidAcl#head-de11286b4accdede48d411359ab365725673c88a

Regards
Henrik



signature.asc
Description: This is a digitally signed message part


Re: [squid-users] squid cache proxy + Exchange 2007 problems

2008-11-05 Thread Henrik Nordstrom
On tis, 2008-11-04 at 01:58 -0800, Retaliator wrote:

 on the squid log i see
 TCP_MISS/404 0 CONNECT SERVERNAME.SUBDOMAIN.beeper.co.il:443 - DIRECT/- -
 servername and subdomain are smt else i changed.

From this it looks like your Squid can not resolve the requested hostname
into an IP.

Check your DNS.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Squid-3 + Tproxy4 clarification

2008-11-05 Thread Henrik Nordstrom
On tis, 2008-11-04 at 22:37 +0530, Arun Srinivasan wrote:

 Yes. I could see the connections go over lo interface. However, it is
 not getting handled by the stack.

Public addresses can not talk to loopback addresses (127.X). This is an
intentional security restriction in the TCP/IP stack.

Also I don't think using TPROXY internally on the same server is even
intended to work. Its intended use is on traffic being routed by the
proxy to some other servers (i.e. the Internet).

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] CACHEMGR - What`s wrong?

2008-11-05 Thread Henrik Nordstrom
On tis, 2008-11-04 at 14:22 -0300, Rodrigo de Oliveira Gomes wrote:
Cache Manager Error
 
target 192.168.47.89:3128 not allowed in cachemgr.conf

cachemgr.conf:
localhost
192.168.47.89:3128
 
 Am I doing something wrong? Missing configuration? Permissions? I would
 appreciate a hand.

Can cachemgr.cgi open cachemgr.conf?

Is cachemgr.conf in the proper location? Either the same directory as
cachemgr.cgi (or to be exact the current working directory when
cachemgr.cgi runs.. usually the same directory, but that depends on the
web server setup), or if not there then prefix/etc/cachemgr.conf.

If you are unsure about the prefix location then "strings cachemgr.cgi
| grep cachemgr.conf" should tell you.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] squid 2.6/block https

2008-11-05 Thread Henrik Nordstrom
On ons, 2008-11-05 at 17:57 +0530, sohan krishi wrote:

 My configuration is Ubuntu-iptables-squid2.6/Transparent Proxy. I
 block gmail for all employees in my company. My problem is, squid does
 not block https://gmail.com. And does not even log https://gmail.com!
 I didn't know this until I saw one of our employees browsing gmail!

It's because https is encrypted on port 443.

 I did add this to my iptables : #iptables -t nat -A PREROUTING -i eth1
 -p tcp --dport 443 -j DNAT --to eth0:3128 but get this meesage in
 access.log : error:unsupported-request-method

It's because https is encrypted. It sort of works if you redirect it to
an https_port, but that is probably not what you want as it breaks many
things.

The proper solution to all this is to use proxy settings. It's fairly
easy to roll out proxy settings company-wide using group policies,
login scripts, or even auto discovery using WPAD, and then use
interception and firewalling only as a backup method for those who for
some reason did not get the proxy settings.

 Can anyone please help me how to block gmail. I want to block
 gmail/gtalk to all IPs except couple of IPs.

You'll have to block port 443 traffic to almost all addresses used by
google servers..

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] SSL Site Problem...

2008-11-05 Thread Henrik Nordstrom
Most likely a window scaling issue. There are still very many broken
firewalls out there..

Squid FAQ "System Weirdnesses" - Linux - Some sites load extremely slowly
or not at all:
http://wiki.squid-cache.org/SquidFaq/SystemWeirdnesses#head-4920199b311ce7d20b9a0d85723fd5d0dfc9bc84

Regards
Henrik

On ons, 2008-11-05 at 15:07 +, Andy McCall wrote:
 Hi Folks,
 
 I have a problem accessing an SSL site through my Squid setup, IE just spins 
 its blue circle forever, and doesn't seem to ever actually time out.  The 
 same site works when going direct.  I have tried multiple browsers to 
 eliminate the browser as the issue.
 
 Any help is appreciated, as I am really stuck now...
 
 The site is:
 
 https://secure.crtsolutions.co.uk
 
 I am using:
 
 Squid Cache: Version 2.6.STABLE18
 configure options:  '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin' 
 '--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid' 
 '--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid' 
 '--enable-async-io' '--with-pthreads' 
 '--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter' 
 '--enable-arp-acl' '--enable-epoll' '--enable-removal-policies=lru,heap' 
 '--enable-snmp' '--enable-delay-pools' '--enable-htcp' 
 '--enable-cache-digests' '--enable-underscores' '--enable-referer-log' 
 '--enable-useragent-log' '--enable-auth=basic,digest,ntlm' '--enable-carp' 
 '--enable-follow-x-forwarded-for' '--with-large-files' '--with-maxfd=65536' 
 'i386-debian-linux' 'build_alias=i386-debian-linux' 
 'host_alias=i386-debian-linux' 'target_alias=i386-debian-linux' 'CFLAGS=-Wall 
 -g -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
 
 The entry in access.log is:
 
 1225894644.785   3106 10.XX.XX.XX TCP_MISS/200 174 CONNECT 
 secure.crtsolutions.co.uk:443 - DIRECT/195.114.102.18 -
 
 The cache.log entry is (if there is too much here, I apologise, I am not sure 
 how much to post!):
 
 2008/11/05 14:17:25| parseHttpRequest: Client HTTP version 1.0.
 2008/11/05 14:17:25| parseHttpRequest: Method is 'CONNECT'
 2008/11/05 14:17:25| parseHttpRequest: URI is 'secure.crtsolutions.co.uk:443'
 2008/11/05 14:17:25| parseHttpRequest: req_hdr = {User-Agent: Mozilla/4.0 
 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; Media 
 Center PC 5.0; .NET CLR 3.0.04506; InfoPath.2)^M
 Proxy-Connection: Keep-Alive^M
 Content-Length: 0^M
 Host: secure.crtsolutions.co.uk^M
 Pragma: no-cache^M
 ^M
 }
 2008/11/05 14:17:25| parseHttpRequest: end = {}
 2008/11/05 14:17:25| parseHttpRequest: prefix_sz = 294, req_line_sz = 48
 2008/11/05 14:17:25| parseHttpRequest: Request Header is
 User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET 
 CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506; InfoPath.2)^M
 Proxy-Connection: Keep-Alive^M
 Content-Length: 0^M
 Host: secure.crtsolutions.co.uk^M
 Pragma: no-cache^M
 ^M
 
 2008/11/05 14:17:25| parseHttpRequest: Complete request received
 2008/11/05 14:17:25| conn-in.offset = 0
 2008/11/05 14:17:25| commSetTimeout: FD 44 timeout 86400
 2008/11/05 14:17:25| init-ing hdr: 0x191b82c8 owner: 2
 2008/11/05 14:17:25| parsing hdr: (0x191b82c8)
 User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET 
 CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506; InfoPath.2)^M
 Proxy-Connection: Keep-Alive^M
 Content-Length: 0^M
 Host: secure.crtsolutions.co.uk^M
 Pragma: no-cache^M
 
 2008/11/05 14:17:25| creating entry 0x1a1f39a0: near 'User-Agent: Mozilla/4.0 
 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; Media 
 Center PC 5.0; .NET CLR 3.0.04506; InfoPath.2)'
 2008/11/05 14:17:25| created entry 0x1a1f39a0: 'User-Agent: Mozilla/4.0 
 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; Media 
 Center PC 5.0; .NET CLR 3.0.04506; InfoPath.2)'
 2008/11/05 14:17:25| 0x191b82c8 adding entry: 50 at 0
 2008/11/05 14:17:25| creating entry 0x1a1ea180: near 'Proxy-Connection: 
 Keep-Alive'
 2008/11/05 14:17:25| created entry 0x1a1ea180: 'Proxy-Connection: Keep-Alive'
 2008/11/05 14:17:25| 0x191b82c8 adding entry: 41 at 1
 2008/11/05 14:17:25| creating entry 0x82b4a88: near 'Content-Length: 0'
 2008/11/05 14:17:25| created entry 0x82b4a88: 'Content-Length: 0'
 2008/11/05 14:17:25| 0x191b82c8 adding entry: 14 at 2
 2008/11/05 14:17:25| creating entry 0x1a1f38d0: near 'Host: 
 secure.crtsolutions.co.uk'
 2008/11/05 14:17:25| created entry 0x1a1f38d0: 'Host: 
 secure.crtsolutions.co.uk'
 2008/11/05 14:17:25| 0x191b82c8 adding entry: 27 at 3
 2008/11/05 14:17:25| creating entry 0x1a1f3910: near 'Pragma: no-cache'
 2008/11/05 14:17:25| created entry 0x1a1f3910: 'Pragma: no-cache'
 2008/11/05 14:17:25| 0x191b82c8 adding entry: 37 at 4
 2008/11/05 14:17:25| 0x191b82c8 lookup for 20
 2008/11/05 14:17:25| clientSetKeepaliveFlag: http_ver = 1.0
 2008/11/05 14:17:25| clientSetKeepaliveFlag: method = CONNECT
 2008/11/05 14:17:25| 0x191b82c8 lookup for 41
 2008/11/05 14:17:25| 0x191b82c8: joining for id 41
 2008/11/05 

Re: [squid-users] Auto-configuration file hosted by squid

2008-11-06 Thread Henrik Nordstrom
On tor, 2008-11-06 at 11:39 +0100, Jan Welker wrote:

 My Question for you is:
 Is Squid capable of hosting the auto-configuration file? Or is there a
 workaround for that?

There is a workaround if you enable the transparent option. You can then
use a url rewriter to rewrite the PAC URL to a web server of your
choice.
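
A sketch of such a rewriter, in the same spirit as the earlier url
rewriter examples (hostnames are made up):

  #!/usr/bin/perl -p
  BEGIN { $|=1; }
  # answer PAC requests from a real web server instead
  s%^http://[^/ ]+/proxy\.pac.*%http://webserver.example.com/proxy.pac%;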

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] squid cache proxy + Exchange 2007 problems

2008-11-06 Thread Henrik Nordstrom
On tor, 2008-11-06 at 05:43 -0800, Retaliator wrote:
 My Squid server is on the external (DMZ) with real ip, of course it can't
 resolve internal hosts like the exchange server..

Then how do you expect the server to be able to connect to internal hosts
by name?
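
If the proxy is meant to reach them, it needs a resolver that knows the
internal names, e.g. (the address is illustrative):

  dns_nameservers 10.0.0.53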

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] mallinfo() vs. sbrk()

2008-11-06 Thread Henrik Nordstrom
On tor, 2008-11-06 at 13:17 -0800, Mark Nottingham wrote:
 I remember reading somewhere (can't forget where, and I may be  
 incorrect) that when available, sbrk is a more reliable indication of  
 memory use for squid than mallinfo().

mallinfo is more reliable than sbrk when it works... but at least Linux
mallinfo fails when the process grows above 2GB in size..

mallinfo includes all memory allocated by the memory allocator (malloc
and friends).

sbrk includes the size of the data segment, where most memory
allocations go, but not all. Large allocations are handled by malloc
outside of the data segment.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part

