Re: [squid-dev] FYI: the C++11 roadmap

2014-11-05 Thread Henrik Nordstrom
ons 2014-11-05 klockan 13:31 +0100 skrev Kinkie:
 MAYBE this could be mitigated by providing RPMs for RHEL6/CentOS 6
 that are built on a custom server with a recent gcc but older
 libraries?
 What do you guys think?

Might work, but it means Squid will not get updated in EPEL or other
repositories which follow the RHEL release, as an upgrade of GCC is out of
reach for any of them, so users will be forced to use our repositories
for Squid.

And it's a major pain for any administrator that needs to test a patch
or change config options, as upgrading core components such as GCC to an
out-of-repository version is out of the question.

Regards
Henrik



Re: RFC: how to handle OS-specific configure options?

2010-04-25 Thread Henrik Nordstrom
fre 2010-04-23 klockan 16:41 +0200 skrev Kinkie:

 
 It's probably time for another interim merge.

+1





Re: squid 3.1 ICAP problem

2010-04-25 Thread Henrik Nordstrom
fre 2010-04-23 klockan 16:43 +0100 skrev Dan Searle:
16:36:29.552442 IP localhost.1344 > localhost.50566: P 1:195(194) ack 72 
 win 512 nop,nop,timestamp 62423291 62423291
 0x0000:  4500 00f6 7ac7 4000 4006 c138 7f00 0001  E...z.@.@..8....
 0x0010:  7f00 0001 0540 c586 bdca fa3d bd99 b529  .....@.....=...)
 0x0020:  8018 0200 feea 0000 0101 080a 03b8 80fb  ................
 0x0030:  03b8 80fb 3230 3020 4f4b 0d0a 4461 7465  ....200 OK..Date


That's not a correct ICAP response status line. It should be:

ICAP/1.0 200 OK

Just as for HTTP, ICAP responses always start with the ICAP protocol
identifier and version.

RFC 3507, 4.3.3 Response Headers, first paragraph.
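For reference, a complete ICAP response head along those lines could look
like the following (the header values are illustrative, not taken from the
capture above):

ICAP/1.0 200 OK
Date: Fri, 23 Apr 2010 15:36:29 GMT
Server: Example-ICAP-Server/1.0
ISTag: "5BDEEEA9-12E4-2"
Encapsulated: res-hdr=0, res-body=194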

Regards
Henrik



Re: [PATCH] [RFC] Enforce separate http_port modes

2010-04-22 Thread Henrik Nordstrom
tor 2010-04-22 klockan 00:25 + skrev Amos Jeffries:
  In addition it may make sense to be able to selectively enable tproxy
  spoofing independent of interception, which would also solve the above
  reservation.
 
 From TPROXYv4 the intercept options are mutually exclusive. Due to the
 nature of the NAT and TPROXY lookups.

I was not talking about the options above, just the functionality of
intercepting vs spoofing.

The tproxy http_port option enables both functions (intercepting
requests, and spoofing outgoing requests).

 TPROXY in other modes has a wide set of possibilities and potential
 problems we will need to consider carefully before enabling.

The basic problems are the same in all modes. For tproxy spoofing to work,
return traffic on forwarded requests needs to find its way back to the
right cache server, not to the requesting client or another cache server.

The available solutions to that problem differ slightly depending on
how client traffic arrives on the cache servers.

Regards
Henrik



Re: /bzr/squid3/trunk/ r10399: Back out the tweak on rev10398.

2010-04-22 Thread Henrik Nordstrom
tor 2010-04-22 klockan 17:13 +1200 skrev Amos Jeffries:

 What hudson was showing was the build always exiting at the first of these 
 files. Even in the awk ok -> SUCCESS case. That needs to be figured 
 out before && exit 1 can go back on.

Right. The sequence of || && is a bit ambiguous. It needs to be explicitly
grouped, as in my first variant, to get the desired result:

awk || (rm && exit)

alternatively

awk || (rm ; exit)

or using explicit flow control

if ! awk ; then rm ; exit; fi
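For anyone bitten by this later: in POSIX shell, && and || have equal
precedence and associate left to right, so the ungrouped form runs the
final command even when the first one succeeds. A quick demonstration with
generic commands (not the actual Makefile rule):

$ true || echo cleanup && echo exiting
exiting
$ true || ( echo cleanup && echo exiting )
$

The first form prints exiting although true succeeded; the grouped form
does not.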


Regards
Henrik



Re: /bzr/squid3/trunk/ r10399: Back out the tweak on rev10398.

2010-04-21 Thread Henrik Nordstrom
mån 2010-04-19 klockan 23:25 + skrev Amos Jeffries:

  -  $(AWK) -f $(srcdir)/mk-globals-c.awk < $(srcdir)/globals.h > $@ ||
  $(RM) -f $@ && exit 1
  +  $(AWK) -f $(srcdir)/mk-globals-c.awk < $(srcdir)/globals.h > $@ ||
  $(RM) -f $@
  
  Why?
  
 
 I had bound the exit(1) to success of the rm command, not failure of the
 awk. When we get time to sort out the shell nesting of automake it needs to
 be added back in to ensure the make run exits on awk failure.

Sure you had.

awk ok -> SUCCESS
awk fail -> rm
   rm fail -> FAILURE
   rm ok -> exit 1 == FAILURE

exit in this context is just there to make the shell command return an error
so make stops. rm failing is in itself an error, and no explicit exit
with an error is needed in that case. If you absolutely want exit to be
called in both cases then group rm and exit:

$(AWK) -f $(srcdir)/mk-globals-c.awk < $(srcdir)/globals.h > $@ || 
($(RM) -f $@ ; exit 1)

but I prefer your original

$(AWK) -f $(srcdir)/mk-globals-c.awk < $(srcdir)/globals.h > $@ || 
$(RM) -f $@ && exit 1

the effect of both is the same, and your original is both clearer and more 
efficient.

Regards
Henrik



Re: [PATCH] [RFC] Enforce separate http_port modes

2010-04-21 Thread Henrik Nordstrom
ons 2010-04-21 klockan 02:44 + skrev Amos Jeffries:

 It alters documentation to call the accel, tproxy, intercept, and sslbump
 options mode flags, since they determine the overall code paths by which
 received traffic is handled.

+1, but with a slight reservation on tproxy, as technically there is
nothing that stops tproxy + accel from being combined. The current
implementations do not mix well, however.

In addition it may make sense to be able to selectively enable tproxy
spoofing independent of interception, which would also solve the above
reservation.

Regards
Henrik



Re: Squid 2.7

2010-04-21 Thread Henrik Nordstrom
fre 2010-04-09 klockan 09:23 -0400 skrev Kulkarni, Hemant V:

 I am trying to understand the squid 2.7 STABLE7 code base. On the website
 I see everything pertaining to squid 3. Is there any link or archive
 where I can get more information and understand the squid 2.7 code
 base?

The little documentation there is on the Squid-2 codebase can be found
in doc/programmers-guide/ in the source distribution.

For additional help just ask on squid-...@squid-cache.org.

Regards
Henrik



Re: /bzr/squid3/trunk/ r10399: Back out the tweak on rev10398.

2010-04-19 Thread Henrik Nordstrom
tor 2010-04-15 klockan 22:19 +1200 skrev Amos Jeffries:
 
 revno: 10399
 committer: Amos Jeffries squ...@treenet.co.nz
 branch nick: trunk
 timestamp: Thu 2010-04-15 22:19:26 +1200
 message:
   Back out the tweak on rev10398.


  globals.cc: globals.h mk-globals-c.awk
 - $(AWK) -f $(srcdir)/mk-globals-c.awk < $(srcdir)/globals.h > $@ || 
 $(RM) -f $@ && exit 1
 + $(AWK) -f $(srcdir)/mk-globals-c.awk < $(srcdir)/globals.h > $@ || 
 $(RM) -f $@

Why?

Regards
Henrik



Re: Upgrade repository format for trunk?

2010-03-26 Thread Henrik Nordstrom
fre 2010-03-26 klockan 12:18 +1300 skrev Amos Jeffries:

 FWIW, I've already upgraded my local repos months ago. There have been 
 no ill side effects from 2a here despite the source repo being 0.9.  If 
 you can, it's worth doing the upgrade locally anyway, even before the master.

How do you merge stuff back to the main repo from your upgraded one?

Or maybe merge works and just push/commit(bound) don't..?

Regards
Henrik



Re: /bzr/squid3/trunk/ r10322: Bug 2873: undefined symbol rint

2010-03-12 Thread Henrik Nordstrom
Every AC_CHECK_LIB where we look for main needs to be redone to look for
some sane function. See the bug report for details.
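As a sketch of the intended pattern, using rint (the symbol from the
commit quoted below) as the example:

dnl Fragile: main may resolve from the C library even when -lm is absent
AC_CHECK_LIB(m, main)
dnl Better: probe for the symbol actually used; -lm is added only if needed
AC_SEARCH_LIBS([rint], [m])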


ons 2010-03-10 klockan 20:59 +1300 skrev Amos Jeffries:
 
 revno: 10322
 committer: Amos Jeffries squ...@treenet.co.nz
 branch nick: trunk
 timestamp: Wed 2010-03-10 20:59:21 +1300
 message:
   Bug 2873: undefined symbol rint
   
   Detect math library properly based on the rint symbol we need.
   On Solaris at least the main symbol does not exist.
 modified:
   configure.in
   src/Common.am
 plain text attachment (r10322.diff)
 === modified file 'configure.in'
 --- a/configure.in2010-02-03 12:36:21 +
 +++ b/configure.in2010-03-10 07:59:21 +
 @@ -2973,14 +2973,22 @@
fi
  
  AC_CHECK_LIB(regex, main, [REGEXLIB=-lregex])
 +MATHLIB=
  case $host_os in
  mingw|mingw32)
   AC_MSG_NOTICE([Use MSVCRT for math functions.])
   ;;
   *)
 - AC_CHECK_LIB(m, main)
 + AC_SEARCH_LIBS([rint],[m],[
 + case "$ac_cv_search_rint" in
 + no*)
 + ;;
 + *)
 + MATHLIB="$ac_cv_search_rint"
 + esac ])
   ;;
  esac
 +AC_SUBST(MATHLIB)
  
  dnl Enable IPv6 support
  AC_MSG_CHECKING([whether to enable IPv6])
 
 === modified file 'src/Common.am'
 --- a/src/Common.am   2009-11-21 05:29:45 +
 +++ b/src/Common.am   2010-03-10 07:59:21 +
 @@ -29,6 +29,8 @@
  $(OBJS): $(top_srcdir)/include/version.h $(top_builddir)/include/autoconf.h
  
  ## Because compatibility is almost universal. And the link order is 
 important.
 +## NP: libmisc util.cc depends on rint from math library
  COMPAT_LIB = \
   -L$(top_builddir)/lib -lmiscutil \
 - $(top_builddir)/compat/libcompat.la
 + $(top_builddir)/compat/libcompat.la \
 + $(MATHLIB)
 




[patch] Disable ufsdump compilation due to linkage issues.

2010-03-07 Thread Henrik Nordstrom
As reported some weeks ago, ufsdump fails to link on the upcoming Fedora
13 release due to linking issues, and as reported by Amos the same
linking issues are now also seen on Debian since somewhere between March
2 and 5.

While investigating this I reached the following conclusions:

- We are not actually installing ufsdump
- The dependencies between the Squid libraries are very non-obvious,
with libraries depending on plain object files and other strange things.
- The ufsdump linkage issues are somehow triggered by the libraries
including objects needing symbols from objects not included in that link
- Those failing library objects are not actually needed by ufsdump.
Linking succeeds after repeatedly removing each reported failing object
from the squid libraries.
- If the libraries were shared libraries then linking would fail on all
systems

As we are not installing ufsdump I propose we take ufsdump out from the
default compilation until these issues can be better understood. The
attached patch does just that.

Regards
Henrik
diff -up squid-3.1.0.16/src/Makefile.am.noufsdump squid-3.1.0.16/src/Makefile.am
--- squid-3.1.0.16/src/Makefile.am.noufsdump	2010-02-18 23:14:16.0 +0100
+++ squid-3.1.0.16/src/Makefile.am	2010-02-18 23:15:51.0 +0100
@@ -172,14 +172,14 @@ EXTRA_PROGRAMS = \
 	recv-announce \
 	tests/testUfs \
 	tests/testCoss \
-	tests/testNull
+	tests/testNull \
+	ufsdump
 
 ## cfgen is used when building squid
 ## ufsdump is a debug utility, it is possibly useful for end users with cache
 ## corruption, but at this point we do not install it.
 noinst_PROGRAMS = \
-	cf_gen \
-	ufsdump
+	cf_gen
 
 sbin_PROGRAMS = \
 	squid
diff -up squid-3.1.0.16/src/Makefile.in.noufsdump squid-3.1.0.16/src/Makefile.in
--- squid-3.1.0.16/src/Makefile.in.noufsdump	2010-02-18 23:12:26.0 +0100
+++ squid-3.1.0.16/src/Makefile.in	2010-02-18 23:13:16.0 +0100
@@ -57,8 +57,8 @@ check_PROGRAMS = tests/testAuth$(EXEEXT)
 EXTRA_PROGRAMS = DiskIO/DiskDaemon/diskd$(EXEEXT) unlinkd$(EXEEXT) \
 	dnsserver$(EXEEXT) recv-announce$(EXEEXT) \
 	tests/testUfs$(EXEEXT) tests/testCoss$(EXEEXT) \
-	tests/testNull$(EXEEXT)
-noinst_PROGRAMS = cf_gen$(EXEEXT) ufsdump$(EXEEXT)
+	tests/testNull$(EXEEXT) ufsdump$(EXEEXT)
+noinst_PROGRAMS = cf_gen$(EXEEXT)
 sbin_PROGRAMS = squid$(EXEEXT)
 bin_PROGRAMS =
 libexec_PROGRAMS = $(am__EXEEXT_1) $(DISK_PROGRAMS) $(am__EXEEXT_2)


Re: negotiate auth with fallback to other schemes

2010-03-06 Thread Henrik Nordstrom
fre 2010-03-05 klockan 20:44 + skrev Markus Moeller:

 I don't understand this part. Usually the KDC is on the AD, so how can NTLM 
 work and Kerberos not?

The NTLM client just needs the local computer configuration +
credentials entered interactively by the user. All communication with
the AD is indirect via the proxy. The client does not need any form of
ticket before trying to authenticate via NTLM, just the username +
domain + password.

For similar reasons NTLM also does not have any protection from MITM
session theft, meaning that the auth exchange done to the proxy may just
as well be used by a MITM attacker to authenticate as that client to any
server in the network for any purpose.

Regards
Henrik



Re: [REVIEW] Carefully verify digest responses

2010-03-06 Thread Henrik Nordstrom
tis 2010-03-02 klockan 18:06 +0100 skrev Henrik Nordstrom:

 Comments are very welcome while I validate the parser changes.

Validation completed and committed to trunk.

Forgot to mention the relevant bug reports in the commit messages.

Parser: 2845

Stale: 2367

Both changes need to be merged back all the way to 3.0.

Regards
Henrik



[MERGE] New digest scheme helper protocol

2010-03-06 Thread Henrik Nordstrom
The current digest scheme helper protocol has serious issues with how to
handle quote (") characters. An easy way to solve this is to switch to a
protocol similar to what we already use for basic helpers, using
URL-escaped strings:

  urlescape(user) SPACE urlescape(realm) NEWLINE

Note: the reason why the realm is URL-escaped as well is to allow for
future expansions of the protocol.

The default is still the old quoted form, as the helpers have not yet been
converted over. But once the helpers have been converted, the default
should change to the URL-escaped form.
# Bazaar merge directive format 2 (Bazaar 0.90)
# revision_id: hen...@henriknordstrom.net-20100306200338-\
#   py3969agu3ccjdeh
# target_branch: /home/henrik/SRC/squid/trunk/
# testament_sha1: b4225d56b5d7245eacec5a2406019e692aa00cce
# timestamp: 2010-03-06 21:03:58 +0100
# base_revision_id: hen...@henriknordstrom.net-20100306194302-\
#   eknq7yvpt5ygzkdz
# 
# Begin patch
=== modified file 'src/auth/digest/auth_digest.cc'
--- src/auth/digest/auth_digest.cc	2010-03-06 14:47:46 +
+++ src/auth/digest/auth_digest.cc	2010-03-06 19:48:42 +
@@ -50,6 +50,7 @@
 #include "SquidTime.h"
 /* TODO don't include this */
 #include "digestScheme.h"
+#include "rfc1738.h"
 
 /* Digest Scheme */
 
@@ -935,7 +936,9 @@
 safe_free(digestAuthRealm);
 }
 
-AuthDigestConfig::AuthDigestConfig() : authenticateChildren(20)
+AuthDigestConfig::AuthDigestConfig() :
+	authenticateChildren(20),
+	helperProtocol(DIGEST_HELPER_PROTOCOL_QUOTEDSTRING)
 {
 /* TODO: move into initialisation list */
 /* 5 minutes */
@@ -978,6 +981,17 @@
 parse_onoff(&PostWorkaround);
 } else if (strcasecmp(param_str, "utf8") == 0) {
 parse_onoff(&utf8);
+} else if (strcasecmp(param_str, "protocol") == 0) {
+	char *token = NULL;
+	parse_eol(&token);
+	if (strcmp(token, "quoted") == 0) {
+	helperProtocol = DIGEST_HELPER_PROTOCOL_QUOTEDSTRING;
+	} else if (strcmp(token, "urlescaped") == 0) {
+	helperProtocol = DIGEST_HELPER_PROTOCOL_URLESCAPE;
+	} else {
+	debugs(29, 0, "unrecognised digest auth helper protocol '" << token << "'");
+	}
+	safe_free(token);
 } else {
 debugs(29, 0, "unrecognised digest auth scheme parameter '" << param_str << "'");
 }
@@ -1237,10 +1251,10 @@
 }
 
 /* Sanity check of the username.
- * " can not be allowed in usernames until the digest helper protocol
- * has been redone
+ * " can not be allowed in usernames when using the old quotedstring
+ * helper protocol
  */
-if (strchr(username, '"')) {
+if (helperProtocol == DIGEST_HELPER_PROTOCOL_QUOTEDSTRING && strchr(username, '"')) {
 debugs(29, 2, "authenticateDigestDecode: Unacceptable username '" << username << "'");
 return authDigestLogUsername(username, digest_request);
 }
@@ -1390,7 +1404,6 @@
 AuthDigestUserRequest::module_start(RH * handler, void *data)
 {
 DigestAuthenticateStateData *r = NULL;
-char buf[8192];
 digest_user_h *digest_user;
 assert(user()->auth_type == AUTH_DIGEST);
 digest_user = dynamic_cast<digest_user_h *>(user());
@@ -1402,20 +1415,35 @@
 return;
 }
 
+
 r = cbdataAlloc(DigestAuthenticateStateData);
 r->handler = handler;
 r->data = cbdataReference(data);
 r->auth_user_request = this;
 AUTHUSERREQUESTLOCK(r->auth_user_request, r);
+
+const char *username = digest_user->username();
+char utf8str[1024];
 if (digestConfig.utf8) {
-char userstr[1024];
-latin1_to_utf8(userstr, sizeof(userstr), digest_user->username());
-snprintf(buf, 8192, "\"%s\":\"%s\"\n", userstr, realm);
-} else {
-snprintf(buf, 8192, "\"%s\":\"%s\"\n", digest_user->username(), realm);
-}
-
-helperSubmit(digestauthenticators, buf, authenticateDigestHandleReply, r);
+latin1_to_utf8(utf8str, sizeof(utf8str), username);
+	username = utf8str;
+}
+
+MemBuf mb;
+
+mb.init();
+switch(digestConfig.helperProtocol) {
+case AuthDigestConfig::DIGEST_HELPER_PROTOCOL_QUOTEDSTRING:
+	mb.Printf("\"%s\":\"%s\"\n", username, realm);
+	break;
+case AuthDigestConfig::DIGEST_HELPER_PROTOCOL_URLESCAPE:
+	mb.Printf("%s ", rfc1738_escape(username));
+	mb.Printf("%s\n", rfc1738_escape(realm));
+	break;
+}
+
+helperSubmit(digestauthenticators, mb.buf, authenticateDigestHandleReply, r);
+mb.clean();
 }
 
 DigestUser::DigestUser (AuthConfig *aConfig) : AuthUser (aConfig), HA1created (0)

=== modified file 'src/auth/digest/auth_digest.h'
--- src/auth/digest/auth_digest.h	2009-12-16 03:46:59 +
+++ src/auth/digest/auth_digest.h	2010-03-06 16:09:24 +
@@ -163,6 +163,10 @@
 int CheckNonceCount;
 int PostWorkaround;
 int utf8;
+enum {
+	DIGEST_HELPER_PROTOCOL_QUOTEDSTRING,
+	DIGEST_HELPER_PROTOCOL_URLESCAPE
+} helperProtocol;
 };
 
 typedef class AuthDigestConfig auth_digest_config;

=== modified file 'src/cf.data.pre'
--- src/cf.data.pre	2010-03-06 19:43:02 +
+++ src/cf.data.pre	2010-03-06 20:03:38 +
@@ -181,13 +181,18 @@
 	=== Parameters for the digest scheme 

Re: [REVIEW] Carefully verify digest responses

2010-03-05 Thread Henrik Nordstrom
tor 2010-03-04 klockan 20:57 -0700 skrev Alex Rousskov:

 Please consider adding a cppunit test to check a few common and corner
 parsing cases.

Unfortunately the existing auth tests we have do not come close to
touching actual request parsing/handling, and trying to get things in
shape to the level where this can be exercised with cppunit is a little
heavier than I have time for today.

Regards
Henrik



Re: [PATCH] HTTP/1.1 to servers

2010-03-05 Thread Henrik Nordstrom
fre 2010-03-05 klockan 23:08 +1300 skrev Amos Jeffries:
 Sending HTTP/1.1 in all version details sent to peers and servers.
 
 Passes the basic tests I've thrown at it. If anyone can think of some 
 please do.

The src/client_side.cc change looks wrong to me.. it should not overwrite
the version sent by the client when parsing the request headers.

The upgrade should only be done in http.cc when making the outgoing request,
and in client_side_reply.cc when making the outgoing response; in between,
the received version should be preserved as much as possible.

Regards
Henrik



Re: [REVIEW] Carefully verify digest responses

2010-03-03 Thread Henrik Nordstrom
ons 2010-03-03 klockan 22:19 +1300 skrev Amos Jeffries:

 
 This debugs seems to have incorrect output for the test being done:
 + /* check cnonce */
 + if (!digest_request->cnonce || digest_request->cnonce[0] == '\0') {
 + debugs(29, 2, "authenticateDigestDecode: Missing URI field");

Thanks. Fixed.

Regards
Henrik



Re: squid_kerb_auth logging patch

2010-02-09 Thread Henrik Nordstrom
Reviewed and applied.

tis 2010-02-09 klockan 19:20 + skrev Markus Moeller:
 Hi Amos,
 
   Here are patches for squid 3.1 and squid 3-head to add ERROR, WARNING, 
 etc. to the logging messages.
 
 
 Regards
 Markus 




Re: SMB help needed

2009-12-12 Thread Henrik Nordstrom
tor 2009-12-10 klockan 00:34 +1300 skrev Amos Jeffries:

 A few months ago we had this argument out and decided to keep them for 
 the people who still don't want to or can't install Samba.

Indeed. The SMB helpers are easier to get going as one does not need to
join the domain or anything, just being able to speak to the SMB port of
a server in the domain.

But other than that the helpers are in quite crappy shape..

Regards
Henrik



Re: Assertion in clientProcessBody

2009-12-07 Thread Henrik Nordstrom
tis 2009-12-08 klockan 13:34 +1100 skrev Mark Nottingham:

 Any thoughts here? Should this really be >=, or should clientProcessBody 
 never get a 0 size_left?

It's done when size_left == 0, and no further body processing handler
should be active on this request at that time. Any data on the connection
at this time is either surplus data (HTTP violation) or a pipelined
request waiting to be processed.

If you look a little further down (about one screen) in
clientProcessBody you'll also see that the body reader gets unregistered
when processing reaches 0.

But it would not be harmful to make clientProcessBody gracefully handle
size_left == 0 I guess.

A backtrace would be nice.

Regards
Henrik



Re: R: obsoleting nextstep?

2009-12-07 Thread Henrik Nordstrom
lör 2009-12-05 klockan 00:32 +1300 skrev Amos Jeffries:

 +1. Last mentioned by that name in a press release 14 years ago. Much 
 more talk and use of Squid on the various Linux flavors built for the PS3.
 I believe they come under GNU/Linux tags.

No idea what newsos was. Searching... a Sony M68K workstation in the 80s
and early 90s. Doubt anyone has been running Squid on those since the early
90s, if even then...

Regarding the ps3, otheros is most often Linux. FreeBSD also exists.
It's a fairly straightforward platform to use (powerpc CPU, and some
co-processors we don't touch). Again I highly doubt anyone is running
Squid on those, but it should share the same properties as IBM Power
mainframes running Linux or FreeBSD so..

It's not a big issue if we happen to delete one or two oldish platforms
too many. If there is someone running on such a platform they usually fix
it themselves (mostly knowledgeable geeks) and come back to us. So there
is no pressing need to preserve old platforms when touching
compatibility code. But it's also not worth much to discuss old
platforms' viability.

My proposal regarding these in general is to touch them as little as
possible. If encountering such code when shuffling things around for
readability then drop them in the shuffle if easier than relocating the
code.

Regards
Henrik



Re: Helper children defaults

2009-11-26 Thread Henrik Nordstrom
tor 2009-11-26 klockan 17:35 +1300 skrev Amos Jeffries:
 I'm making the helper children configurable for on-demand startup so a 
 minimal set can be started and allowed to grow up to a max as determined 
 by traffic load.
 Growth is triggered by helpers dying or requests needing to be queued 
 when all helpers are full.

The drawback is that this fork can be quite expensive on larger Squids, and
it momentarily stops all forwarding under peak load. But overloaded
helpers are generally worse so..

Ideally the concurrent protocol should be used as much as possible,
avoiding this..

   * start 1 child synchronously on start/reconfigure
   * new child helpers as needed in bunches of 2
   * maximum running kept capped at 5.
 ?

I would increase the max to 20 or so.
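Something along these lines in squid.conf, say (hypothetical directive
syntax, just to illustrate the knobs being discussed):

auth_param digest children 20 startup=1 idle=2
url_rewrite_children 20 startup=1 idle=2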

 This affects helpers for auth_param, url_rewrite, and external_acl_type.

Why not dnsserver?

Regards
Henrik



Re: [bundle] helper-mux feature

2009-11-26 Thread Henrik Nordstrom
tor 2009-11-26 klockan 10:43 +0100 skrev Kinkie:
 It's a perl helper multiplexer: it talks the multi-slot helper dialect
 to squid, and the single-slot variant to the helpers, starting them up
 lazily and handling possible helper crashes.

Nice!

 Since squid aggressively tries to reuse the some helpers, setting a
 high helpers number in conjunction with this has the effect of
 allowing on-demand-startup of helpers.

Interesting twist on that problem ;-)

See no significant issues with doing things that way.

Sure it's a little added overhead, but marginally so compared to Squid
maintaining all those helpers.

Regards
Henrik




Re: RFC: obsoleting nextstep?

2009-11-25 Thread Henrik Nordstrom
ons 2009-11-25 klockan 12:49 +0100 skrev Kinkie:
 Hi all,
just like SunOS: NextStep's last version (3.3) was released in
 1995, which means 15 years before the expected release date of 3.2 .
 How about dropping support for it?

+1 I think. Or just ignore it..

Regards
Henrik



Re: squid-smp: synchronization issue solutions

2009-11-24 Thread Henrik Nordstrom
sön 2009-11-22 klockan 00:12 +1300 skrev Amos Jeffries:

 I think we can open the doors earlier than after that. I'm happy with an 
 approach that would see the smaller units of Squid growing in 
 parallelism to encompass two full cores.

And I have a more careful opinion.

Introducing threads in the current Squid core processing is very
non-trivial. This is due to the relatively high amount of shared data with
no access protection. We already have sufficient nightmares from data
access synchronization issues in the current non-threaded design, and
trying to synchronize access in threaded operation is many orders of
magnitude more complex.

The day the code base is cleaned up to the level that one can actually
assess what data is being accessed where, threads may be a viable
discussion, but as things are today it's almost impossible to judge what
data will be directly or indirectly accessed by any larger operation.

Using threads for micro operations will not help us. The overhead
involved in scheduling an operation to a thread is comparably large to
most operations we are performing, and if one adds to this the amount of
synchronization needed to shield the data accessed by that operation,
then the overhead will in nearly all cases by far outweigh the actual
processing time of the micro operations, only resulting in a net loss of
performance. There are some isolated cases I can think of, like SSL
handshake negotiation, where actual processing may be significant, but at
the general level I don't see many operations which would be candidates
for micro threading.

Using threads for isolated things like disk I/O is one thing. The code
running in those threads is very, very isolated and limited in what it's
allowed to do (it may only access the data given to it, and may NOT allocate
new data or look up any other global data), but is still heavily
penalized by synchronization overhead. Further, the only reason why we
have the threaded I/O model is that POSIX AIO does not provide a rich
enough interface, missing open/close operations which may both block for
a significant amount of time. So we had to implement our own alternative
having open/close operations. If you look closely at the threads I/O
code you will see that it goes to quite great lengths to isolate the
threads from the main code, with obvious performance drawbacks. The
initial code went even much further in isolation, but core changes have
over time provided a somewhat more suitable environment for some of
those operations.


For the same reasons I don't see OpenMP as fitting the problem scope
we have. The strength of OpenMP is to parallelize CPU-intensive regions
of the code where it is well defined what data they access, not to deal
with a large scale of concurrent operations with access to unknown
amounts of shared data.



Trying to thread the Squid core engine is in many ways similar to the
problems kernel developers have had to fight in making the OS kernels
multithreaded, except that we don't even have threads of execution (the
OS developers at least had processes). If trying to do the same with the
Squid code then we would need an approach like the following:

1. Create a big Squid main lock, always held except for audited regions
known to use more fine-grained locking.

2. Set up N threads of execution, all initially fighting for that big
main lock in each operation.

3. Gradually work over the code, identifying areas where that big lock
does not need to be held, transitioning over to more fine-grained locking,
starting at the main loops and working down from there.

This is not a path I favor for the Squid code. It's a transition which
is larger than the Squid-3 transition, and which has even bigger
negative impacts on performance until most of the work has been
completed.



Another alternative is to start on Squid-4, rewriting the code base
completely from scratch starting from a parallel design and then plugging
in any pieces that can be rescued from earlier Squid generations, if any.
But for obvious staffing reasons this is an approach I do not recommend
in this project. It's effectively starting another project, with very
little shared with the Squid we have today.


For these reasons I am more in favor of multi-process approaches. The
amount of work needed for making Squid multi-process capable is fairly
limited and mainly circulates around the cache index and a couple of
other areas that need to be shared for proper operation. We can fully
parallelize Squid today at the process level if disabling the persistent
shared cache + digest auth, and this is done by many users already. Squid-2
can even do it on the same http_port, letting the OS schedule connections
to the available Squid processes.


Regards
Henrik



Re: squid-smp: synchronization issue solutions

2009-11-24 Thread Henrik Nordstrom
ons 2009-11-25 klockan 00:55 +1300 skrev Amos Jeffries:

 I kind of mean that by the smaller units. I'm thinking primarily here 
 of the internal DNS. Its API is very isolated from the work.

And also a good example of where the CPU usage is negligible.

And no, it's not really that isolated. It's allocating data for the
response which is then handed to the caller, and modified in other parts
of the code via ipcache..

But yes, it's a good example of where one can try scheduling the
processing on a separate thread to experiment with such a model.

Regards
Henrik



Re: Server Name Indication

2009-11-23 Thread Henrik Nordstrom
fre 2009-11-20 klockan 01:28 +0100 skrev Craig:

 do you plan to implement Server Name Indication into squid? I know the
 caveats of browser compatibility, but in a year or two, the percentage
 of people using FF1.x and IE6 will surely decrease.

Getting SNI implemented is interesting to the project, but at this time
there is no current developer actively looking into the problem.

Squid is a community-driven project. As such, what features get
implemented is very much dependent on what the community contributes to
the project in terms of developer time.

Regards
Henrik



Squid-3.1 release?

2009-11-18 Thread Henrik Nordstrom
What is the status for 3.1?

At a minimum I would say it's about time for a 3.1.0.15 release,
collecting up what has been done so far.

The bigger question, what is blocking 3.1.1? (and moving 3.0 into
previous releases)

Regards
Henrik



Re: libresolv and freebsd

2009-11-16 Thread Henrik Nordstrom
Quite likely we don't even need libresolv in the majority of cases, on
pretty much all platforms.

mån 2009-11-16 klockan 12:07 +0100 skrev Kinkie:
 In configure.in there is something like this:
 
  if test "$ac_cv_lib_bind_gethostbyname" = "no" ; then
  case "$host" in
 i386-*-freebsd*)
 AC_MSG_NOTICE([skipping libresolv checks for $host])
 ;;
 *)
AC_CHECK_LIB(resolv, main)
 ;;
  esac
  fi
 
 I fail to see the point in skipping this test.
 I'd expect to get the same result with a simple(r)
 
 AC_SEARCH_LIBS([gethostbyname],[bind resolv])
 
 See my other request on moving form AC_CHECK_LIB to AC_SEARCH_LIBS.
 
 Thanks for any input.
 



Re: /bzr/squid3/trunk/ r10118: Portability fix: non-GNU diff is not guaranteed to handle the -q switch

2009-11-13 Thread Henrik Nordstrom
fre 2009-11-13 klockan 14:12 +0100 skrev Francesco Chemolli:
 

   Portability fix: non-GNU diff is not guaranteed to handle the -q switch

Heh.. really should use cmp instead.. 
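i.e. something like the following in the rule, with new/old standing in
for whatever files the rule actually compares:

if cmp -s new old; then rm -f new; else mv new old; fi

cmp -s is POSIX and only reports a status, which is all the rule needs.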

Regards
Henrik







Re: [RFC] Libraries usage in configure.in and Makefiles

2009-11-11 Thread Henrik Nordstrom
ons 2009-11-11 klockan 18:38 +1300 skrev Amos Jeffries:

 Henrik's recent commit to remove one of these on the grounds of being old 
 has highlighted a need to document this and perhaps bring you all in on 
 making the changes.

Haven't removed any, just generalized one to apply to another lib
needing the same conditions..

 A: The squid binary is topping 3.5MB in footprint with many of the small 
 tools topping 500KB each. A small but substantial amount of it is 
 libraries linked but unused. Particularly in the helpers.

Unused libraries use just a tiny bit of memory for the link table, at
least if built properly (PIC).


 With some of the libraries being bunched up where there is a strong link 
 between them, i.e. -lresolv -lnsl in @RESOLVLIB@
 
 Does anyone disagree?

Not with the principle.

What I have been working on is mainly that lots of these have also found
their way into _DEPENDENCIES rules, which is a no-no. System libs must
only be added to _LDADD rules.

 Does anyone have other examples of libraries which _need_ to include 
 other libraries like -lresolv/-lnsl do for Solaris?

-lldap needs -llber in some LDAP implementations. But we already deal
with that.

OpenSSL: -lssl -lcrypto -ldl

-lnsl is a bit special as it's needed in pretty much every binary built
and is why it is in XTRA_LIBS.

-lcap should move to its own variable.

We should probably skip -lbsd on glibc systems, but we need it on Solaris
systems and possibly others. It does not hurt on Linux.

-lm is probably needed just about everywhere.

-ldl requirements are very fuzzy and not easy to detect when wrong. On
Linux the problem only shows up when doing a static build, as the dynamic
linker handles chained dependencies automatically.


 I'm not terribly fussed with Henrik's change because the DISK I/O stuff is 
 heavily interlinked. It's only an extra build dependency for every 
 test run. But it grates against my ideologies to see the AIO-specific 
 API testers needing to link against the pthreads libraries and vice-versa.

You are welcome to split the DISK_OS_LIBS variable into AIOLIB and
PTHREADLIB if you prefer. I have no attachment to it; it was just
easier to extend & rename AIOLIB than to add yet another variable needed
at the same places. Just keep them out of _DEPENDENCIES rules, and make
sure to add them where needed. It only makes a difference for the testsuite
programs.

Regards
Henrik



Re: /bzr/squid3/trunk/ r10110: Style Makefile.am to use instead of @AUTOMAKEVAR

2009-11-11 Thread Henrik Nordstrom
Sorry, bash ate part of that message.. Correct text:

Style Makefile.am to use $(AUTOMAKEVAR) instead of @AUTOMAKEVAR@

@AUTOMAKEVAR@ is troublesome when used in \ constructs as it may expand
to empty and the last line in a \ construct must not be empty or some
make versions will fail.

thankfully automake adds all variables for us, so using $(AUTOMAKEVAR)
is preferred.
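To spell out the failure mode (MAYBE_LIB being a hypothetical substituted
variable):

# @MAYBE_LIB@ may substitute to nothing, leaving a trailing backslash
# followed by an empty line, which some make implementations reject:
foo_LDADD = \
	libfoo.la \
	@MAYBE_LIB@

# $(MAYBE_LIB) always leaves literal text on the last line, so the
# continuation stays well-formed even when the variable is empty:
foo_LDADD = \
	libfoo.la \
	$(MAYBE_LIB)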

ons 2009-11-11 klockan 12:44 +0100 skrev Henrik Nordstrom:
 
 revno: 10110
 committer: Henrik Nordstrom hen...@henriknordstrom.net
 branch nick: trunk
 timestamp: Wed 2009-11-11 12:44:58 +0100
 message:
   Style Makefile.am to use  instead of @AUTOMAKEVAR
   
   @AUTOMAKEVAR@ is troublesome when used in \ constructs as it may expand
   to empty and the last line in a \ construct must not be empty or some
   make versions will fail.
   
   thankfully automake adds all variables for us, so using 
   is preferred.
 modified:
   scripts/srcformat.sh
 plain text attachment (r10110.diff)
 === modified file 'scripts/srcformat.sh'
 --- a/scripts/srcformat.sh2009-08-23 03:08:22 +
 +++ b/scripts/srcformat.sh2009-11-11 11:44:58 +
 @@ -36,8 +36,16 @@
   else
   rm $FILENAME.astylebak
   fi
 - continue;
 + continue
  fi
 + ;;
 +
 +Makefile.am)
 +
  + perl -i -p -e 's/@([A-Z0-9_]+)@/\$($1)/g' <${FILENAME} 
  >${FILENAME}.styled
  + mv ${FILENAME}.styled ${FILENAME}
 + ;;
 +
  esac
  
  if test -d $FILENAME ; then
 



STORE_META_OBJSIZE

2009-11-11 Thread Henrik Nordstrom
(12.57.26) amosjeffries: hno: next on my list is STORE_META_OBJSIZE.
safe to back-port to 3.1? 3.0?
(12.57.51) amosjeffries: useful to do so?

It's safe, but perhaps not very useful. Depends on if Alex will need it
in 3.1 or just 3.2 I guess.

Regards
Henrik





Re: Issue compiling last 3.1 squid in 64-bit platform

2009-11-10 Thread Henrik Nordstrom
tis 2009-11-10 klockan 23:41 +1300 skrev Amos Jeffries:

 Yet I ported those fixes down and he still reports it in the snapshot 
 built afterwards. :(

I think something went wrong in that port; the build farm also failed..

It took a number of iterations in trunk before it worked right.

Regards
Henrik



Re: Issue compiling last 3.1 squid in 64-bit platform

2009-11-10 Thread Henrik Nordstrom
Can you try ftp://ftp.squid-cache.se/private/squid-3.1.0.14-BZR.tar.bz2
(a snapshot of the current sources in bzr)?

This is fixed so that it builds for me on Fedora 11.


tis 2009-11-10 klockan 09:14 -0200 skrev rena...@flash.net.br:
 If you need me to do any type of specific test or use any compile options,
 please let me know and I would be glad to help!
 
 Thanks again for your effort!
 
  tis 2009-11-10 klockan 23:41 +1300 skrev Amos Jeffries:
 
  Yet I ported those fixes down and he still reports it in the snapshot
  built afterwards. :(
 
  I think something went wrong in that port, the build farm also failed..
 
  was a number of iterations in trunk before it worked right.
 
  Regards
  Henrik
 
 
 



Re: Issue compiling last 3.1 squid in 64-bit platform

2009-11-10 Thread Henrik Nordstrom
tis 2009-11-10 klockan 10:28 -0200 skrev rena...@flash.net.br:

 make[3]: *** No rule to make target `-lpthread', needed by `all-am'.  Stop.

 Is your Fedora 11 64-bit? I will install Ubuntu-64 and try to compile it
 in the same server. As soon as I have the results I'll post back to you!

It is 64-bit.

But the problem seems to depend on something else. We saw similar
problems with trunk some time ago, where it failed on my machine but
worked on all the machines in the build farm..

Can you do

  grep -- -lpthread src/Makefile

Regards
Henrik



Re: /bzr/squid3/trunk/ r10096: Bug 2778: fix linking issues using SunCC

2009-11-10 Thread Henrik Nordstrom
ons 2009-11-11 klockan 10:38 +1300 skrev Amos Jeffries:

 Worth a query to squid-users. Any OS which is so old it does not support
 the auto-tools and libraries we now need is a candidate. I'm thinking
 NextStep may be one more.
 Though I'm inclined to keep as much support as possible until we have
 solid evidence the OS is not able to ever build Squid.

There probably are one or two that may try running Squid on something
that once upon a time was Solaris 1.x (known as SunOS 4.x before the
name switch to Solaris for everything). But in general terms that OS is
pretty much extinct these days. It has been declared end-of-life for more
than a decade now, with the last release in 1994.

autotools has never been part of SunOS for that matter, always an addon.
And someone patient enough can get up-to-date autotools + gcc + whatever
on top of SunOS 4.x and build Squid. The question is: will anyone really
do that?

Regards
Henrik



Re: Issue compiling last 3.1 squid in 64-bit platform

2009-11-10 Thread Henrik Nordstrom
Found the culprit now..

configure.in:

AC_CHECK_LIB(pthread,
main,[DISK_LIBS="$DISK_LIBS -lpthread"],


src/Makefile.am:

squid_DEPENDENCIES = ...
@DISK_LIBS@ \


No idea why that passes without error on my F11 box when other such
errors do not..
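For the record, the intended split looks roughly like this (a sketch only;
COMMON_LIBS stands in for the real variables):

# system libs are resolved by the linker and belong in _LDADD only
squid_LDADD = $(COMMON_LIBS) $(DISK_LIBS)
# _DEPENDENCIES may only name things make can actually build
squid_DEPENDENCIES = $(top_builddir)/lib/libmiscutil.a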

Regards
Henrik



Re: Issue compiling last 3.1 squid in 64-bit platform

2009-11-10 Thread Henrik Nordstrom
Should be fixed in trunk now I hope..

Can you try applying the patch from
http://www.squid-cache.org/Versions/v3/HEAD/changesets/squid-3-10105.patch 
on top of the tree you downloaded before.

Note: you need to run bootstrap.sh after patching.


tis 2009-11-10 klockan 23:43 +0100 skrev Henrik Nordstrom:
 Found the culprit now..
 
 configure.in:
 
  AC_CHECK_LIB(pthread,
  main,[DISK_LIBS="$DISK_LIBS -lpthread"],
 
 
 src/Makefile.am:
 
 squid_DEPENDENCIES = ...
 @DISK_LIBS@ \
 
 
 No idea why that passes without error on my F11 box when other such
 errors do not..
 
 Regards
 Henrik



Re: Issue compiling last 3.1 squid in 64-bit platform

2009-11-09 Thread Henrik Nordstrom
tis 2009-11-10 klockan 18:23 +1300 skrev Amos Jeffries:

  make[3]: *** No rule to make target `-lpthread', needed by `all-am'.

This is the XTRA_LIBS confusion currently being fixed up in trunk.

XTRA_LIBS must only be added in LDADD rules, not the CUSTOM_LIBS which
is also a dependency..

Regards
Henrik



Re: question about submitting patch

2009-11-04 Thread Henrik Nordstrom
You are missing the openssl development package, usually openssl-devel
or libssl-dev depending on OS flavor.

Patches are submitted as a unified diff attached to a squid-dev
message, preferably with [PATCH] in the subject to make it stick out
from the other discussions..


ons 2009-11-04 klockan 17:23 -0500 skrev Matthew Morgan:
 Ok, I think I've got the kinks worked out regarding setting 
 range_offset_limit per a pattern. I've done a decent bit of testing, and 
 it seems to be working as intended. I did add a file to the source 
 tree, and I'm pretty sure I've updated Makefile.am properly.
 
 I tried to do a ./test-builds, but it fails identically in my test 
 repository and in trunk, in areas of squid I didn't touch. I guess HEAD 
 doesn't always pass? It may be that I don't have some headers that it's 
 looking for. Here's the output of the test-builds script:
 
 TESTING: layer-00-bootstrap
 BUILD: .././test-suite/buildtests/layer-00-bootstrap.opts
 TESTING: layer-00-default
 BUILD: .././test-suite/buildtests/layer-00-default.opts
 ../../../src/ssl_support.h:55: error: expected constructor, destructor, 
 or type conversion before ‘*’ token
 ../../../src/ssl_support.h:58: error: expected constructor, destructor, 
 or type conversion before ‘*’ token
 ../../../src/ssl_support.h:71: error: ‘SSL’ was not declared in this scope
 ../../../src/ssl_support.h:71: error: ‘ssl’ was not declared in this scope
 ../../../src/ssl_support.h:74: error: typedef ‘SSLGETATTRIBUTE’ is 
 initialized (use __typeof__ instead)
 ../../../src/ssl_support.h:74: error: ‘SSL’ was not declared in this scope
 ../../../src/ssl_support.h:74: error: expected primary-expression before 
 ‘,’ token
 ../../../src/ssl_support.h:74: error: expected primary-expression before 
 ‘const’
 ../../../src/ssl_support.h:77: error: ‘SSLGETATTRIBUTE’ does not name a type
 ../../../src/ssl_support.h:80: error: ‘SSLGETATTRIBUTE’ does not name a type
 ../../../src/ssl_support.h:83: error: ‘SSL’ was not declared in this scope
 ../../../src/ssl_support.h:83: error: ‘ssl’ was not declared in this scope
 ../../../src/ssl_support.h:86: error: ‘SSL’ was not declared in this scope
 ../../../src/ssl_support.h:86: error: ‘ssl’ was not declared in this scope
 ./../../../src/acl/CertificateData.h:45: error: ‘SSL’ was not declared 
 in this scope
 ./../../../src/acl/CertificateData.h:45: error: template argument 1 is 
 invalid
 ./../../../src/acl/CertificateData.h:51: error: expected `)' before ‘*’ 
 token
 ./../../../src/acl/CertificateData.h:55: error: ‘SSL’ has not been declared
 ./../../../src/acl/CertificateData.h:59: error: ‘SSL’ was not declared 
 in this scope
 ./../../../src/acl/CertificateData.h:59: error: template argument 1 is 
 invalid
 ./../../../src/acl/CertificateData.h:65: error: ISO C++ forbids 
 declaration of ‘SSLGETATTRIBUTE’ with no type
 ./../../../src/acl/CertificateData.h:65: error: expected ‘;’ before ‘*’ 
 token
 make[5]: *** [testHeaders] Error 1
 make[4]: *** [check-am] Error 2
 make[3]: *** [check-recursive] Error 1
 make[2]: *** [check] Error 2
 make[1]: *** [check-recursive] Error 1
 make: *** [distcheck] Error 2
 Build Failed. Last log lines are:
 ./../../../src/acl/CertificateData.h:45: error: template argument 1 is 
 invalid
 ./../../../src/acl/CertificateData.h:51: error: expected `)' before ‘*’ 
 token
 ./../../../src/acl/CertificateData.h:55: error: ‘SSL’ has not been declared
 ./../../../src/acl/CertificateData.h:59: error: ‘SSL’ was not declared 
 in this scope
 ./../../../src/acl/CertificateData.h:59: error: template argument 1 is 
 invalid
 ./../../../src/acl/CertificateData.h:65: error: ISO C++ forbids 
 declaration of ‘SSLGETATTRIBUTE’ with no type
 ./../../../src/acl/CertificateData.h:65: error: expected ‘;’ before ‘*’ 
 token
 distcc[31643] ERROR: compile ./testHeaderDeps_CertificateData.cc on 
 localhost failed
 make[5]: *** [testHeaders] Error 1
 make[5]: Leaving directory 
 `/home/lytithwyn/source/squid/trunk/btlayer-00-default/squid-3.HEAD-BZR/_build/src/acl'
 make[4]: *** [check-am] Error 2
 make[4]: Leaving directory 
 `/home/lytithwyn/source/squid/trunk/btlayer-00-default/squid-3.HEAD-BZR/_build/src/acl'
 make[3]: *** [check-recursive] Error 1
 make[3]: Leaving directory 
 `/home/lytithwyn/source/squid/trunk/btlayer-00-default/squid-3.HEAD-BZR/_build/src'
 make[2]: *** [check] Error 2
 make[2]: Leaving directory 
 `/home/lytithwyn/source/squid/trunk/btlayer-00-default/squid-3.HEAD-BZR/_build/src'
 make[1]: *** [check-recursive] Error 1
 make[1]: Leaving directory 
 `/home/lytithwyn/source/squid/trunk/btlayer-00-default/squid-3.HEAD-BZR/_build'
 make: *** [distcheck] Error 2
 buildtest.sh result is 2
 
 
 Should I go ahead and follow the patch submission instructions on 
 http://wiki.squid-cache.org/Squid3VCS, or is there something I should 
 check first?



Re: /bzr/squid3/trunk/ r10080: Portability fix: __FUNCTION__ is not available on all preprocessors.

2009-11-04 Thread Henrik Nordstrom
ons 2009-11-04 klockan 17:20 +0100 skrev Francesco Chemolli:

   Portability fix: __FUNCTION__ is not available on all preprocessors.

 +#ifdef __FUNCTION__
 +#define _SQUID__FUNCTION__ __FUNCTION__

Does this really work?

__FUNCTION__ is not a preprocessor symbol, it's a magic compiler
variable.
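Easy to check on a typical gcc; the #ifdef never fires because the
compiler, not the preprocessor, provides __FUNCTION__:

$ printf '#ifdef __FUNCTION__\nyes\n#else\nno\n#endif\n' | gcc -E -x c -
[linemarker noise omitted]
no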

Regards
Henrik



Re: gcc -pipe

2009-11-03 Thread Henrik Nordstrom
tis 2009-11-03 klockan 14:59 +0100 skrev Kinkie:

 Performance gain is probably not much, but it won't hurt either :)

Some say it hurts, but that's probably in low-memory conditions.

Regards
Henrik



Re: [MERGE] Use libcap instead of direct linux capability syscalls

2009-10-27 Thread Henrik Nordstrom
tis 2009-10-27 klockan 14:46 +1300 skrev Amos Jeffries:

 I would like the patch to be updated with said function test logic to
 auto-disable libcap2, modulo a fatal death on the --with-* case.  Then for
 it to go back in :)

OK. I'll give it a shot. Will make TPROXY require libcap-2.09 or later.

At what version was libcap fixed for the issue we test in the magic
sys/capability.h test case? Comment says libcap2 fixed, so I guess we no
longer need this?

Regards
Henrik



Re: [MERGE] Use libcap instead of direct linux capability syscalls

2009-10-27 Thread Henrik Nordstrom
ons 2009-10-28 klockan 01:35 +1300 skrev Amos Jeffries:

 Not entirely sure what version. I think the Gentoo 2.15 or 2.16 is 
 fixed. My Ubuntu 2.11 is broken by the test results of last build I ran.

Ok.

The configure test was broken however, always reporting failure...

 I think we need to keep the magic voodoo for a while longer.

It's still there.

Regards
Henrik



Re: compute swap_file_sz before packing it

2009-10-27 Thread Henrik Nordstrom
tis 2009-10-27 klockan 13:43 -0600 skrev Alex Rousskov:

 Hi Henrik,
 
 Can you explain what you mean by just ignore above? It is kind of
 difficult for me to ignore the only code that seems to supply the info
 Rock store needs. Do you mean we should ultimately remove STORE_META_STD
 from Squid, replacing all its current uses with STORE_META_OBJSIZE?

The object size field in STORE_META_STD should be ignored. It got broken
many years ago (1997 or so), and should be recalculated using
STORE_META_OBJSIZE or, alternatively, the on-disk object size.

 Moreover, STORE_META_OBJSIZE has the following comment attached to its
 declaration: "not implemented, squid26 compatibility" and appears to be
 unused...

Right.. should be forward ported. [attached]

 Neither approach works for Rock store because Rock store does not have a
 swap state file like COSS and does not use individual files like UFS.
 That is why it has to rely on the file size information supplied by
 the core. Perhaps there is a better way of getting that information, but
 I do not know it.

STORE_META_OBJSIZE is the object size (if known) not including TLV
headers, and is generally what you need to know in order to access the
object.

Long term, objects should be split into TLV + HTTP Headers (probably part
of TLV) + Content, but that's another topic..

Actual file storage size is more a business of the cache_dir than the
core..

  +// so that storeSwapMetaBuild/Pack can pack the correct swap_file_sz
  +swap_file_sz = objectLen() + mem_obj->swap_hdr_sz;
  +

objectLen() MAY be -1 here...

Regards
Henrik
=== modified file 'src/StoreMeta.h'
--- src/StoreMeta.h	2009-01-21 03:47:47 +
+++ src/StoreMeta.h	2009-08-27 12:35:36 +
@@ -127,10 +127,6 @@
  */
 STORE_META_STD_LFS,
 
-/**
- \deprecated
- * Object size, not implemented, squid26 compatibility
- */
 STORE_META_OBJSIZE,
 
 STORE_META_STOREURL,	/* the store url, if different to the normal URL */

=== modified file 'src/store_swapmeta.cc'
--- src/store_swapmeta.cc	2009-01-21 03:47:47 +
+++ src/store_swapmeta.cc	2009-08-27 12:35:36 +
@@ -61,6 +61,7 @@
 tlv **T = TLV;
 const char *url;
 const char *vary;
+const int64_t objsize = e->objectLen();
 assert(e->mem_obj != NULL);
 assert(e->swap_status == SWAPOUT_WRITING);
 url = e->url();
@@ -88,6 +89,17 @@
 return NULL;
 }
 
+
+if (objsize >= 0) {
+	T = StoreMeta::Add(T, t);
+	t = StoreMeta::Factory(STORE_META_OBJSIZE, sizeof(objsize), &objsize);
+
+	if (!t) {
+	storeSwapTLVFree(TLV);
+	return NULL;
+	}
+}
+
 T = StoreMeta::Add(T, t);
 vary = e->mem_obj->vary_headers;
 



Re: compute swap_file_sz before packing it

2009-10-27 Thread Henrik Nordstrom
tis 2009-10-27 klockan 21:41 +0100 skrev Henrik Nordstrom:

 Actual file storage size is more a business of the cache_dir than the
 core..

Forgot to mention.. nothing stops a cache_dir implementation from
storing this attribute somehow associated with the data if one likes to,
but for cache_dirs taking unbounded object sizes the information is
not known until the object is completed.

Regards
Henrik



Re: [MERGE] Use libcap instead of direct linux capability syscalls

2009-10-27 Thread Henrik Nordstrom
tis 2009-10-27 klockan 14:46 +1300 skrev Amos Jeffries:

 I would like the patch to be updated with said function test logic to
 auto-disable libcap2, modulo a fatal death on the --with-* case.  Then for
 it to go back in :)

Done.

Regards
Henrik



Re: compute swap_file_sz before packing it

2009-10-27 Thread Henrik Nordstrom
tis 2009-10-27 klockan 15:23 -0600 skrev Alex Rousskov:

 Is not that always the case? Even if a store refuses to accept objects
 larger than X KB, that does not mean that all objects will be X KB in
 size, regardless of any reasonable X value. Or did you mean something
 else by unbounded?

There are two kinds of swapouts related to this:

a) Size-bounded, where objects are known to be of a certain size.

b) Size not known at the start of swapout. Impossible to record the size in
headers.

When there is at least one cache_dir with a size restriction we buffer
objects of type 'b' before swapout, in case they are small enough to
actually fit the cache_dir policy even if the size is initially unknown.

 Technically, core does not know the true content size for some responses
 until the response has been received, but I do not remember whether we
 allow such responses to be cachable.

We do. It's a quite common response form.

Regards
Henrik



Re: compute swap_file_sz before packing it

2009-10-27 Thread Henrik Nordstrom
tis 2009-10-27 klockan 15:51 -0600 skrev Alex Rousskov:

 To compute StoreEntry::swap_file_sz, I will add up the ported
 STORE_META_OBJSIZE value and the swap_hdr_len set by StoreMetaUnpacker.
 Would you compute it differently?

Sounds right to me.

 What should I do if STORE_META_OBJSIZE is not known? Does this question
 itself imply that each store that wants to rebuild an index has to store
 the final object size somewhere or update the STORE_META_OBJSIZE value?

Exactly. But see my previous response some seconds ago.

As you already noticed, the ufs family uses the filesystem's file size meta
information to rebuild swap_file_sz.

COSS in Squid-2 uses STORE_META_OBJSIZE + swap_hdr_sz.
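i.e., with made-up numbers:

  swap_file_sz = STORE_META_OBJSIZE + swap_hdr_sz
               = 15340 + 88 = 15428 bytes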

 Thank you for porting this.

Was already done, just not sent yet.

 What happens to STORE_META_OBJSIZE if the object size is not yet known
 at the time when Squid start swapping content to disk?

Then there is no STORE_META_OBJSIZE.

But stores with a max size limit won't get such swapouts.

 The whole patch is not needed if we start relying on STORE_META_OBJSIZE,
 I guess.

Probably; it was more a note to illustrate the issues that field battles
with..

Regards
Henrik



Re: [MERGE] Use libcap instead of direct linux capability syscalls

2009-10-27 Thread Henrik Nordstrom
ons 2009-10-28 klockan 10:32 +1300 skrev Amos Jeffries:

  The configure test was broken however, always reporting failure...
 
 Strange. That was the change the Gentoo people are all enjoying at the
 moment.

Well, I think most are silently happy with the workaround enabled even
if not strictly needed.

Regards
Henrik



Re: compute swap_file_sz before packing it

2009-10-27 Thread Henrik Nordstrom
tis 2009-10-27 klockan 17:04 -0600 skrev Alex Rousskov:
 On 10/27/2009 04:07 PM, Henrik Nordstrom wrote:
 
  Thank you for porting this.
  
  Was already done, just not sent yet.
 
 Will you commit your changes to trunk?
 

Done.

Regards
Henrik



Re: [MERGE] Use libcap instead of direct linux capability syscalls

2009-10-26 Thread Henrik Nordstrom
tis 2009-10-20 klockan 12:52 +1300 skrev Amos Jeffries:

 We can do that yes. I think I would also rather do it too. It paves the
 way for a clean deprecation cycle now that TPROXYv4 kernels are effectively
 mainstream:
 
  3.0: (2008-2010) TPROXYv2 with libcap + libcap2
  3.1: (2010-2012) support TPROXYv2 + TPROXYv4 with libcap2
  3.2: (2011?) support TPROXYv4 with libcap2

So you want me to add the patch back on trunk?

That means we must update libcap on several of the build farm members,
including the master, or disable TPROXY in the build tests..

I guess I could add some configure magic to look for the missing
function and automatically disable..
Regards
Henrik



Re: [MERGE] Use libcap instead of direct linux capability syscalls

2009-10-18 Thread Henrik Nordstrom
fre 2009-10-16 klockan 02:04 +0200 skrev Henrik Nordstrom:
 fre 2009-10-16 klockan 11:03 +1300 skrev Amos Jeffries:
 
  /* NP: keep these two if-endif separate. Non-Linux work perfectly well 
 
 Sorry.. thought I had fixed that already..
 
  +#define PUSH_CAP(cap) cap_list[ncaps++] = (cap)
  
  I can just see that converting to: 
  CAP_NET_ADMIN_ist[nCAP_NET_ADMINs++]=(CAP_NET_ADMIN) ...
 
 Nope.. the preprocessor is token-based. But as this macro is fairly simple
 now it can just as well be expanded. I think the plan was to eventually
 C++-encapsulate these details, but that's overkill here.
 
 Updated patch attached.


Crap. libcap on centos is not usable.

Regards
Henrik



[MERGE] Use libcap instead of direct linux capability syscalls

2009-10-15 Thread Henrik Nordstrom
The kernel interface, while some aspects of it are much simpler, is also
not really meant to be called directly by applications.

The attached patch approximates the same functionality using libcap. It
differs slightly in how it sets the permitted capabilities to be kept on
uid change (explicit instead of masked), but the end result is the same,
as setting the capabilities won't work if they were not allowed.




# Bazaar merge directive format 2 (Bazaar 0.90)
# revision_id: hen...@henriknordstrom.net-20091015142822-\
#   is615u5fl72d5vt3
# target_branch: http://www.squid-cache.org/bzr/squid3/trunk/
# testament_sha1: 7003f761ebaefca2b4e2fd090f186cfb0ec0357e
# timestamp: 2009-10-15 20:21:24 +0200
# base_revision_id: squ...@treenet.co.nz-20091015121532-\
#   hhwys6416uxebd9y
# 
# Begin patch
=== modified file 'configure.in'
--- configure.in	2009-10-15 10:12:38 +
+++ configure.in	2009-10-15 14:28:22 +
@@ -2763,7 +2763,7 @@
   fi
 ],[AC_MSG_RESULT(yes)])
 if test x$use_caps = xyes; then
-  dnl Check for libcap1 breakage or libcap2 fixed (assume broken unless found working)
+  dnl Check for libcap1 header breakage or libcap2 fixed (assume broken unless found working)
   libcap_broken=1
   AC_CHECK_HEADERS(sys/capability.h)
   AC_CACHE_CHECK([for operational libcap2], $libcap_broken,
@@ -2773,6 +2773,7 @@
]])],[libcap_broken=0],[])
   )
   AC_DEFINE_UNQUOTED([LIBCAP_BROKEN],$libcap_broken,[if libcap2 is available and not clashing with libc])
+  AC_CHECK_LIB(cap, cap_get_proc)
 fi
 
 AC_CHECK_TYPE(mtyp_t,AC_DEFINE(HAVE_MTYP_T,1,[mtyp_t is defined by the system headers]),,[#include sys/types.h

=== modified file 'src/tools.cc'
--- src/tools.cc	2009-08-28 01:44:26 +
+++ src/tools.cc	2009-10-15 14:24:33 +
@@ -1240,51 +1240,41 @@
 restoreCapabilities(int keep)
 {
 /* NP: keep these two if-endif separate. Non-Linux work perfectly well without Linux syscap support. */
-#if defined(_SQUID_LINUX_)
-
-#if HAVE_SYS_CAPABILITY_H
-#ifndef _LINUX_CAPABILITY_VERSION_1
-#define _LINUX_CAPABILITY_VERSION_1 _LINUX_CAPABILITY_VERSION
-#endif
-cap_user_header_t head = (cap_user_header_t) xcalloc(1, sizeof(*head));
-cap_user_data_t cap = (cap_user_data_t) xcalloc(1, sizeof(*cap));
-
-head->version = _LINUX_CAPABILITY_VERSION_1;
-
-if (capget(head, cap) != 0) {
-debugs(50, DBG_IMPORTANT, "Can't get current capabilities");
-} else if (head->version != _LINUX_CAPABILITY_VERSION_1) {
-debugs(50, DBG_IMPORTANT, "Invalid capability version " << head->version << " (expected " << _LINUX_CAPABILITY_VERSION_1 << ")");
+#if defined(_SQUID_LINUX_)  HAVE_SYS_CAPABILITY_H
+cap_t caps;
+if (keep)
+	caps = cap_get_proc();
+else
+	caps = cap_init();
+if (!caps) {
+	IpInterceptor.StopTransparency(Can't get current capabilities);
 } else {
-
-head-pid = 0;
-
-cap-inheritable = 0;
-cap-effective = (1  CAP_NET_BIND_SERVICE);
-
-if (IpInterceptor.TransparentActive()) {
-cap-effective |= (1  CAP_NET_ADMIN);
+#define PUSH_CAP(cap) cap_list[ncaps++] = (cap)
+	int ncaps = 0;
+	int rc = 0;
+	cap_value_t cap_list[10];
+	PUSH_CAP(CAP_NET_BIND_SERVICE);
+
+	if (IpInterceptor.TransparentActive()) {
+	PUSH_CAP(CAP_NET_ADMIN);
 #if LINUX_TPROXY2
-cap-effective |= (1  CAP_NET_BROADCAST);
+	PUSH_CAP(CAP_NET_BROADCAST);
 #endif
-}
-
-if (!keep)
-cap-permitted = cap-effective;
-
-if (capset(head, cap) != 0) {
+	}
+#undef PUSH_CAP
+
+	cap_clear_flag(caps, CAP_EFFECTIVE);
+	rc |= cap_set_flag(caps, CAP_EFFECTIVE, ncaps, cap_list, CAP_SET);
+	rc |= cap_set_flag(caps, CAP_PERMITTED, ncaps, cap_list, CAP_SET);
+
+if (rc || cap_set_proc(caps) != 0) {
 IpInterceptor.StopTransparency(Error enabling needed capabilities.);
 }
+	cap_free(caps);
 }
-
-xfree(head);
-xfree(cap);
-
 #else
 IpInterceptor.StopTransparency(Missing needed capability support.);
 #endif /* HAVE_SYS_CAPABILITY_H */
-
-#endif /* !defined(_SQUID_LINUX_) */
 }
 
 void *

# Begin bundle

Re: [MERGE] Use libcap instead of direct linux capability syscalls

2009-10-15 Thread Henrik Nordstrom
fre 2009-10-16 klockan 11:03 +1300 skrev Amos Jeffries:

 /* NP: keep these two if-endif separate. Non-Linux work perfectly well 

Sorry.. thought I had fixed that already..

 +#define PUSH_CAP(cap) cap_list[ncaps++] = (cap)
 
 I can just see that converting to: 
 CAP_NET_ADMIN_ist[nCAP_NET_ADMINs++]=(CAP_NET_ADMIN) ...

Nope.. the preprocessor is token-based. But as this macro is fairly simple
now it can just as well be expanded. I think the plan was to eventually
C++ encapsulate these details, but that's overkill here.
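
To illustrate (a standalone demo, not Squid code; the capability value
is made up for the example):

  #define PUSH_CAP(cap) cap_list[ncaps++] = (cap)

  enum { CAP_NET_ADMIN = 12 };   /* stand-in value, demo only */
  static int cap_list[10];
  static int ncaps = 0;

  int main(void)
  {
      /* Expands to: cap_list[ncaps++] = (CAP_NET_ADMIN);
       * the parameter replaces whole tokens only, so the identifiers
       * cap_list and ncaps are untouched. */
      PUSH_CAP(CAP_NET_ADMIN);
      return 0;
  }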

Updated patch attached.

Regards
Henrik
# Bazaar merge directive format 2 (Bazaar 0.90)
# revision_id: hen...@henriknordstrom.net-20091015235726-\
#   tjj24dnri2arionc
# target_branch: http://www.squid-cache.org/bzr/squid3/trunk/
# testament_sha1: e0544b31cc7e7f4f877a1b5939e6cfe26d60bc6f
# timestamp: 2009-10-16 01:58:06 +0200
# base_revision_id: squ...@treenet.co.nz-20091015121532-\
#   hhwys6416uxebd9y
# 
# Begin patch
=== modified file 'configure.in'
--- configure.in	2009-10-15 10:12:38 +0000
+++ configure.in	2009-10-15 14:28:22 +0000
@@ -2763,7 +2763,7 @@
   fi
 ],[AC_MSG_RESULT(yes)])
 if test x$use_caps = xyes; then
-  dnl Check for libcap1 breakage or libcap2 fixed (assume broken unless found working)
+  dnl Check for libcap1 header breakage or libcap2 fixed (assume broken unless found working)
   libcap_broken=1
   AC_CHECK_HEADERS(sys/capability.h)
   AC_CACHE_CHECK([for operational libcap2], $libcap_broken,
@@ -2773,6 +2773,7 @@
]])],[libcap_broken=0],[])
   )
   AC_DEFINE_UNQUOTED([LIBCAP_BROKEN],$libcap_broken,[if libcap2 is available and not clashing with libc])
+  AC_CHECK_LIB(cap, cap_get_proc)
 fi
 
AC_CHECK_TYPE(mtyp_t,AC_DEFINE(HAVE_MTYP_T,1,[mtyp_t is defined by the system headers]),,[#include <sys/types.h>

=== modified file 'src/tools.cc'
--- src/tools.cc	2009-08-28 01:44:26 +0000
+++ src/tools.cc	2009-10-15 23:57:26 +0000
@@ -1241,50 +1241,40 @@
 {
     /* NP: keep these two if-endif separate. Non-Linux work perfectly well without Linux syscap support. */
 #if defined(_SQUID_LINUX_)
-
 #if HAVE_SYS_CAPABILITY_H
-#ifndef _LINUX_CAPABILITY_VERSION_1
-#define _LINUX_CAPABILITY_VERSION_1 _LINUX_CAPABILITY_VERSION
-#endif
-    cap_user_header_t head = (cap_user_header_t) xcalloc(1, sizeof(*head));
-    cap_user_data_t cap = (cap_user_data_t) xcalloc(1, sizeof(*cap));
-
-    head->version = _LINUX_CAPABILITY_VERSION_1;
-
-    if (capget(head, cap) != 0) {
-        debugs(50, DBG_IMPORTANT, "Can't get current capabilities");
-    } else if (head->version != _LINUX_CAPABILITY_VERSION_1) {
-        debugs(50, DBG_IMPORTANT, "Invalid capability version " << head->version << " (expected " << _LINUX_CAPABILITY_VERSION_1 << ")");
+    cap_t caps;
+    if (keep)
+	caps = cap_get_proc();
+    else
+	caps = cap_init();
+    if (!caps) {
+	IpInterceptor.StopTransparency("Can't get current capabilities");
     } else {
-
-        head->pid = 0;
-
-        cap->inheritable = 0;
-        cap->effective = (1 << CAP_NET_BIND_SERVICE);
-
-        if (IpInterceptor.TransparentActive()) {
-            cap->effective |= (1 << CAP_NET_ADMIN);
+	int ncaps = 0;
+	int rc = 0;
+	cap_value_t cap_list[10];
+	cap_list[ncaps++] = CAP_NET_BIND_SERVICE;
+
+	if (IpInterceptor.TransparentActive()) {
+	    cap_list[ncaps++] = CAP_NET_ADMIN;
 #if LINUX_TPROXY2
-            cap->effective |= (1 << CAP_NET_BROADCAST);
+	    cap_list[ncaps++] = CAP_NET_BROADCAST;
 #endif
-        }
-
-        if (!keep)
-            cap->permitted = cap->effective;
-
-        if (capset(head, cap) != 0) {
+	}
+
+	cap_clear_flag(caps, CAP_EFFECTIVE);
+	rc |= cap_set_flag(caps, CAP_EFFECTIVE, ncaps, cap_list, CAP_SET);
+	rc |= cap_set_flag(caps, CAP_PERMITTED, ncaps, cap_list, CAP_SET);
+
+        if (rc || cap_set_proc(caps) != 0) {
            IpInterceptor.StopTransparency("Error enabling needed capabilities.");
        }
+	cap_free(caps);
     }
-
-    xfree(head);
-    xfree(cap);
-
 #else
     IpInterceptor.StopTransparency("Missing needed capability support.");
 #endif /* HAVE_SYS_CAPABILITY_H */
-
-#endif /* !defined(_SQUID_LINUX_) */
+#endif /* _SQUID_LINUX_ */
 }
 
 void *

# Begin bundle

Re: adding content to the cache -- a revisit

2009-10-12 Thread Henrik Nordstrom
mån 2009-10-12 klockan 07:57 -0600 skrev bergenp...@comcast.net:

 The content I'm trying to manually install into the squid server will be 
 a subset of the origin server content, so for objects not manually 
 installed into squid, squid will still need to go directly back to the 
 origin server. 

What you need is

a) An HTTP server on the Squid server capable of serving the objects
using HTTP, preferably with properties as identical as possible to the
origin. This includes at least properties such as ETag, Content-Type,
Content-Language, Content-Encoding and Last-Modified.

b) wget, squidclient or another simple HTTP client capable of requesting
URLs from the proxy.

c) cache_peer line telling Squid that this local http server exists

d) A unique http_port bound on the loopback interface, only used for
this purpose (simplifies next step)

e) cache_peer_access + never_direct rules telling Squid to fetch content
requested on the unique port defined in 'd' from the peer defined in
'c', and only that content (see the sketch below).
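
Putting c, d and e together, a minimal squid.conf sketch (the port
numbers, peer address and name are illustrative assumptions, not a
tested configuration):

  # d) dedicated loopback port, used only for pre-loading the cache
  http_port 127.0.0.1:3127

  # c) the local HTTP server holding the content copies (here on port 8000)
  cache_peer 127.0.0.1 parent 8000 0 no-query no-digest name=preload

  # e) requests arriving on the pre-load port go to that peer, and only
  #    those; never go direct for them
  acl preloadport myport 3127
  cache_peer_access preload allow preloadport
  cache_peer_access preload deny all
  never_direct allow preloadport

Pre-loading (b) is then just requesting each URL through that port, e.g.
squidclient -h 127.0.0.1 -p 3127 http://origin.example.com/some/object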

Regards
Henrik



Re: [2.HEAD patch] Fix compilation on opensolaris

2009-10-12 Thread Henrik Nordstrom
Looks fine. Applied.

Regards
Henrik

mån 2009-10-12 klockan 23:35 +0200 skrev Kinkie:
 Sure. Attached.
 
  K.
 
 On Mon, Oct 12, 2009 at 10:05 PM, Henrik Nordstrom
 hen...@henriknordstrom.net wrote:
 Can you resend that as a unified diff please... not keen on applying
 contextless patches
 
 
  fre 2009-10-09 klockan 17:39 +0200 skrev Kinkie:
  Hi all,
 2.HEAD currently doesn't build on opensolaris, in at least some
  cases due to it not properly detecting kerberosv5 variants.
  The attached patch is a backport of some 3.HEAD changes which allows
  2.HEAD to build on opensolaris
 
  Please review and, if it seems OK to you, apply.
 
 
 
 
 
 



Re: [squid-users] HTTP Cache General Question

2009-10-12 Thread Henrik Nordstrom
fre 2009-10-09 klockan 18:26 +1300 skrev Amos Jeffries:

 Beyond that there is a lot of small pieces of work to make Squid capable 
 of contacting P2P servers and peers, intercept seed file requests, etc.

There is also the related topic of how to fight bad feeds of corrupted
data (intentional or erroneous).

All p2p networks have to fight this to various degrees, and to do so you
must know the p2p network in sufficient detail: who the trackers are,
how authoritative segment checksums are defined, how bad peers get
blacklisted, etc.

Regards
Henrik



Re: squid-smp

2009-10-12 Thread Henrik Nordstrom
fre 2009-10-09 klockan 01:50 -0400 skrev Sachin Malave:

 I think it is possible to have a thread , which will be watching
 AsyncCallQueue, if it finds an entry there then it will execute the
 dial() function.

Except that none of the dialed AsyncCall handlers is currently thread
safe.. all expect to be running in the main thread all alone..

 can we separate dispatchCalls() in EventLoop.cc for that purpose? We
 can have a thread executing distatchCalls() continuously and if error
 condition occurs it is written  in error shared variable. which
 is then read by main thread executing mainLoop... in the same way
 returned dispatchedSome can also be passed to main thread...

Not sure I follow.

Regards
Henrik



Re: CVE-2009-2855

2009-10-12 Thread Henrik Nordstrom
Not sure. Imho it's one of those small things where it is very questionable
whether it should have got a CVE # to start with.

For example RedHat downgraded the issue to low/low (lowest possible
rating) once explained what it really was about.

But we should probably notify CVE that the bug has been fixed.


tis 2009-10-13 klockan 11:14 +1300 skrev Amos Jeffries:
 Are we going to acknowledge this vulnerability with a SQUID:2009-N alert?
 The reports seem to indicate it can be triggered remotely by servers.
 
 It was fixed during routine bug closures a while ago so we just need to
 wrap up an explanation and announce the fixed releases.
 
 Amos



Re: CVE-2009-2855

2009-10-12 Thread Henrik Nordstrom
tis 2009-10-13 klockan 12:12 +1300 skrev Amos Jeffries:

 Okay, I've asked the Debian reporter for access to details.
 Lacking clear evidence of remote exploit  I'll follow along with the quiet
 approach.

The exploit is only possible if squid.conf is configured to extract
cookies, i.e. for logging or external_acl purposes.

 The CVE has reference to our bugs which are clearly closed. If there is
 more to be done to notify anyone can you let me know what that is please?

A mail to c...@mitre.org mentioning that the Squid bug is fixed may
work..

 the other CVE from this year are in similar states of questionable
 open/closed-ness.

?

There have been 5 CVEs issued for Squid in 2009... I only classify this
one low and the transparent ip interception mess CVE-2009-0801 as minor;
the other 3 are all fairly major..


http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-0478
Squid 2.7 to 2.7.STABLE5, 3.0 to 3.0.STABLE12, and 3.1 to 3.1.0.4 allows
remote attackers to cause a denial of service via an HTTP request with
an invalid version number, which triggers a reachable assertion in (1)
HttpMsg.c and (2) HttpStatusLine.c.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-0801
Squid, when transparent interception mode is enabled, uses the HTTP Host
header to determine the remote endpoint, which allows remote attackers
to bypass access controls for Flash, Java, Silverlight, and probably
other technologies, and possibly communicate with restricted intranet
sites, via a crafted web page that causes a client to send HTTP requests
with a modified Host header.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-2621
Squid 3.0 through 3.0.STABLE16 and 3.1 through 3.1.0.11 does not
properly enforce buffer limits and related bound checks, which allows
remote attackers to cause a denial of service via (1) an incomplete
request or (2) a request with a large header size, related to (a)
HttpMsg.cc and (b) client_side.cc.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-2622
Squid 3.0 through 3.0.STABLE16 and 3.1 through 3.1.0.11 allows remote
attackers to cause a denial of service via malformed requests including
(1) missing or mismatched protocol identifier, (2) missing or negative
status value, (3) missing version, or (4) missing or invalid status
number, related to (a) HttpMsg.cc and (b) HttpReply.cc.

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-2855
The strListGetItem function in src/HttpHeaderTools.c in Squid 2.7 allows
remote attackers to cause a denial of service via a crafted auth header
with certain comma delimiters that trigger an infinite loop of calls to
the strcspn function.





Re: CVE-2009-2855

2009-10-12 Thread Henrik Nordstrom
tis 2009-10-13 klockan 12:12 +1300 skrev Amos Jeffries:

 Okay, I've asked the Debian reporter for access to details.
 Lacking clear evidence of remote exploit  I'll follow along with the quiet
 approach.

Right.. meant to provide the details as well but forgot... It can be
found in the RedHat bug report.
https://bugzilla.redhat.com/show_bug.cgi?id=518182

A sample test case is as follows:

-- test-helper.sh (executable) ---
#!/bin/sh
while read line; do
  echo OK
done
-- end test-helper.sh

-- squid.conf  (before where access is normally allowed) --
external_acl_type test %{Test:;test} /path/to/test-helper.sh
acl test external test
http_access deny !test
-- end squid.conf --

-- test command --
/usr/bin/squidclient -H "Test: a, b, test=test\n" http://www.squid-cache.org/
-- end test command --


 The CVE has reference to our bugs which are clearly closed. If there is
 more to be done to notify anyone can you let me know what that is please?
 the other CVE from this year are in similar states of questionable
 open/closed-ness.

Ah, now I get what you mean.

Yes, we should be more active in giving vendor feedback to CVE in
general.. Contacting

   c...@mitre.org

is a good start I guess.

Regards
Henrik



Re: CVE-2009-2855

2009-10-12 Thread Henrik Nordstrom
tis 2009-10-13 klockan 12:40 +1300 skrev Amos Jeffries:

 Mitre still list them all as Under Review.

That's normal.. still collecting information.

Regards
Henrik



Re: [squid-users] HTTP Cache General Question

2009-10-09 Thread Henrik Nordstrom
fre 2009-10-09 klockan 09:33 -0400 skrev Mark Schall:

 Peer 1 sends HTTP Request to Peer 2 with
 www.tracker.com/someuniqueidentifierforchunkoffile in the header.
 Would Squid or other Web Caches try to contact www.tracker.com instead
 of the Peer 2, or will it forward the request onward to Peer 2.

HTTP does not carry both host and address details. HTTP has a URL. If a
client requests

  http://www.tracker.com/someuniqueidentifierforchunkoffile

from the proxy then the proxy will
request /someuniqueidentifierforchunkoffile from www.tracker.com.
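
I.e. with a configured proxy the request on the wire uses the absolute
URL form (standard HTTP proxy syntax):

  GET http://www.tracker.com/someuniqueidentifierforchunkoffile HTTP/1.1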

If the client does direct connections (not configured for using a proxy)
then it MAY connect to another host and request

  GET /someuniqueidentifierforchunkoffile
  Host: www.tracker.com

from there. But if that is intercepted by an intercepting HTTP proxy then
the proxy will generally use the host from the Host header instead of
the intercepted destination address.

Regards
Henrik



Re: [squid-users] Squid ftp authentication popup

2009-10-06 Thread Henrik Nordstrom
ons 2009-10-07 klockan 10:06 +1300 skrev Amos Jeffries:

 Firefox-3.x will happily popup the ftp:// auth dialog if the proxy-auth
 header is sent.
 There were a few bugs which got fixed in the 3.1 re-writes and made squid
 start to send it properly. It's broken in 3.0, not sure if it's the same in
 2.x but would assume so. The fixes done rely on C++ objects so won't be easy
 to port.

In what ways is 3.0 broken?

The visible changes I see are that 3.1 only prompts if required by the
FTP server, and that the realm for some reason is changed to also
include the requested server name. 401 basic auth realms are implicitly
unique to each server name. (digest auth is a little fuzzier as it may
apply to more domains/servers)

Regards
Henrik



Re: [squid-users] Squid ftp authentication popup

2009-10-06 Thread Henrik Nordstrom
ons 2009-10-07 klockan 13:09 +1300 skrev Amos Jeffries:

 3.0 uses a generic fail() mechanism to send results back. That mechanism
 seems not to add the Proxy-Auth reply header at all. 3.0 also was only
 parsing the URL and config file. Popup re-sends contain the auth in headers
 not URL.

Strange. My 3.0 responds as

HTTP/1.0 401 Unauthorized
Server: squid/3.0.STABLE19-BZR
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
WWW-Authenticate: Basic realm="ftp username"

and relays Authorization properly. It however rejects any login other
than the one supplied in the URL.

Squid-2 behaves the same.

Regards
Henrik



[Fwd: [squid-users] Akamai's new patent 7596619]

2009-10-05 Thread Henrik Nordstrom
Patent office operations at its best...
---BeginMessage---

Take a look at this patent, granted on September 29, 2009

HTML delivery from edge-of-network servers in a content delivery network
(CDN) 

Abstract
A content delivery network is enhanced to provide for delivery of cacheable
markup language content files such as HTML. To support HTML delivery, the
content provider provides the CDNSP with an association of the content
provider's domain name (e.g., www.customer.com) to an origin server domain
name (e.g., html.customer.com) at which one or more default HTML files are
published and hosted. The CDNSP provides its customer with a CDNSP-specific
domain name. The content provider, or an entity on its behalf, then
implements DNS entry aliasing (e.g., a CNAME of the host to the
CDNSP-specific domain) so that domain name requests for the host cue the CDN
DNS request routing mechanism. This mechanism then identifies a best content
server to respond to a request directed to the customer's domain. The CDN
content server returns a default HTML file if such file is cached;
otherwise, the CDN content server directs a request for the file to the
origin server to retrieve the file, after which the file is cached on the
CDN content server for subsequent use in servicing other requests. The
content provider is also provided with log files of CDNSP-delivered HTML.

http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&co1=AND&d=PTXT&s1=Akamai&OS=Akamai&RS=Akamai

-- 
View this message in context: 
http://www.nabble.com/Akamai%27s-new-patent-7596619-tp25727550p25727550.html
Sent from the Squid - Users mailing list archive at Nabble.com.
---End Message---


Re: [PATCH] warning: `squid' uses 32-bit capabilities

2009-10-05 Thread Henrik Nordstrom
tis 2009-10-06 klockan 00:46 +1300 skrev Amos Jeffries:

 I'm going to dare and hope that the fix really is this simple :)

The right fix is actually using libcap instead of the raw kernel
interface...

Regards
Henrik



Re: R: monitoring squid environment data sources

2009-10-03 Thread Henrik Nordstrom
ons 2009-09-30 klockan 16:57 +0200 skrev Kinkie:
 On Wed, Sep 30, 2009 at 2:00 PM, Henrik Nordstrom
 hen...@henriknordstrom.net wrote:
  Been thinking a little further on this, and came to the conclusion that
  effort is better spent on replacing the signal based -k interface with
  something workable, enabling a control channel into the running Squid to
  take certain actions.

 This warrants a Features/ wiki page IMO

Hmm.. sure it wasn't there already? It's a feature which has been
discussed for a decade or so..

Regards
Henrik



Re: R: monitoring squid environment data sources

2009-10-03 Thread Henrik Nordstrom
lör 2009-10-03 klockan 21:19 +0200 skrev Kinkie:
 I could not find it...
 If it's a dupe, I apologize

Well most discussions were before we had a Features section in the
wiki...





Re: Segfault in HTCP CLR request on 64-bit

2009-10-02 Thread Henrik Nordstrom
fre 2009-10-02 klockan 02:52 -0400 skrev Matt W. Benjamin:
 Bzero?  Is it an already-allocated array/byte sequence?  (Apologies, I 
 haven't seen the code.)  Assignment to NULL/0 is in fact correct for 
 initializing a sole pointer, and using bzero for that certainly isn't 
 typical.  Also, for initializing a byte range, memset is preferred [see Linux 
 BZERO(3), which refers to POSIX.1-2008 on that point].
 
 STYLE(9) says use NULL rather than 0, and it is clearer.  But C/C++ 
 programmers should know that NULL is 0.  And note that at least through 1998, 
 initialization to 0 was the preferred style in C++, IIRC.

You are both right.

The whole structure should be zeroed before being filled in, to avoid
accidental leakage of random values from the stack, which also makes the
explicit assignment redundant.

bzero is not the right call (BSD specific), memset is preferred.
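
In code, the pattern argued for looks like this (a generic sketch, not
the actual Squid-2 htcp structures):

  #include <string.h>

  struct reply_t {
      int code;
      const char *msg;  /* pointer members come out NULL after the memset */
  };

  static void
  build_reply(struct reply_t *r)
  {
      memset(r, 0, sizeof(*r)); /* zero the whole struct first: nothing
                                 * from the stack can leak out, and an
                                 * explicit msg = NULL becomes redundant */
      r->code = 200;            /* then fill in only what is needed */
  }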

In C (which is what Squid-2 is written in) NULL is the right initializer
for pointers in all contexts.

C++ is different... no universally accepted pointer initializer value
there due to the slightly different type checks on pointers, often
needing casting.

But something is fishy here.. see my comment in bugzilla.

Regards
Henrik



Re: Segfault in HTCP CLR request on 64-bit

2009-10-02 Thread Henrik Nordstrom
fre 2009-10-02 klockan 11:48 -0400 skrev Jason Noble:
 Sorry, I went to bugzilla before reading all the e-mails here.  As I 
 commented on the bug report, there is nothing fishy going on.  
 While strlen(NULL) will always segfault, htcpBuildCountstr() wraps the 
 strlen() call with a check for a NULL pointer:
 
 260        if (s)
 261            len = strlen(s);
 262        else
 263            len = 0;

Great. Then the memset is sufficient. Case closed.

Regards
Henrik



Re: Build failed in Hudson: 3.HEAD-i386-FreeBSD-6.4 #72

2009-09-30 Thread Henrik Nordstrom
ons 2009-09-30 klockan 08:06 +0200 skrev Kinkie:
  Several of the build jobs appeared to be pulling from www.squid-cache.org
  which has not been updated since 10 Sept. That's now fixed, so we shall see if the
  update affects this error
 
 I'm currently rebasing all jobs so that they pull from bzr.squid-cache.org
 
 Where should I point my bzr+ssh branches to?

I would recommend

bzr+ssh://bzr.squid-cache.org/bzr/squid3/

For now squid-cache.org also works, but that may change later on.

bzr+ssh://squid-cache.org/bzr/squid3/

Using www.squid-cache.org for bzr is not a good idea.

Regards
Henrik



Re: R: monitoring squid environment data sources

2009-09-30 Thread Henrik Nordstrom
Been thinking a little further on this, and came to the conclusion that
effort is better spent on replacing the signal based -k interface with
something workable, enabling a control channel into the running Squid to
take certain actions.

At least on Linux many of the monitors needed are easier implemented
externally, just telling Squid what to do when needed.

Regards
Henrik



Re: monitoring squid environment data sources

2009-09-29 Thread Henrik Nordstrom
tis 2009-09-29 klockan 14:06 +1200 skrev Amos Jeffries:

 It seems to me that the master process might be assigned to monitor the
 upstate of the child process and additionally set a watch on

Except that the master process is entirely optional and generally not
even desired if you use a smart init system like upstart.

Additionally, a full reconfigure only because resolv.conf changed is a
bit on the heavy side imho.

And finally, as both the resolv.conf and hosts paths are configurable from
squid.conf, the currently used path and what the master process remembers
may differ.

 local host IPs (bit tricky on non-windows as this may block)?

Linux uses netlink messages; non-blocking access is available.
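
A rough sketch of what that looks like (Linux-specific; error handling
and message parsing omitted):

  #include <sys/socket.h>
  #include <linux/netlink.h>
  #include <linux/rtnetlink.h>
  #include <string.h>
  #include <unistd.h>

  /* Open a non-blocking netlink socket reporting local address changes.
   * The returned fd can be polled from the main event loop like any other. */
  static int
  open_addr_monitor(void)
  {
      int fd = socket(AF_NETLINK, SOCK_RAW | SOCK_NONBLOCK, NETLINK_ROUTE);
      if (fd < 0)
          return -1;

      struct sockaddr_nl sa;
      memset(&sa, 0, sizeof(sa));
      sa.nl_family = AF_NETLINK;
      sa.nl_groups = RTMGRP_IPV4_IFADDR | RTMGRP_IPV6_IFADDR;

      if (bind(fd, (struct sockaddr *) &sa, sizeof(sa)) < 0) {
          close(fd);
          return -1;
      }
      /* read() now delivers RTM_NEWADDR/RTM_DELADDR messages as they happen */
      return fd;
  }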


I think my preference is to get these monitors into the main process.

Regards
Henrik



Re: [noc] Build failed in Hudson: 3.HEAD-amd64-CentOS-5.3 #118

2009-09-28 Thread Henrik Nordstrom
mån 2009-09-28 klockan 12:21 +1200 skrev Amos Jeffries:
 If you like, I thought the point of your change was that one of the
 libraries missing was not fatal.

Neither is required for enabling ESI. There is also the default custom
parser which is self-contained and always built.

 And that --with-X meant only that X should
 be used if possible.  Possibly the absence of both is fatal, in which case
 the WARN at the end of my change should be made an ERROR again.

The parsers are best seen as plugins, adding features.

One selects at runtime via the esi_parser squid.conf directive which
parser to use among the available ones.

The point of having configure options for these is only to get a
controlled build where one knows what features have been enabled if the
build was successful, and where building fails if those features can not
be built.

Regards
Henrik



Re: assert(e-mem_status == NOT_IN_MEMORY) versus TCP_MEM_HIT.

2009-09-28 Thread Henrik Nordstrom
mån 2009-09-28 klockan 12:04 +1200 skrev Amos Jeffries:

 I'm hitting the case approximately once every 10-15 minutes on the CDN
 reverse proxies. More when bots run through this particular clients
 website.  It's almost always on these small files (~10K) retrieved
 long-distance in reverse proxy requests. They arrive in two chunks
 200-300ms apart. swapin race gets lost at some point between the two reads.

Size is not very important here. It's just about swapin timing and
frequency of swapin requests. The longer disk reads take, the higher the
probability.

 Content-Length is available and a buffer can be allocated (or existing one
 extended) for memory storage immediately instead of a disk file opened,
 modulo the min/max caching settings.

The open is not about memory, it's about being sure the known-to-be-on-disk
data can be read in when required, even if I/O happens to be so
overloaded at that time that swapin requests are rejected.

 All the other cases (too big for memory, no content-length etc) can go back
 to the old file open if need be.

Yes, and have to.

Regards
Henrik



Re: ESI auto-enable

2009-09-28 Thread Henrik Nordstrom
Amos, in response to your IRC question:

with these ESI changes are you confident enough that we can now
auto-enable ESI for 3.1.0.14?

Answer:

Not until the autoconf foo for the parsers has settled in trunk.
A default build of 3.1 should not strictly require libxml2 and expat, and
should build without them.

Regards
Henrik



Re: assert(e-mem_status == NOT_IN_MEMORY) versus TCP_MEM_HIT.

2009-09-27 Thread Henrik Nordstrom
sön 2009-09-27 klockan 12:55 +1300 skrev Amos Jeffries:

 Ah, okay gotcha.
 So...
   (c) for people needing a quick patch.
   (b) to be committed (to meet 3.2 performance goals, saving useless 
 disk operations, etc etc).

The number of times 'b' as discussed here will be hit is negligible. Not
sure it's worth trying to optimize this.

But the bigger-picture 'b' may be worthwhile to optimize a bit, namely
better management of swapin requests. Currently there is one open disk
cache handle per concurrent client; just one for all swapin clients
should be sufficient.. but that requires the store io interface
implementation to be cleaned up a bit, allowing multiple outstanding read
operations on the same handle but processed one at a time to avoid seek
issues..

Regards
Henrik




Re: Build failed in Hudson: 3.HEAD-amd64-CentOS-5.3 #118

2009-09-27 Thread Henrik Nordstrom
sön 2009-09-27 klockan 03:02 +0200 skrev n...@squid-cache.org:

 [Henrik Nordstrom hen...@henriknordstrom.net] Make ESI parser modules expat 
 and libxml2 dependent on their libraries
 
 The ESI parser system is actually pluggable. There is no reason we should
 require expat and libxml2. Just build what works.

Sorry about that. Forgot to check compile without ESI enabled..

Amos, regarding your change. Shouldn't the --with/--without force those
libraries? Having detection within a --with seems wrong to me..

My preference for trunk is

  --with-...  require that library to be present, fail the build if not.

  --without-...  don't use that lib even if present

Regards
Henrik



Re: assert(e-mem_status == NOT_IN_MEMORY) versus TCP_MEM_HIT.

2009-09-26 Thread Henrik Nordstrom
lör 2009-09-26 klockan 18:37 +1200 skrev Amos Jeffries:

 Something seems a bit weird to me there...
 
 (c) being harmless race condition?

It is harmless, the only ill effect is that the swap file is opened when
it does not need to be, just as would happen if the request had arrived a
few ms earlier.

Clients starting when an object is not fully in memory always open the
disk object to be sure they can get the whole response, even if most
times they do not need to read anything from that on-disk object.

 Surely it's only harmless if we do (b) by changing the assert to a 
 self-fix action?

The self-fix is already there in that the actual data will all be copied
from memory. It's just that not all data was in memory when the request
started (store client created) but then when doCopy was called the first
time it was.

Or at least that's my assumption on what has happened. To know for sure,
the object needs to be analyzed, extracting the expected size and min & max
in-memory marks, to rule out that it's an object that has got erroneously
marked as in-memory. But I am pretty sure my guess is right.

Regards
Henrik



Re: Squid 3.1 kerb auth helper

2009-09-26 Thread Henrik Nordstrom
lör 2009-09-26 klockan 11:43 +0100 skrev Markus Moeller:
 Is this a real issue or just to be compliant with debian rules ?  Can you 
 give me more details ?

It's the same issue I had with squid_kerb_auth when trying to package
3.1 for Fedora and which you helped to get fixed.

http://www.squid-cache.org/Versions/v3/3.1/changesets/b9719.patch


Amos, please merge at least 10000 and 10002 and roll a new 3.1 release
when possible. 3.1.0.13 is just not cutting it for distro packaging, and
the list of patches needed to get a reasonable 3.1 is rather long
now...

Required packaging patches:

Correct squid_kerb_auth compile/link flags to avoid bad runpath settings etc
http://www.squid-cache.org/Versions/v3/3.1/changesets/b9719.patch

Cleanup automake-foo a bit in errors/ (fixes lang symlinks when using
make install DESTDIR=...)
http://www.squid-cache.org/Versions/v3/3.1/changesets/b9720.patch

Install error page templates properly. (correction to the above)
http://www.squid-cache.org/Versions/v3/3.1/changesets/squid-3.1-9743.patch


Patches which may bite some packagers depending on compilers and enabled
Squid features:

Better const-correctness on FTP login parse (newer GCC barfing)
http://www.squid-cache.org/Versions/v3/3.1/changesets/b9694.patch

Fixup libxml2 include magics, was failing when a configure cache was
used (ESI related)
http://www.squid-cache.org/Versions/v3/3.1/changesets/b9713.patch

Bug #2734: fix compile errors from CBDATA_CLASS2()
http://www.squid-cache.org/Versions/v3/3.1/changesets/b9716.patch

Make ESI behave reasonable when built but not used
http://www.squid-cache.org/Versions/v3/3.1/changesets/squid-3.1-9738.patch

Bug #2777: Don't know how to make target `-lrt' on OpenSolaris (not yet
in 3.1)
http://www.squid-cache.org/Versions/v3/HEAD/changesets/squid-3-10000.patch


Other patches I ranked as critical for Fedora but unrelated to packaging

Bug #2718: FTP sends EPSV2 on ipv4 connection
http://www.squid-cache.org/Versions/v3/3.1/changesets/b9696.patch

Bug #2541: Hang in 100% CPU loop while extracting header details using a
delimiter other than comma
http://www.squid-cache.org/Versions/v3/3.1/changesets/b9704.patch

Bug #2745: Invalid response error on small reads
http://www.squid-cache.org/Versions/v3/3.1/changesets/b9707.patch

Bug #2624: Invalid response for IMS request
http://www.squid-cache.org/Versions/v3/3.1/changesets/squid-3.1-9737.patch

Bug #2773: Segfault in RFC2069 Digest authentication  (not yet in 3.1)
http://www.squid-cache.org/Versions/v3/HEAD/changesets/squid-3-10002.patch

Regards
Henrik



Re: Build failed in Hudson: 3.HEAD-i386-FreeBSD-6.4 #72

2009-09-25 Thread Henrik Nordstrom
fre 2009-09-25 klockan 14:49 +0200 skrev n...@squid-cache.org:

sed: 1: "s%@DEFAULT_HTTP_PORT@% ...": unbalanced brackets ([])
 *** Error code 1

Hmm.. what is this about?

The actual failing line is

sed "s%@DEFAULT_HTTP_PORT@%3128%g;
s%@DEFAULT_ICP_PORT@%3130%g;
s%@DEFAULT_CACHE_EFFECTIVE_USER@%nobody%g;
s%@DEFAULT_MIME_TABLE@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/etc/mime.conf%g;
s%@DEFAULT_DNSSERVER@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/libexec/`echo dnsserver | sed 's,x,x,;s/$//'`%g;
s%@DEFAULT_UNLINKD@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/libexec/`echo unlinkd | sed 's,x,x,;s/$//'`%g;
s%@DEFAULT_PINGER@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/libexec/`echo pinger | sed 's,x,x,;s/$//'`%g;
s%@DEFAULT_DISKD@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/libexec/`echo diskd | sed 's,x,x,;s/$//'`%g;
s%@DEFAULT_CACHE_LOG@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/var/logs/cache.log%g;
s%@DEFAULT_ACCESS_LOG@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/var/logs/access.log%g;
s%@DEFAULT_STORE_LOG@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/var/logs/store.log%g;
s%@DEFAULT_PID_FILE@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/var/squid.pid%g;
s%@DEFAULT_NETDB_FILE@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/var/logs/netdb.state%g;
s%@DEFAULT_SWAP_DIR@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/var/cache%g;
s%@DEFAULT_ICON_DIR@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/share/icons%g;
s%@DEFAULT_ERROR_DIR@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/share/errors%g;
s%@DEFAULT_CONFIG_DIR@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst/etc%g;
s%@DEFAULT_PREFIX@%/home/hudson/workspace/3.HEAD-i386-FreeBSD-6.4/btlayer-00-default/squid-3.HEAD-BZR/_inst%g;
s%@DEFAULT_HOSTS@%/etc/hosts%g;
s%@[V]ERSION@%3.HEAD-BZR%g;"


I can't see anything wrong with that, certainly no unbalanced brackets..

Note: the [V]ERSION thing, which is the only brackets in that line, is a hack to 
get around VERSION being some magic keyword IIRC, and has been there since 2003.

Regards
Henrik



changeset filename change

2009-09-24 Thread Henrik Nordstrom
As Amos already noticed the changesets in /Versions/v3/X/changesets/
have got a new file name style. They are now named

  squid-version-bzrrevision.patch

instead of just

  bbzrrevision.patch

The reason for this is that bzr numbers revisions per branch, which makes
it hard to keep track of the changesets when working with multiple
versions. Additionally I have always found it a bit confusing to deal
with downloaded patches carrying just a revision number and no project
name or version (e.g. squid-3.1-9743.patch instead of b9743.patch)... This
change should have been done years ago, before the changesets were put
into production, but..

The change is only effective for new changesets generated in the last
week or so.

Regards
Henrik



bzr revision 10000 reached!

2009-09-24 Thread Henrik Nordstrom
Congratulations to Amos for making revision 10K

http://www.squid-cache.org/Versions/v3/HEAD/changesets/squid-3-10000.patch

Regards
Henrik



Re: Build failed in Hudson: 2.HEAD-i386-Debian-sid #57

2009-09-22 Thread Henrik Nordstrom
ons 2009-09-23 klockan 00:07 +1200 skrev Amos Jeffries:
 n...@squid-cache.org wrote:
  See http://build.squid-cache.org/job/2.HEAD-i386-Debian-sid/57/
  
  --
  Started by upstream project 2.HEAD-amd64-CentOS-5.3 build number 118
  Building remotely on rio.treenet
  cvs [checkout aborted]: Name or service not known
  FATAL: CVS failed. exit code=1
  
 
 So what do I have to do to fix this after your changes the other day, 
 Kinkie?

It needs to do a checkout from the main CVS repository, not the
SourceForge one.

Regards
Henrik



Re: wiki, bugzilla, feature requests

2009-09-22 Thread Henrik Nordstrom
tis 2009-09-22 klockan 23:27 +1000 skrev Robert Collins:
 I'm proposing:
  - if there is a bug for something, and a wiki page, link them together.
  - scheduling, assignment, and dependency data should be put in bugs
  - whiteboards to sketch annotate document etc should always be in the
 wiki

I fully second this opinion.

Wiki is great for documentation and the like, but very poor for tracking
progress (or lack thereof).

Regards
Henrik



Re: ip masks, bug #2601 2141

2009-09-20 Thread Henrik Nordstrom
Hmm.. thinking here.

Not sure we should warn this loudly on clean IPv4 netmasks. People are
very used to those, and they do not really produce any problems for us.

But we definitely SHOULD barf loudly on odd masks, or even outright
reject them as fatal configuration errors when used in the ip acl.

Which brings up the next issue. There are configurations which intentionally
make use of odd IPv4 netmasks to simplify the config, even if limited
to a single expression per acl. To support these we should add back the
functionality by adding a maskedip acl type using a linear list (basically
a copy of the ip acl, changing the store method from splay to list).
It is questionable whether this maskedip acl type should support IPv6. An
alternative name would be ipv4mask.
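
For illustration, such an acl could look like this (hypothetical syntax,
since the maskedip type is only being proposed here; the mask is a
made-up example of a non-contiguous one):

  # non-contiguous mask: first octet must be 10, third octet must be 0
  # or 1, all other bits are free -- impossible as a CIDR prefix
  acl oddnets maskedip 10.0.0.0/255.0.254.0
  http_access deny oddnets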



mån 2009-09-21 klockan 09:06 +1200 skrev Amos Jeffries:
 
 revno: 9996
 committer: Amos Jeffries squ...@treenet.co.nz
 branch nick: trunk
 timestamp: Mon 2009-09-21 09:06:24 +1200
 message:
   Bug 2601: pt 2: Mixed v4/v6 src acl leads to TCP_DENIED
   
- Remove 'odd' netmask support from ACL.
- Fully deprecate netmask support for ACL.
   
   Earlier fix caused inconsistent handling between IPv4 and IPv6 builds of
   Squid. Which has turned out to be a bad idea.
   This fixes that by 'breaking' both build alternatives.




Re: myport and myip differences between Squid 2.7 and 3.1 when running in intercept mode

2009-09-18 Thread Henrik Nordstrom
fre 2009-09-18 klockan 11:13 +1000 skrev James Brotchie:

 On Squid 2.7 the intercepted acl matches whilst in 3.1 it doesn't.

In 2.7 the myport and myip acls are very unreliable in interception
mode. Whether these give the local endpoint or the original destination
endpoint depends on the request received..

 Digging deeper into the Squid 3.1 source it seems that if a http_port
 is set to intercept then the me member of ConnStateData, which is
 normally the proxy's ip and listening port, is replaced by the pre-NAT
 destination ip and port.

And in 2.7 it just sometimes is, i.e. when the original destination is
required to resolve the request.

And on some OSes it is always replaced, depending on how the original
destination information is given to Squid.

Regards
Henrik



Re: /bzr/squid3/trunk/ r9985: Remove 'NAT' lookup restrictions from TPROXY lookups.

2009-09-18 Thread Henrik Nordstrom
fre 2009-09-18 klockan 18:35 +1200 skrev Amos Jeffries:

 +/* NAT is only available in IPv6 */
 +if ( !me.IsIPv4()   ) return -1;
 +if ( !peer.IsIPv4() ) return -1;
 +


Code & comment do not seem to match, to me...

Regards
Henrik



Re: R: Squid 3 build errors on Visual Studio - problem still present

2009-09-17 Thread Henrik Nordstrom
tor 2009-09-17 klockan 11:15 +0200 skrev Guido Serassio:

 It fails:
 
 vs_string.cc
 c:\work\vc_string\vs_string.cc(1) : error C2371: 'size_t' :
 redefinition; different basic types

Gah, should have been unsigned long.. but interesting that VS apparently
has size_t built-in. It was declared in the preprocessed source as

typedef __w64 unsigned int   size_t;

 c:\work\vc_string\vs_string.cc : see declaration of 'size_t'
 c:\work\vc_string\vs_string.cc(34) : error C2057: expected constant
 expression

Good, so it seems the test case worked.

Now replace std with testcase and try again, both in the namespace and the
failing assignment, just to make sure it's not tripping over something
else built-in.

 Do you need the preprocessed source ?

No, that was preprocessed already with no includes or other preprocessor
directives.

Regards
Henrik



Re: Build failed in Hudson: 2.HEAD-amd64-CentOS-5.3 #11

2009-09-17 Thread Henrik Nordstrom
tor 2009-09-17 klockan 01:00 +0200 skrev n...@squid-cache.org:
 See http://build.squid-cache.org/job/2.HEAD-amd64-CentOS-5.3/11/
 
 --
 A SCM change trigger started this job
 Building on master
 [2.HEAD-amd64-CentOS-5.3] $ cvs -Q -z3 -d 
 :pserver:anonym...@cvs.devel.squid-cache.org:/cvsroot/squid co -P -d 
 workspace -D Wednesday, September 16, 2009 11:00:32 PM UTC squid
 $ computing changelog
 Fatal error, aborting.
 anoncvs_squid: no such system user
 ERROR: cvs exited with error code 1
 Command line was [Executing 'cvs' with arguments:
 '-d:pserver:anonym...@cvs.devel.squid-cache.org:/cvsroot/squid'

Why are you pulling from cvs.devel (SourceForge)?

Please pull from the main repository instead.

Regards
Henrik



[MERGE] Clean up htcp cache_peer options collapsing them into a single option with arguments

2009-09-17 Thread Henrik Nordstrom
The list of HTCP mode options had grown a bit too large. Collapse them
all into a single htcp= option taking a list of mode flags.

# Bazaar merge directive format 2 (Bazaar 0.90)
# revision_id: hen...@henriknordstrom.net-20090917222032-\
#   nns17iudtio5jovr
# target_branch: http://www.squid-cache.org/bzr/squid3/trunk/
# testament_sha1: 8bd7245c3b25c9acc89a037834f39bc71100b3ea
# timestamp: 2009-09-18 00:21:15 +0200
# base_revision_id: amosjeffr...@squid-cache.org-20090916095346-\
#   m7liji2knguolxxw
# 
# Begin patch
=== modified file 'src/cache_cf.cc'
--- src/cache_cf.cc	2009-09-15 11:59:51 +0000
+++ src/cache_cf.cc	2009-09-17 22:11:15 +0000
@@ -1753,30 +1753,41 @@
     } else if (!strcasecmp(token, "weighted-round-robin")) {
         p->options.weighted_roundrobin = 1;
 #if USE_HTCP
-
     } else if (!strcasecmp(token, "htcp")) {
         p->options.htcp = 1;
     } else if (!strcasecmp(token, "htcp-oldsquid")) {
+	/* Note: This form is deprecated, replaced by htcp=oldsquid */
         p->options.htcp = 1;
         p->options.htcp_oldsquid = 1;
-    } else if (!strcasecmp(token, "htcp-no-clr")) {
-        if (p->options.htcp_only_clr)
-            fatalf("parse_peer: can't set htcp-no-clr and htcp-only-clr simultaneously");
-        p->options.htcp = 1;
-        p->options.htcp_no_clr = 1;
-    } else if (!strcasecmp(token, "htcp-no-purge-clr")) {
-        p->options.htcp = 1;
-        p->options.htcp_no_purge_clr = 1;
-    } else if (!strcasecmp(token, "htcp-only-clr")) {
-        if (p->options.htcp_no_clr)
-            fatalf("parse_peer: can't set htcp-no-clr and htcp-only-clr simultaneously");
-        p->options.htcp = 1;
-        p->options.htcp_only_clr = 1;
-    } else if (!strcasecmp(token, "htcp-forward-clr")) {
-        p->options.htcp = 1;
-        p->options.htcp_forward_clr = 1;
+    } else if (!strncasecmp(token, "htcp=", 5) || !strncasecmp(token, "htcp-", 5)) {
+	/* Note: The htcp- form is deprecated, replaced by htcp= */
+        p->options.htcp = 1;
+        char *tmp = xstrdup(token+5);
+        char *mode, *nextmode;
+        for (mode = nextmode = tmp; mode; mode = nextmode) {
+            nextmode = strchr(mode, ',');
+            if (nextmode)
+                *nextmode++ = '\0';
+            if (!strcasecmp(mode, "no-clr")) {
+                if (p->options.htcp_only_clr)
+                    fatalf("parse_peer: can't set htcp no-clr and only-clr simultaneously");
+                p->options.htcp_no_clr = 1;
+            } else if (!strcasecmp(mode, "no-purge-clr")) {
+                p->options.htcp_no_purge_clr = 1;
+            } else if (!strcasecmp(mode, "only-clr")) {
+                if (p->options.htcp_no_clr)
+                    fatalf("parse_peer: can't set htcp no-clr and only-clr simultaneously");
+                p->options.htcp_only_clr = 1;
+            } else if (!strcasecmp(mode, "forward-clr")) {
+                p->options.htcp_forward_clr = 1;
+            } else if (!strcasecmp(mode, "oldsquid")) {
+                p->options.htcp_oldsquid = 1;
+            } else {
+                fatalf("invalid HTCP mode '%s'", mode);
+            }
+        }
+        safe_free(tmp);
 #endif
-
     } else if (!strcasecmp(token, "no-netdb-exchange")) {
         p->options.no_netdb_exchange = 1;
 

=== modified file 'src/cf.data.pre'
--- src/cf.data.pre	2009-09-15 23:49:34 +0000
+++ src/cf.data.pre	2009-09-17 22:20:32 +0000
@@ -922,7 +922,7 @@
 
 	NOTE: The default if no htcp_access lines are present is to
 	deny all traffic. This default may cause problems with peers
-	using the htcp or htcp-oldsquid options.
+	using the htcp option.
 
 	This clause only supports fast acl types.
 	See http://wiki.squid-cache.org/SquidFaq/SquidAcl for details.
@@ -1682,22 +1682,23 @@
 	
 	htcp		Send HTCP, instead of ICP, queries to the neighbor.
 			You probably also want to set the icp-port to 4827
-			instead of 3130.
-	
-	htcp-oldsquid	Send HTCP to old Squid versions.
-	
-	htcp-no-clr	Send HTCP to the neighbor but without
+			instead of 3130. This directive accepts a comma separated
+			list of options described below.
+	
+	htcp=oldsquid	Send HTCP to old Squid versions (2.5 or earlier).
+	
+	htcp=no-clr	Send HTCP to the neighbor but without
 			sending any CLR requests.  This cannot be used with
-			htcp-only-clr.
-	
-	htcp-only-clr	Send HTCP to the neighbor but ONLY CLR requests.
-			This cannot be used with htcp-no-clr.
-	
-	htcp-no-purge-clr
+			only-clr.
+	
+	htcp=only-clr	Send HTCP to the neighbor but ONLY CLR requests.
+			This cannot be used with no-clr.
+	
+	htcp=no-purge-clr
 			Send HTCP to the neighbor including CLRs but only when
 			they do not result from PURGE requests.
 	
-	htcp-forward-clr
+	htcp=forward-clr
 			Forward any HTCP CLR requests this proxy receives to the peer.
 	
 	

# Begin bundle

Re: [MERGE] Clean up htcp cache_peer options collapsing them into a single option with arguments

2009-09-17 Thread Henrik Nordstrom
fre 2009-09-18 klockan 08:28 +1000 skrev Robert Collins:
 On Fri, 2009-09-18 at 00:22 +0200, Henrik Nordstrom wrote:
  the list of HTCP mode options had grown a bit too large. Collapse them
  all into a single htcp= option taking a list of mode flags.
 
 Its not clear from the docs whether folk should do
 htcp=foo htcp=bar
 or
 htcp=foo,bar

Both work.

But the documentation does say (after unwinding the patch):

	htcp		Send HTCP, instead of ICP, queries to the neighbor.
			You probably also want to set the icp-port to 4827
			instead of 3130. This directive accepts a comma separated
			list of options described below.
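
So, for example, both of these spellings end up equivalent (host names
and ports are just placeholders):

  cache_peer sibling1.example.com sibling 3128 4827 htcp=no-purge-clr,forward-clr
  cache_peer sibling2.example.com sibling 3128 4827 htcp=no-purge-clr htcp=forward-clr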

Regards
Henrik



Re: [PATCH] Log additional header for the navigation from BlackBerry Device

2009-09-15 Thread Henrik Nordstrom
mån 2009-09-14 klockan 18:58 +0200 skrev devzero2000:
 I hope this tiny patch can be useful also for other users, so I put it
 here for review and possible merge if you like.
 Thanks in advance
 
 Elia
 ~~~
 
  This patch permits logging the additional headers used by BlackBerry
 devices and removing these via the http_headers squid.conf directive.

As commented in bugzilla I don't quite see why the patch is needed.
Logging works equally well without the patch.

Adding header ids for new headers is only useful if you need to quickly
access these headers in the Squid code. Those header ids are not used by
the logging code, only the header name.

Regards
Henrik



Re: Why does Squid-2 return HTTP_PROXY_AUTHENTICATION_REQUIRED on http_access DENY?

2009-09-15 Thread Henrik Nordstrom
tis 2009-09-15 klockan 16:09 +1000 skrev Adrian Chadd:
 But in that case, ACCESS_REQ_PROXY_AUTH would be returned rather than
 ACCESS_DENIED..

Perhaps. It is a simple change, moving that logic from client_side.c to acl.c,
but it may cause unexpected effects in other access directives such as
cache_peer_access, where we don't want to challenge the user.

Why does it matter?

Regards
Henrik



Re: Squid-smp : Please discuss

2009-09-15 Thread Henrik Nordstrom
tis 2009-09-15 klockan 05:27 +0200 skrev Kinkie:

 I'm going to kick-start a new round then. If the approach has already
 been discussed, please forgive me and ignore this post.
 The idea is.. but what if we tried using a shared-nothing approach?

Yes, that was my preference in the previous round as well, and then moving
from there to add back shared aspects: using one process per CPU core,
non-blocking within that process, and maybe internal offloading to
threads in things like eCAP.

Having requests bounce between threads is generally a bad idea from a
performance perspective, and should only be done when there are obvious
offload benefits, where the operation to be performed is considerably
heavier than the transition between threads. Most operations are not..

Within the process I have been toying with the idea of using a message
based design rather than async calls with callbacks to further break up
and isolate components, especially in areas where adding back sharedness
is desired. But that's a side track, and same goals can be accomplished
with asynccall interfaces.

 Quick run-down: there is farm of processes, each with its own
 cache_mem and cache_dir(s). When a process receives a request, it
 parses it, hashes it somehow (CARP or a variation thereof) and defines
 if should handle it or if some other process should handle it. If it's
 some other process, it uses a Unix socket and some simple
 serialization protocol to pass around the parsed request and the file
 descriptor, so that the receiving process can pick up and continue
 servicing the request.

Just doing an internal CARP-type forwarding is probably preferable, even
if it adds another internal hop. Things like SSL complicate fully
moving the request.
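
A toy sketch of the per-worker hashing idea (rendezvous-style rather
than CARP's exact bit-mixing; the hash and the worker count are
placeholder choices):

  #include <stdint.h>

  /* FNV-1a, seeded per worker -- a stand-in for CARP's hash function */
  static uint32_t
  hash32(const char *s, uint32_t seed)
  {
      uint32_t h = 2166136261u ^ seed;
      while (*s) {
          h ^= (uint8_t) *s++;
          h *= 16777619u;
      }
      return h;
  }

  /* Every worker runs the same computation on the request URL and
   * arrives at the same owner, so no coordination is needed. */
  static int
  owner_of(const char *url, int n_workers)
  {
      int best = 0;
      uint32_t best_score = 0;
      for (int w = 0; w < n_workers; ++w) {
          uint32_t score = hash32(url, (uint32_t) w);
          if (score > best_score) {
              best_score = score;
              best = w;
          }
      }
      return best;
  }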

 There are some hairy bits (management, accounting, reconfiguration..)
 and some less hairy bits (hashing algorithm to use, whether there is a
 master process and a workers farm, or whether workers compete on
 accept()ing), but on a first sight it would seem a simpler approach
 than the extensive threading and locking we're talking about, AND it's
 completely orthogonal to it (so it could be viewed as a medium-term
 solution while AsyncCalls-ification remains active as a long-term
 refactoring activity, which will eventually lead to a true MT-squid)

I do not think we will ever see Squid become a true MT-squid without a
complete ground-up rewrite. Moving from a single thread with all-shared,
single-access data without locking to multithreaded is a very complex path.

Regards
Henrik



Re: R: Squid-smp : Please discuss

2009-09-15 Thread Henrik Nordstrom
tis 2009-09-15 klockan 08:39 +0200 skrev Guido Serassio:

 But MSYS + MinGW provides gcc 3.4.5 and the Squid 3 Visual Studio
 Project is based on Visual Studio 2005.

There is GCC 4.x for MinGW as well, which is what I have in my installations.
It is just not classified as the current production release for some reason,
which more and more people are ignoring today.

Regards
Henrik



  1   2   3   4   5   6   7   8   9   10   >