Re: Two patches for better heimdal support

2010-12-06 Thread Henrik Nordström
Applied

mån 2010-12-06 klockan 21:12 + skrev Markus Moeller:
> Sorry I have another addition for config.test
> 
> === modified file 'helpers/external_acl/kerberos_ldap_group/config.test'
> --- helpers/external_acl/kerberos_ldap_group/config.test   2010-12-05 00:25:25 +
> +++ helpers/external_acl/kerberos_ldap_group/config.test   2010-12-06 21:09:15 +
> @@ -9,6 +9,9 @@
>     if [ -f /usr/lib/libsasl.la -o -f /usr/lib/libsasl2.la ]; then
>         exit 0
>     fi
> +   if [ -f /usr/lib/libsasl.so -o -f /usr/lib/libsasl2.so ]; then
> +       exit 0
> +   fi
>     if [ -f /usr/local/lib/libsasl.so -o -f /usr/local/lib/libsasl2.so ]; then
>         exit 0
>     fi
> 
> Markus
> 
> 
> "Markus Moeller"  wrote in message 
> news:idde7p$8a...@dough.gmane.org...
> > Hi Amos,
> >
> >   Please find attached more patches for better Heimdal support, as newer
> > Heimdal versions have gssapi_krb5 header files which in the past were
> > only available in older MIT releases.
> >
> >   1) kerberos_ldap_group_header.diff fixes the gssapi_krb5 header issue
> >
> >  Secondly, to use kerberos_ldap_group on FreeBSD, config.test has to be
> > changed as FreeBSD installs additional packages in /usr/local. The
> > following patch addresses this:
> >
> >   2) kerberos_ldap_group_config.diff
> >
> >  Thirdly, on FreeBSD 7 the krb5.h file does not work with C++. This patch
> > checks for it:
> >
> >   3) kerberos_ldap_group_freebsd.diff  (it includes the
> > kerberos_ldap_group_header.diff patch)
> >
> > Regards
> > Markus
> >
> > "Markus Moeller"  wrote in message
> > news:ibpome$ps...@dough.gmane.org...
> >> Here is an update using only #if  / #elif  and changed the order a bit.
> >>
> >> Markus
> >>
> >> "Amos Jeffries"  wrote in message
> >> news:104be24899d2c3a232288ea0fa5a7...@mail.treenet.co.nz...
> >>> On Sun, 14 Nov 2010 18:37:39 -, "Markus Moeller"
> >>>  wrote:
>  Hi
> 
>   I noticed that the trunk does not compile on FreeBSD with Heimdal.
> >>> Here
>  are two patches against the trunk.
> 
>  Markus
> >>>
> >>> These appear to be reversions of the file-based inclusions. Would it not
> >>> be better just to add:
> >>>
> >>> +#if HAVE_GSSAPI_GSSAPI_EXT_H
> >>> +#include <gssapi/gssapi_ext.h>
> >>> +#endif
> >>>
> >>> to the end of the include lists?
> >>>
> >>> Also, it is adding quite a messy mix of ifdef and if defined(). Please
> >>> just use #if / #elif either way.
> >>>
> >>> Amos
> >>>
> >>>
> >>
> > 
> 




Re: bootstrap.sh version lookup

2010-12-06 Thread Henrik Nordström
sön 2010-12-05 klockan 23:37 +1300 skrev Amos Jeffries:

> Looking into the remaining autoconf warnings they are generated by old 
> macros in libtool. Seems to be harmless, just annoyingly verbose. That 
> will need upgrade as well to silence them.

Done. libtool upgraded to libtool-2.2.10

Regards
Henrik



Re: Feature branch launched: deheader

2010-12-06 Thread Henrik Nordström
sön 2010-12-05 klockan 15:28 +0100 skrev Kinkie:
> Hi all,
>   Eric Raymond recently released a tool named deheader
> (http://www.catb.org/esr/deheader/) which goes through a C/C++ project
> looking for unneeded includes.
> It does so by trying to compile each source file after removing one
> #include statement at a time, and seeing if it still builds.

Which is interesting, but a lot of filtering is needed to make use of
the results.

OS headers are often quite forgiving, allowing you to skip many
includes which are documented as required for a certain function. For
example, socket(2) officially requires sys/types.h and sys/socket.h, but
many OSes accept it if you just include sys/socket.h.
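
For illustration, the pattern in question (plain C; the point is only that the
first include is what the documentation requires, even where it happens to be
skippable):

    #include <sys/types.h>   /* documented as required for socket(2) ... */
    #include <sys/socket.h>  /* ... yet many OSes compile fine with only this one */

    int open_tcp_socket(void)
    {
        return socket(AF_INET, SOCK_STREAM, 0);
    }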

And we do have some magic in our own sources where stripping a Squid
include would leave that file still compiling but result in missing
functionality. Most of the time this will result in a link failure, however,
as long as config.h is always included one way or another.

I would use the tool very cautiously. It's interesting, but the results
cannot automatically be taken as the truth, only as a hint.

Personally I don't have a big problem with a few too many includes.

Regards
Henrik



Re: Fix 'can't create ./src/***: No such file' errors on linking

2010-12-06 Thread Henrik Nordström
tis 2010-12-07 klockan 03:06 +1300 skrev Amos Jeffries:

>   Fix 'can't create ./src/***: No such file' errors on linking
>   
>   Side effect of the AIX port fixes using .o instead of duplicating and
>   re-building the .cc individually. "It seemed a good idea at the time"(tm).
>   It is a bit strange that it should only show up now, those changes were
>   made long ago.

No idea, but for some reason /src/tests does not get created, and
as a result it fails to compile the objects in there.

Note: I would prefer to avoid copying if possible. I find the copying
quite confusing.

Regards
Henrik



Re: Two patches for better heimdal support

2010-12-06 Thread Henrik Nordström
mån 2010-12-06 klockan 20:26 + skrev Markus Moeller:
> Hi Henrik,
> 
> That seems to be from another patch:

Rather seems Amos already fixed it. Sorry for the noise.

Regards
Henrik



Re: Two patches for better heimdal support

2010-12-06 Thread Henrik Nordström
The build farm now fails on kerberos_ldap_group due to int/time_t type
mismatches:

../../../../helpers/external_acl/kerberos_ldap_group/support_log.cc: In function `const char* LogTime()':
../../../../helpers/external_acl/kerberos_ldap_group/support_log.cc:44: error: invalid conversion from `long int*' to `const time_t*'
../../../../helpers/external_acl/kerberos_ldap_group/support_log.cc:44: error: initializing argument 1 of `tm* localtime(const time_t*)'

I assume this is related to the heimdal support patches.
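
For illustration, the usual shape of the fix for this kind of error (a sketch,
not the actual committed change):

    /* use a time_t, not a long, so &now matches the const time_t*
       that localtime() expects */
    time_t now = time(NULL);
    struct tm *tm = localtime(&now);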


Regards
Henrik



Re: bootstrap.sh version lookup

2010-12-04 Thread Henrik Nordström
sön 2010-12-05 klockan 13:19 +1300 skrev Amos Jeffries:

> IIRC those older RHEL boxes you support Alex were the worst cases for 
> legacy software. If you have no problem it gets my +1 as well.

+2 then. Done. All the search functionality is kept if needed; it just
defaults to using the first version found, which is the system default.

> While we are at it can we rename the configure.in to configure.ac ?

Agreed. Done.

Regards
Henrik



Re: bootstrap.sh version lookup

2010-12-04 Thread Henrik Nordström
lör 2010-12-04 klockan 16:16 -0700 skrev Alex Rousskov:

> Removal of search may break builds on boxes where the older autotools 
> are the default ones but Squid was able to find new ones and use them. I 
> guess it is not our problem.

Building from a tarball is unaffected. Only those needing to run bootstrap.sh
are affected by the change.

Regards
Henrik



bootstrap.sh version lookup

2010-12-04 Thread Henrik Nordström
The recent upgrade of automake & autoconf on squid-cache.org again broke
bootstrap.sh.

I think it's best we make the squid-3 bootstrap.sh use whatever the system
has as default versions and not attempt searching. The version search is
mostly a legacy from when we depended on very specific autotools
versions, something neither Squid-3 nor even Squid-2 does these days.

Regards
Henrik



Re: Build failed in Hudson: 3.HEAD-i386-opensolaris-SunStudioCc #502

2010-12-04 Thread Henrik Nordström
fre 2010-12-03 klockan 19:11 +0200 skrev Tsantilas Christos:

> The only way I found to solve the above is to define C++ wrapper
> functions for X509_free, EVP_PKEY_free, BIG_NUM_free etc. (~10 wrapper
> functions)
> 
> Any opinion?

extern "C" in the DeAllocator declaration should be sufficient, maybe
"hidden" in a typedef for clarity.

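For illustration, a minimal sketch of that idea (the type and function names
here are made up; only the OpenSSL call is real):

    #include <openssl/x509.h>

    // give the deallocator type C linkage so plain C functions such as
    // X509_free match it without any C++ wrapper
    extern "C" {
        typedef void X509Deallocator(X509 *);
    }

    static void releaseCert(X509 *cert, X509Deallocator *dealloc)
    {
        if (cert)
            dealloc(cert);   // e.g. called as releaseCert(cert, X509_free)
    }
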
Regards
Henrik



Re: Squid 3.2 Cannot bind socket

2010-12-02 Thread Henrik Nordström
tor 2010-12-02 klockan 13:12 +0100 skrev Menil Jean-Philippe:

> I think, it's related to the prefix or the cppinit-basedir?

var/run is derived from localstatedir, which is derived from prefix.

If you set prefix to /usr then localstatedir should be /var, and you may
additionally need to fine-tune its sub-locations with other configure
options:

  with-pidfile
  with-logdir
  with-swapdir
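
For illustration, a possible invocation (the paths are examples only, and
option availability depends on the Squid version):

    ./configure --prefix=/usr --localstatedir=/var \
        --with-pidfile=/var/run/squid.pid \
        --with-logdir=/var/log/squid \
        --with-swapdir=/var/cache/squid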

Regards
Henrik



Re: hudsons hidden build errors

2010-12-01 Thread Henrik Nordström
mån 2010-11-29 klockan 18:58 +1300 skrev Amos Jeffries:

> Anyone with wheel access keen to delve into auto-tools dependency hell 
> again?

automake-1.11.1 has been installed. Hopefully that fixes the problem.

Regards
Henrik




Re: hudsons hidden build errors

2010-12-01 Thread Henrik Nordström
ons 2010-12-01 klockan 23:47 +1300 skrev Amos Jeffries:

> We support said OS because that is our packaging machine.

Well, FreeBSD is FreeBSD, but autotools is not part of FreeBSD itself; it comes
from FreeBSD ports, independent of the FreeBSD version.

Regards
Henrik



Re: [PATCH] Polish log formats

2010-11-30 Thread Henrik Nordström
tis 2010-11-30 klockan 03:25 +1300 skrev Amos Jeffries:

> So could you explain those condition please? It's not clear from the 
> code that they did anything beyond logging at the end of header parsing.
> Or do you mean that by being done there they logged multiple lines per 
> request?

Checking... it was some years since I looked at this, while doing the
log_format work.

Both log only if the corresponding header is present in the request. And yes,
this can be simulated by using an acl for the condition.

useragent_log translates to

acl has_useragent useragent .
log_format useragentlog 
access_log /path/to/file useragentlog has_useragent

and similar for referer_log using referer instead of useragent.
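
For illustration, a fleshed-out sketch of that emulation (the acl types, format
strings and paths below are illustrative rather than copied from the original
mail; check squid.conf.documented for the exact spellings in your version):

    acl has_useragent browser .
    logformat useragentlog %>a [%tl] "%{User-Agent}>h"
    access_log /var/log/squid/useragent.log useragentlog has_useragent

    acl has_referer referer_regex .
    logformat refererlog %ts.%03tu %>a %{Referer}>h %ru
    access_log /var/log/squid/referer.log refererlog has_referer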

Regards
Henrik



Re: [PATCH] Polish log formats

2010-11-29 Thread Henrik Nordström

> While I do not know anybody using the obsoleted or removed features, it 
> may be a good idea to warn about those logging changes on squid-users 
> and see if there are any reasonable objections.

referer_log is a little different. It can't be fully emulated by a custom log
format alone: the log format itself can be, but not the conditions on when to log.

Same applies to useragent_log.

The emulate_httpd_log directive is no problem to obsolete. The documentation has
been pretty clear on the current way of achieving this via the access_log
directive. Additionally, the default access_log has been wrongly defined for
some time, overriding emulate_httpd_log without anyone noticing.

Regards
Henrik


RE: [squid-users] Beta testers wanted for 3.2.0.1 - Changing 'workers' (from 1 to 2) is not supported and ignored

2010-11-27 Thread Henrik Nordström
fre 2010-11-26 klockan 21:08 + skrev Ming Fu:
> Ktrace showed that the bind failed because it tried to open a unix socket in
> /usr/local/squid/var/run and did not have permission. So it is easy
> to fix.
>
> After the permission was corrected, I ran into another problem; here is the log
> snip:
> 
> 2010/11/26 20:55:35 kid2| Starting Squid Cache version 3.2.0.3 for 
> amd64-unknown-freebsd8.1...
> 2010/11/26 20:55:35 kid3| Starting Squid Cache version 3.2.0.3 for 
> amd64-unknown-freebsd8.1...
> 2010/11/26 20:55:35 kid1| Starting Squid Cache version 3.2.0.3 for 
> amd64-unknown-freebsd8.1...
> 2010/11/26 20:55:35 kid3| Set Current Directory to /usr/local/squid/var/cache
> 2010/11/26 20:55:35 kid2| Set Current Directory to /usr/local/squid/var/cache
> 2010/11/26 20:55:35 kid1| Set Current Directory to /usr/local/squid/var/cache

Each worker needs its own cache location.

http://www.squid-cache.org/Versions/v3/3.2/RELEASENOTES.html#ss2.1
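
For illustration, a minimal sketch of giving each worker its own cache
directory with the 3.2 macro support (paths and sizes are examples only):

    workers 3
    cache_dir ufs /usr/local/squid/var/cache/${process_number} 1024 16 256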

Regards
Henrik



Re: Other proxy types (sock,voip,irc) in squid

2010-11-27 Thread Henrik Nordström
fre 2010-11-26 klockan 22:33 -0800 skrev Arthur Tumanyan:
> Thanks for the answer.
> I have an idea.
>  I want to add to squid support for other protocols and mysql_config
> support (config elements will be stored in a mysql db). All data about
> traffic will be stored in the mysql db. Another process/software will
> operate on that data. Squid would become a complete (or almost
> complete) solution for traffic accounting for most (small or medium)
> ISPs.

I would split that into two projects: one for config and one for logging.

The log part is better done as a log daemon. See log_file_daemon
directive.
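
For illustration, a sketch of the daemon-based logging setup (Squid-3.2-style
syntax assumed; the paths are examples). A custom daemon speaking the same
protocol on stdin could then write the records to MySQL instead of a file:

    logfile_daemon /usr/local/squid/libexec/log_file_daemon
    access_log daemon:/usr/local/squid/var/logs/access.log squid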

> About mirrors:
> As I said before, I intend to create a mirror for the AM zone, with your
> permission of course. But I don't know what I need to do for that. I already
> bought a hosting place with 1GB of space, and the www.seagull-home.net
> domain.

Instructions on how to set up mirrors are found at
http://www.squid-cache.org/Download/howto-mirror.html

Current disk requirements:

www: 2.5GB
ftp archive: 600MB
ftp current: 60MB

> about my subscribtion: 
> I have already subscribed to the squid-dev mailing list, but every time I'd
> post a message, I get the "Your post is still pending" message for a
> very long time. Even now, when you reply to me, that message is still
> active. What's wrong?

You look subscribed to me.

Next time you get such a message, please forward it to me including the full
message headers.

Regards
Henrik




Re: Other proxy types (sock,voip,irc) in squid

2010-11-26 Thread Henrik Nordström
fre 2010-11-26 klockan 11:26 -0800 skrev Arthur Tumanyan:
> Hi. I have a question about the protocols which squid supports. Can I add a sock or VOIP
> feature to squid (by adding some piece of code, for example), and if yes, how? I
> mean, where to dig? I need some support.
> And can squid theoretically log all incoming data in access.log (a very detailed log:
> in/out, portIn, portOut, serviceType/protoType, etc)?
> Thanks!

Squid is mainly an HTTP proxy. Adding other protocols would share very
little with the existing code.

Regarding access.log: yes, see log_format. But it logs only summary data
(once per request/session), not full payload data.

Regards
Henrik



Re: ext_edirectory_userip-2010-11-11.patch

2010-11-25 Thread Henrik Nordström
There was no attached patch or bugzilla reference #.

tis 2010-11-23 klockan 12:52 -0500 skrev Chad Naugle:
> I submitted this to bugzilla, but it hasn't yet been added to
> trunk.
> 
> * Currently working on re-writing some core functions, (ex.
> SplitString) will come in next patch.
> 
> 
> -
> Chad E. Naugle
> Tech Support II, x. 7981
> Travel Impressions, Ltd.
>  




Re: [PATCH] [RFC] custom error pages

2010-11-25 Thread Henrik Nordström
tor 2010-11-25 klockan 15:51 -0700 skrev Alex Rousskov:

> Opening files for each runtime error is pretty bad from performance 
> point of view, although we can hope that errors do not happen often 
> enough to cause problems in most setups. Preloading is a better 
> approach, IMO, but is outside this patch scope.

We could cache them as needed.

Regards
Henrik



Re: [PATCH] 3.2/3.HEAD: send HTTP/1.1 on CONNECT responses

2010-11-25 Thread Henrik Nordström
tis 2010-11-23 klockan 03:49 + skrev Amos Jeffries:
> 3.2 and later send HTTP/1.1 version on all regular response lines.
> 
> CONNECT seems to have been missed somehow. This corrects the omission so
> the hard-coded CONNECT reply strings send 1.1 as well.

Ouch, is that response hardcoded?

> I'm not certain, but this may explain some of the very rare CONNECT
> problems seen in 3.2+.

Like?

>   if its not too much trouble could you add this to the compliance testing
> queue to see if it fixes/breaks anything before it gets committed please?

It's a trivial change; there is not much it can break.

But even with the change it's a non-compliant response. It should minimally
include Date, and preferably Server as well.
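
For illustration, a reply along those lines (header values are examples only,
followed by an empty line ending the headers):

    HTTP/1.1 200 Connection established
    Date: Mon, 06 Dec 2010 21:00:00 GMT
    Server: squid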

Regards
Henrik



Re: [PATCH] [RFC] custom error pages

2010-11-21 Thread Henrik Nordström
lör 2010-11-20 klockan 18:58 +1300 skrev Amos Jeffries:

> To simplify the error page loading a symlink is added from the templated 
> /usr/share/squid/errors/local to that /etc/squid/errors.d folder and 
> checked just before loading the errors/templates/ backup.

Why? I find having this symlink confusing, and yet another potential
source of configuration errors.

Also is errors.d language sensitive? If so, is there a "default"
language that overrides all languages?

Regards
Henrik



Re: Failing build on opensolaris

2010-11-03 Thread Henrik Nordström
ons 2010-11-03 klockan 17:08 +0200 skrev Tsantilas Christos:

> Yes, events use cbdataReference/cbdataReferenceDone, but if we do not
> lock the cbdata before passing it to an event, the cbdata will be deleted after
> the event is done (when cbdataReferenceDone is called).

Only if someone calls cbdataFree on the thing, which would invalidate
the cbdata object and free it completely when the last reference is
released.

Regards
Henrik



Re: patch for configure option --with-swapdir

2010-10-30 Thread Henrik Nordström
lör 2010-10-30 klockan 02:06 +0200 skrev Christian:
> Hi Amos,
> 
> just packaging 3.1.9 and miss the
> 
> --with-swapdir

Your patch is scheduled for Squid-3.2.0.3 and later. I do not think it
will get merged back to 3.1 at this time, with 3.2 just around the
corner. Sorry if communication has been unclear on this.

The patch as merged into 3.2:

http://www.squid-cache.org/Versions/v3/3.2/changesets/squid-3.2-10762.patch

Regards
Henrik



Re: Failing build on opensolaris

2010-10-29 Thread Henrik Nordström
tor 2010-10-28 klockan 22:26 +0200 skrev Kinkie:

> Well, my aim is a very modest "let the damn thing build".
> I do not yet understand the intricacies of cbdata, and thus I am not
> able to understand when it is abused and when the abuse is benign.

There are two cbdata roles:

a) Object owner, using a "plain pointer" and freeing the object with
cbdataFree when done.

b) Other code needing to do a callback to the object owner, passing the
object as owner state info. This uses cbdataReference to track the object
and cbdataReferenceValid & cbdataReferenceDone (or, usually preferred, the
combined cbdataReferenceValidDone) when making the callback.
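
For illustration, a minimal sketch of the two roles (the class and callback
names are hypothetical; only the cbdata calls named above are assumed):

    // role (a): the owner holds a plain pointer and frees with cbdataFree
    Job *job = new Job();      // some cbdata-enabled class
    // ... later, when the owner is done with it ...
    cbdataFree(job);           // invalidates the object for all reference holders

    // role (b): other code keeps only an opaque reference for a later callback
    void *ref = cbdataReference(job);
    // ... later, when it is time to call back ...
    void *owner = NULL;
    if (cbdataReferenceValidDone(ref, &owner))
        someCallback(owner);   // owner still valid; hand its state back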


Different cases of abuse:

* Use of the return value of cbdataReference as a pointer to some
specific type of object. The API intention is to consider it an anonymous
"void *" where the actual data type is only known by the object owner.

* Use of cbdataReference as a refcount substitute. (We did not have
refcounting when cbdata was added.)

* No clear separation between "owner" and "other code needing to do a
callback".

* Direct uses of cbdataInternal* calls.

* Use of cbdata as a simple way to set up pooled allocation even when
the object is never intended to be used in callbacks.


> > Keep in mind that a lot of cbdata-using code violates these very good rules,

I would not say "a lot". There are some abuses, but most of the code uses
it right, at least as of the last time I audited cbdata usage.

Regards
Henrik



Re: Failing build on opensolaris

2010-10-28 Thread Henrik Nordström
ons 2010-10-27 klockan 20:29 +0200 skrev Kinkie:
> The build is failing on
> 
> ../../src/comm.cc: In member function `void
> ClientInfo::setWriteLimiter(int, double, double)':
> ../../src/comm.cc:2156: warning: right-hand operand of comma has no effect
> 
> That line is
> cbdataReference(quotaQueue);
> 
> where  in cbdata.h
> #define cbdataReference(var)  (cbdataInternalLock(var), var)
> 
> Fixing this would seem simple, but it probably breaks encapsulation
> (that "Internal" seems suspicious to me).
> Anyone willing to take on this, or to share a hint on how to do it?


The compiler is right in erroring out on this. The result of
cbdataReference MUST NOT be thrown away. It's your reference to the
object and should later be passed to cbdataReferenceDone.

It's also usable for accessing the object, but it's not really the
intention that holders of cbdata references access the contents of
the object; they should just pass it back in a callback.
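
For illustration, a sketch of the intended pattern for that call site (the
member holding the reference is hypothetical):

    // keep the value returned by cbdataReference ...
    quotaQueueReference = cbdataReference(quotaQueue);
    // ... and hand it back when finished with it
    cbdataReferenceDone(quotaQueueReference);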

Regards
Henrik



Re: [PATCH] Solaris /dev/poll support for Squid 3 (how can I contribute)

2010-10-13 Thread Henrik Nordström
ons 2010-10-13 klockan 22:25 + skrev Amos Jeffries:

> 
> +1 from me with merge tweaks.
> 
> Unless anyone has objections I will commit with tweaks at the next
> opportunity.

No objection from me. But I have not reviewed the changes outside
comm_devpoll.cc.

Regards
Henrik



Re: [MERGE] branch prediction hints

2010-10-13 Thread Henrik Nordström
ons 2010-10-13 klockan 20:50 +0200 skrev Kinkie:
> I agree. That's why I propose to only use it - if we do use it for
> anything - for singletons, where it should be a no-brainer.

Have you checked if it makes any difference in the proposed use you see
today?

Regards
Henrik



Re: [PATCH] Solaris /dev/poll support for Squid 3 (how can I contribute)

2010-10-13 Thread Henrik Nordström
ons 2010-10-13 klockan 15:40 +0100 skrev Peter Payne:
> Hello Amos,
> 
> apologies to the dev list for what must appear to be spamming.

No apologies needed. We are all for release early and often, and
discussing code is what this list is for.

Regards
Henrik



Re: [MERGE] branch prediction hints

2010-10-13 Thread Henrik Nordström
tis 2010-10-12 klockan 17:47 +0200 skrev Kinkie:
> Hi all,
>   this patch-let implements a GCC feature to hint the branch
> prediction algorithms about the likely outcome of a test. This is useful
> to optimize the case of singleton patterns (e.g.
> CacheManager::GetInstance).
> This implements the likely() and unlikely() macros.

My experience is that unless one is very careful, the use of these hints
often backfires after the code evolves a bit.

The compiler is generally pretty good at making the right choice, and the
runtime profiling support can be used to fill in the missing spots
automatically, with no risk of decaying over time.

Yes, a profiling-based build requires a bit of effort in each build, as
it requires you to first make a profiling build, put some reasonable
workload on it, and then build again providing the profiling data as
input to the compiler.
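
For illustration, the usual GCC flow for that (a sketch; the exact flags and
make variables depend on the build setup):

    # 1. build instrumented, 2. run a representative workload, 3. rebuild with the data
    make CXXFLAGS="-O2 -fprofile-generate"
    # ... exercise this build with real traffic ...
    make clean
    make CXXFLAGS="-O2 -fprofile-use"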

Regards
Henrik




Re: debugging Squid ICAP interface

2010-10-13 Thread Henrik Nordström
tis 2010-10-12 klockan 14:51 -0300 skrev Marcus Kool:

> I have various observations and questions about the Squid ICAP interface
> and would like to discuss these with the persons who wrote or know much about
> the ICAP client part of Squid.
> I would like to know with whom I can discuss this and which mailing list to use.

This list is the right place for such discussions.

Regards
Henrik



Re: REWRITE directive

2010-09-29 Thread Henrik Nordström
mån 2010-09-27 klockan 20:24 -0300 skrev Miguel Castellanos:

> I really missed it; external redirect programs are not good for high
> performance servers.

A url rewrite program using the concurrent protocol performs pretty well
for high performance servers in my experience.

The old non-concurrent helper protocol, however, is a major bottleneck
even at moderate loads.
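
For illustration, a sketch of enabling the concurrent protocol (the helper path
is hypothetical, and directive spellings vary a little between Squid versions):

    url_rewrite_program /usr/local/squid/libexec/myrewriter
    url_rewrite_children 20
    url_rewrite_concurrency 100   # > 0 makes Squid prepend a channel-ID to each request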

Regards
Henrik



bzr usage update

2010-09-20 Thread Henrik Nordström
A small bzr usage update. When committing bugfixes, please use "bzr
commit --fixes squid:<bug number>".

You need a little configuration to enable the squid bugtracker. In
$HOME/.bazaar/bazaar.conf add

[DEFAULT]
bugzilla_squid_url = http://bugs.squid-cache.org

You can add as many --fixes arguments as needed. For example, if working
on a bug registered in Launchpad then use --fixes lp:<bug number> and
Launchpad will automatically close the bug when the change gets pushed
there.
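
For illustration, a possible invocation (the bug number is made up):

    bzr commit --fixes squid:1234 -m "Describe the fix here"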


Regards
Henrik



Re: HTTP Compliance: do not remove ETag header from partial responses

2010-09-20 Thread Henrik Nordström
mån 2010-09-20 klockan 10:26 -0600 skrev Alex Rousskov:
> HTTP Compliance: do not remove ETag header from partial responses
> 
> RFC 2616 section 10.2.7 says that partial responses MUST include ETag
> header, if it would have been sent in a 200 response to the same request.

Oh! Is that still in Squid-3?

Kind of related to this, we should also use If-None-Match when validating
an entry with a known ETag.
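
For illustration, such a validation request would look like this (values are
examples only):

    GET /object HTTP/1.1
    Host: origin.example.com
    If-None-Match: "xyzzy"

A 304 Not Modified reply then lets the cached entry be reused as-is.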

Regards
Henrik



Re: [RFC] helper API

2010-09-19 Thread Henrik Nordström
fre 2010-09-17 klockan 12:20 -0600 skrev Alex Rousskov:

> If needed, the Framework can expose the descriptor or class for writing 
> responses but it will become more complex and rigid if we allow such raw 
> access. I do not know whether it is needed. I am not a helper expert.

It's not needed. But the API needs to provide some kind of handle which
abstracts the old/concurrent details ("channel").

> I do not know much about auth and external ACL needs, but if they are 
> also hungry for more info, the same argument applies.

auth is intentionally stripped down from providing more info.

external acl by definition provides a lot of information, but only the
specific information requested by the acl definition.

> Again, compatibility with eCAP does _not_ imply that helpers become 
> embedded. It just leaves that possibility open in the future, without 
> rewriting all helpers again.

url rewriting is a prime candidate to be replaced by eCAP I think.

This said, the main benefit of the current helper interface is its
simplicity and language independence. There are various helpers in C,
C++, perl, python, shell script, php and probably a number of other
languages as well. I do not see C or C++ as the primary language for
helper development; rather perl & python today. Many of the existing
helpers would benefit from being rewritten in perl or python.

I do not see it as a desirable goal to encourage others to write more
helpers in C or C++. It is generally better if they use a scripting language
such as perl or python, as there is much less risk of doing something stupid,
it is easier to maintain, and it easily integrates with pretty much anything,
which is what helpers usually are about.

Regards
Henrik




Re: EPSV/EPRT support

2010-09-15 Thread Henrik Nordström
tor 2010-09-16 klockan 02:11 +1200 skrev Amos Jeffries:

> This is the first issue with them in some months (last ones were me 
> buggering up the v4/v6 connection types used).

Had to ask a user to set ftp_epsv off some weeks ago due to a broken
firewall at the requested server.

And very very many NAT devices & home/small business firewalls will fail
badly on any use of EPRT.

Regards
Henrik



EPSV/EPRT support

2010-09-15 Thread Henrik Nordström
tis 2010-09-14 klockan 01:23 + skrev Amos Jeffries:

>  * Squid should be starting with EPSV not EPRT anyway. Check that your
> ftp_pasv directive is set to "on" (default), or remove it from the config
> altogether.

Shouldn't we start with PASV if it's an IPv4 connection?

There is no big need for EPSV/EPRT in IPv4, and many NATs and Firewalls
have issues tracking the E* requests/responses.

Sure, they are designed to actually be easier for NATs and Firewalls,
and they are, but things do fail if the directives are not understood.

The old PASV/PORT commands are well known for ages, and supported by
virtually every device out there.

This problem is seen on both client and server sides, for both EPSV and
EPRT.

Yes, it's somewhat depressing to still frequently see these issues when
EPSV and EPRT have been official standards-track additions to the FTP
protocol for over a decade (1998), but that's the reality of the Internet.

Regards
Henrik



Re: How to ignore query terms for store key?

2010-09-08 Thread Henrik Nordström
tis 2010-09-07 klockan 18:59 -0700 skrev Guy Bashkansky:

> /usr/local/squid/bin/strip-query.pl
> #!/usr/local/bin/perl -Tw
> $| = 1; while(<>) { chomp; s/\?\S*//; print; } ### my strip query test

If you chomp the newline then you need to add it back when printing the
result.

Regards
Henrik



Re: How to ignore query terms for store key?

2010-09-07 Thread Henrik Nordström
fre 2010-09-03 klockan 18:03 -0700 skrev Guy Bashkansky:
> Is there a way to ignore URI query terms when forming store keys?
> Maybe some rule or extension?

http://wiki.squid-cache.org/Features/StoreUrlRewrite

This needs to be implemented for Squid-3 as well; it is currently a
Squid-2-only feature.
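
For illustration, the Squid-2.7 side looks roughly like this (the helper path
and acl are examples; see the wiki page above for the details):

    storeurl_rewrite_program /usr/local/squid/libexec/store_rewriter
    acl store_rewrite_list urlpath_regex \?
    storeurl_access allow store_rewrite_list
    storeurl_access deny all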

Regards
Henrik



Re: [PREVIEW] Dechunk incoming requests as needed and pipeline them to the server side.

2010-09-04 Thread Henrik Nordström
Looks like you are making good progress in the right direction. Good work!

- Ursprungsmeddelande -
> Dechunk incoming requests as needed and pipeline them to the server side.
> 
> The server side will eventually either chunk the request or fail. That 
> code is not ready yet and is not a part of this patch. This patch is
> not enough because the dechunked requests end up being sent without 
> chunking and without Content-Length. However, these client-side changes 
> are ready and seem to be working. It may be easier to review them now, 
> without the server-side code.
> 
> Details are below.
> 
> 
> Removed clientIsRequestBodyValid() as unused. It was called with a 
> content-length>0 precondition that made the function always return true.
> 
> Removed old dechunking hack that was trying to buffer the entire
> request body, pretending that we are still reading the headers. Adjusted
> related code. More work may be needed to identify client-side code that 
> assumes the request size is always known.
> 
> Removed ConnStateData::bodySizeLeft() because we do not always know how 
> much body is left to read -- chunked requests do not have known sizes 
> until we read the last-chunk. Moreover, it was possibly used wrong 
> because sometimes we want to know whether we want to comm_read more body 
> bytes and sometimes we want to know whether we want to "produce" more 
> body bytes (i.e., copy already read bytes into the BodyPipe buffer, 
> which can get full).
> 
> Added ConnStateData::mayNeedToReadMoreBody() to replace 
> conn->bodySizeLeft() with something more usable and precise.
> 
> Removed my wrong XXX related to closing after initiateClose.
> 
> Removed my(?) XXX related to endless chunked requests. There is nothing 
> special about them, I guess, as a non-chunked request can be virtually 
> endless as well if it has a huge Content-Length value.
> 
> Use commIsHalfClosed() instead of fd_table[fd].flags.socket_eof for 
> consistency with other client-side code and to improve readability. I 
> think these should return the same value in our context but I am not
> sure.
> 
> Correctly handle identity encoding. TODO: double check that it is still 
> in the HTTP standard.
> 
> Fixed HttpStateData::doneSendingRequestBody to call its parent. I am not 
> sure it helps with correctly processing transactions, but the parent 
> method was designed to be called, and calling it make the transaction 
> state more clear.
> 
> 
> Thank you,
> 
> Alex.
> 
   dechunk-requests-t0.patch



Re: [Bug 3034] HTTP/1.0 chunked replies break Firefox

2010-09-03 Thread Henrik Nordström
ons 2010-09-01 klockan 21:52 -0600 skrev Alex Rousskov:

> An alternative is to add a fast ACL option to be able to enable HTTP/1.0 
> chunked responses to selected user agents (none by default).

+1 on that from me. It allows the chunked response support to evolve
experimentally until we feel comfortable announcing HTTP/1.1.

Use of chunked in HTTP/1.0 is purely experimental. 

> A yet another alternative is to change chunked response version to 
> HTTP/1.1. This would be very easy to implement, but may confuse clients 
> that try to track the proxy version (I suspect Opera might be doing 
> that, for example).

We have discussed that a lot already.

I don't mind seeing an option for setting the HTTP version of responses.
A simple per http_port "on/off" kind of directive as in Squid-2.7 is
quite sufficient.

If this is added then the chunked response version check discussed
earlier is needed, to only chunk the response if our response version is
HTTP/1.1.


Once we do have acceptable HTTP/1.1 level compliance then the need for
this tuning option should go away, making the default HTTP/1.1. I doubt
there will be need for HTTP/1.0 downgrade once we can forward HTTP/1.1
properly.

Regards
Henrik



Re: [Bug 3034] HTTP/1.0 chunked replies break Firefox

2010-09-03 Thread Henrik Nordström
ons 2010-09-01 klockan 22:54 + skrev Amos Jeffries:

> If the de-chunker could be converted to not needing the entire object
> before de-chunking that would allow us to avoid half the workaround and
> extra options being proposed to get the other bits working.

You either dechunk or reject the request. There is no middle ground in
request forwarding.

> Can we just de-chunk requests over a certain size (8KB,16KB,?) into old
> fashioned unknown-length request one chunk at a time sending out the data
> and pass FIN/RST back at the end like would have happened previously?

No. HTTP does not work like that. HTTP requests have only two possible
message delimiters:

a) Content-Length with a known length.
b) Chunked encoding.

TCP FIN is only used on replies, not requests.

HTTP is very clear on the standard order of operation when forwarding
requests: reject a chunked request with 411 Length Required if it is not
known that the request can safely be forwarded using chunked encoding,
else forward it using chunked encoding.

Dechunking is a deviation from the protocol specifications and not how
HTTP/1.1 is supposed to operate. It is allowed, but far from the intended
default mode of operation. In HTTP/1.1 everyone is supposed to support
chunked encoding.

Also, when HTTP/1.1 (2616) operates the way it's designed (Expect:
100-continue), then you won't even get the request data until a 100
Continue has been seen, and having a proxy send 100 Continue only so
that it can dechunk the request is again not how 100 Continue is meant
to be used. 100 Continue is really meant to be end-to-end, avoiding
transmitting the request body until it's known that the target server wants
to see it. It's not forbidden to use it hop by hop, but the logic for
forwarding 100 Continue responses and/or the request body gets a little
muddy when intermediaries insert 100 Continue messages.


> As I see it the collateral damage of lost connections is unpleasant but no 
> worse
> than in HTTP/1.0.

?

Regards
Henrik



Re: Compliance: reply with 400 (Bad Request) if request header is too big

2010-08-30 Thread Henrik Nordström

- Ursprungsmeddelande -
> On 08/30/2010 01:44 AM, Henrik Nordström wrote:
> > The added comment applies to the whole 6xx class, not just 601.
> 
> We have only two 6xx constants. Are you saying HTTP_INVALID_HEADER (600) 
> is never sent to the client? I will adjust the comment if that is the
> case.

Correct. A bad header when parsing a request gives 400 Bad Request; a bad header
when parsing a response gives 504 (bad gateway).

Regards
Henrik


Re: Compliance: reply with 400 (Bad Request) if request header is too big

2010-08-30 Thread Henrik Nordström
The added comment applies to the whole 6xx class, not just 601.

Note: Not entirely sure we need these as internal http status codes in
squid-3, but that's separate from this change.

sön 2010-08-29 klockan 14:57 -0600 skrev Alex Rousskov:
>  HTTP_INSUFFICIENT_STORAGE = 507,/**< RFC2518 section 10.6 */
>  HTTP_INVALID_HEADER = 600,  /**< Squid header parsing error */
> -HTTP_HEADER_TOO_LARGE = 601 /* Header too large to process */
> +HTTP_HEADER_TOO_LARGE = 601 /**< Header too large to process. Used
> +                                 internally only, replying to client
> +                                 with HTTP_BAD_REQUEST instead. */



Re: GPLv3 license

2010-08-29 Thread Henrik Nordström
lör 2010-08-28 klockan 14:26 +1200 skrev Amos Jeffries:

> The author confirmed in bugzilla that he was happy with it being labeled 
> GPLv2 and changed the COPYING file over before I merged. 
> http://bugs.squid-cache.org/show_bug.cgi?id=2905
> 
> We seem to have missed some of the GPL references in the update. Sorry.
> 
> Are there any other requirements that you know of apart from switching 
> the 3 to a 2 in those statements?

I don't think so.

But please copy the confirmation message and reference the bugzilla
entry in the commit for copyright tracking purposes.

Regards
Henrik



Re: CrossCompile

2010-08-29 Thread Henrik Nordström
sön 2010-08-29 klockan 15:17 +0200 skrev kromo...@user-helfen-usern.de:
> No need to, I think. It's like cross compiling python. Maybe it's
> possible to first create cf_gen with the host compiler and set a variable
> for the make command which says which cf_gen to use. So, the host cf_gen could
> be renamed to hostcf_gen.
>
> cf_gen='hostcf_gen' make

The problem here is that cf_gen gets a lot of configure details compiled in.
The above only works if the configure arguments when you compile cf_gen
are very close to what you use for building squid.

But it's also pretty much what the cross-compile setups I have seen for
Squid are doing.

Regards
Henrik



Re: [RFC] origin peer type

2010-08-29 Thread Henrik Nordström
sön 2010-08-29 klockan 11:16 -0600 skrev Alex Rousskov:

> It may be confusing to have origin type peer and originserver flag at 
> the same time. Please consider either renaming or documenting to resolve 
> the naming conflict.

Should be easy to solve by documenting the option reasonably.

Regards
Henrik



Re: [RFC] origin peer type

2010-08-29 Thread Henrik Nordström
sön 2010-08-29 klockan 12:02 +1200 skrev Amos Jeffries:
> Right now we have siblings, parents, and multicast peers with their own
> types. But origin servers require both the parent and originserver flags to be set.

It has always been the intention to have an origin type peer; it just hasn't
been implemented yet. But please keep the originserver flag, as there are some
configurations of sibling originserver that do make sense.

Regards
Henrik



Re: CrossCompile

2010-08-29 Thread Henrik Nordström
sön 2010-08-29 klockan 18:59 +1200 skrev Amos Jeffries:

> I *think* this will remove the need for such pre-seeding.

Cross-compiles should always have all run tests preconfigured for the
target, or default to a "probe at runtime" result. The tests
are there for a reason; if they were not, we could always use the default.

> The cf_gen remains a problem though. I really think we should go back to 
> having that as a perl script. All its doing is text-manipulation or 
> arrays during build and the code needed to switch build-hosts underneath 
> automake so it can run is not trivial.

Go back to? It's always been a compiled program.

Regards
Henrik





Re: CrossCompile

2010-08-28 Thread Henrik Nordström
How I have done it in the past is to use a configure cache, presetting
the runtime tests appropriately for the target.

But this requires that all AC_TRY_RUN checks are made cacheable.
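
For illustration, a sketch of seeding such a cache (the cache variable and host
triplet are examples only):

    echo 'ac_cv_func_va_copy=yes' >> config.cache
    ./configure --host=arm-linux-gnueabi --cache-file=config.cache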


ons 2010-08-25 klockan 19:16 -0600 skrev Alex Rousskov:
> Forwarding from info@ to squid-dev@:
> 
> On 08/25/2010 12:12 PM, kromo...@user-helfen-usern.de wrote:
> > Hi,
> >
> > I need to cross compile squid for http://squidsheep.org but I only get
> > message:
> >
> > configure: error: cannot run test program while cross compiling
> >
> > Is there a way to crosscompile 3.1.6, without creating a complete
> > buildsystem onto a planned tiny linux?
> >
> > Best regards
> > Bob Kromonos Achten
> >




GPLv3 license

2010-08-27 Thread Henrik Nordström
I just noticed that the external_acl/eDirectory_userip helper is licensed GPLv3 or
later. This is inconsistent with the rest of the code, which is GPLv2 or
later and is also what we announce as the main license for the distribution
as a whole. I see a risk here that eDirectory_userip gets mislabeled
as having a GPLv2-or-later license like the rest.

I do not think moving Squid as such to GPLv3 or later is an appropriate
solution.

Regards
Henrik



Re: [PATCH] Optimize HttpVersion comparison

2010-08-24 Thread Henrik Nordström
tis 2010-08-24 klockan 19:15 -0600 skrev Alex Rousskov:

> The current header sequence (somewhere) violates the squid-then-sys rule 
> and causes the problem. A header sequence that follows the 
> squid-then-sys rule will not cause the problem. I suspect such sequence 
> does not exist (beyond one file scope) because some Squid headers have 
> to include system headers.

And I say the opposite. The sequence that can work is sys headers first,
then squid headers with #undef. The keywords are used both in the squid
header and in squid code.

Squid headers first, then sys headers, renders the squid code using these
members directly broken.
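
For illustration, a sketch of the ordering being argued for (header and type
names are from the Squid-3 tree as far as I recall; treat them as illustrative):

    #include <sys/types.h>     // system header that may define major()/minor() as macros
    #include "HttpVersion.h"   // squid header doing "#undef major" / "#undef minor"

    static bool isHttp11(const HttpVersion &v)
    {
        return v.major == 1 && v.minor == 1;   // direct member access works after the #undef
    }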

Regards
Henrik



Re: [PATCH] Optimize HttpVersion comparison

2010-08-24 Thread Henrik Nordström
tis 2010-08-24 klockan 17:21 -0600 skrev Alex Rousskov:

> Or, if Amos' rules are followed and system headers are always included 
> _after_ Squid ones (the problem would not even exist in this case).

Would it? Most if not all Squid headers also depend on numerous system
headers. That rule is mostly there to detect when one of our headers is
missing a required #include of a system header.

It would still exist if any code tries to access the member variables
directly. Code comes after the headers, both Squid and system.

Regards
Henrik



Re: 1xx response forwarding and ignore_expect_100

2010-08-24 Thread Henrik Nordström
tis 2010-08-24 klockan 17:16 -0600 skrev Alex Rousskov:

> RFC 2616 implies that we must forward 100-continue to HTTP/1.0 clients 
> that send Expect: 100-continue header

I know, and something I disagree with. HTTP/1.0 says:

9.1  Informational 1xx

   This class of status code indicates a provisional response,
   consisting only of the Status-Line and optional headers, and is
   terminated by an empty line. HTTP/1.0 does not define any 1xx status
   codes and they are not a valid response to a HTTP/1.0 request.
   However, they may be useful for experimental applications which are
   outside the scope of this specification.

Plus 7.2  Entity Body

   All 1xx (informational), 204 (no content), and
   304 (not modified) responses must not include a body.

which makes 1xx forwarding via a HTTP/1.0 proxy practically impossible.

At the same time HTTP/1.0 says nothing about Expect or Via.

The result is very likely a breakdown if there is an HTTP/1.0 proxy in the
path, as this to the HTTP/1.1 server is indistinguishable from a directly
connected HTTP/1.0 client using Expect: 100-continue.

Not sure we have discussed this on HTTPbis. Dropped a mail there now to
discuss this.

Regards
Henrik



Re: 1xx response forwarding and ignore_expect_100

2010-08-24 Thread Henrik Nordström
mån 2010-08-23 klockan 15:18 -0600 skrev Alex Rousskov:

> drop_expect_100 on|off
> 
>  but still send an Expect: 100-continue request. As a side effect,
>  it will prevent forwarding of 100 (Continue) control messages to
>  HTTP/1.0 clients that send Expect: 100-continue headers.

I do not think we should ever forward 100-continue to HTTP/1.0 clients,
even if there was an Expect: 100-continue header. Doing so just asks for
trouble. Just see our own HTTP/1.0 history of handling 100 responses.

Regards
Henrik



Re: auth_param ntlm keep_alive interaction with new http/1.1 keepalive behaviour

2010-08-24 Thread Henrik Nordström
tis 2010-08-24 klockan 10:17 +1000 skrev Stephen Thorne:

> But the situation I am experiencing is after a rejected authentication 
> attempt.

Squid does not consider the two cases much different.

But yes, it's generally a bad idea to keep the connection open when
trying to renegotiate NTLM, much more so than on the initial negotiation
to use NTLM.

FWIW, many browsers will give you multiple auth popups even when using
Basic auth. This can easily be triggered if you visit a page with many
inlined/embedded objects where the page body is cached or does not require
auth but the inlined/embedded objects are not cached and require auth.

Regards
Henrik



Re: [PATCH] Optimize HttpVersion comparison

2010-08-24 Thread Henrik Nordström
mån 2010-08-23 klockan 21:44 -0600 skrev Alex Rousskov:

> I am not going to commit this optimization until there is consensus on 
> how to handle the major/minor naming conflict with GCC.

The #undef should be fine, as long as we do it after including any
system headers which may depend on those macros.

If not, rename them?

And perhaps get rid of most if not all direct member accesses isolating
the problem?

Regards
Henrik



Re: [MERGE] Clean up htcp cache_peer options collapsing them into a single option with arguments

2010-08-24 Thread Henrik Nordström
mån 2010-08-23 klockan 01:17 +1200 skrev Amos Jeffries:

> Updated version of Henriks patch. (why did it not get committed last 
> year when approved?)

Because I forgot?

> * parser bug fixed to handle a list of exactly one parameter without 
> trailing comma (which the original would call bungled).
> 
> * special parse case for htcp-oldsquid fully combined with new parser.
> 
> * alters the cachemgr config dump to show the new syntax.
> 
> Other than parse no operational changes. Fully backward-compatible and 
> tested.

Thanks!

Regards
Henrik



Re: [PATCH] Compliance: rename Trailers header to Trailer everywhere.

2010-08-24 Thread Henrik Nordström
tis 2010-08-24 klockan 23:43 +1200 skrev Amos Jeffries:

> Given that Squid releases have been emitting the wrong header text for 
> at least 4 years do you not think we should retain recognition of the 
> incorrect name and upgrade to the correct one?

No. We have not been sending any Trailers headers; we have only been stripping
the wrong header.

> This might be accomplished by having two texts pointing to HDR_TRAILER 
> in src/HttpHeader.cc chunk @106:
> 
> -{"Trailers", HDR_TRAILERS, ftStr},
> +{"Trailers", HDR_TRAILER, ftStr},
> +{"Trailer", HDR_TRAILER, ftStr},

No. Trailers != Trailer

Regards
Henrik




Re: perhaps OT: problem compiling 3.1.6 on SLES10-SP3

2010-08-23 Thread Henrik Nordström
mån 2010-08-23 klockan 16:42 +0200 skrev Christian:

> maintaining the squid3 RPM on build.openuse.org.
> 3.1.4 was the last version I was able to compile without problems.
> For 3.1.5 I needed to apply a small patch (squid-bootstrap.patch);
> since 3.1.6 I am not able to compile it for SLES10-SP3.

Why are you autotool-bootstrapping the source tree? The distributed
tarball is already bootstrapped and ready for building. There is no need
to run bootstrap.sh unless you are patching configure.in or Makefile.am.

The LIBTOOL / AC_PROG_LIBTOOLS errors you are seeing on bootstrap are due
to the source tree now needing Libtool 2.x for bootstrapping. Libtool
2.x uses slightly different autoconf syntax compared to Libtool 1.x.

> In file included from ../libltdl/ltdl.h:37,
>  from LoadableModule.cc:10:
> ../libltdl/libltdl/lt_error.h:35:31: error: libltdl/lt_system.h: No such
> file or directory

I also saw this a couple of days ago when the libtool-ltdl-devel package
was missing on one of my Fedora boxes. I have not yet looked into why this
happens; it's supposed to work fine even without a system copy of ltdl.

From your output it looks like the culprit is that -I $top/libltdl is
missing from the compiler command. Try using the (new) configure flags
for forcing the internal libltdl to be used. Also try building without
overriding CFLAGS & CXXFLAGS, in case configure adds it to the wrong
variable.

Regards
Henrik



Re: new/delete overloading, why?

2010-08-22 Thread Henrik Nordström
sön 2010-08-22 klockan 17:18 +1200 skrev Amos Jeffries:

> We should add a small compiler unit-test into configure for a while 
> before removing either way. No guess work then.

Do you mean a unit-test to test if the compiler warns about free()
abuses on data allocated by new?

Regards
Henrik



Re: RFC-2640 support in Squid

2010-08-22 Thread Henrik Nordström
tis 2010-08-17 klockan 17:55 +0400 skrev Valery Savchuk:

> I've made some changes in src/ftp.cc for partial support of RFC-2640
> (Internationalization of FTP).
>
> This allows correctly seeing and using ftp directories/files with
> national characters, if the ftp server supports RFC-2640.

Sounds great!

> What I have to do to include my changes to project ?

To get your changes included, the first step is to submit them for review
to squid-dev@squid-cache.org, preferably as a unified diff:

  diff -ru squid-x.y.z squid-x.y.z-modified

Remember to review your own changes before submitting them, making sure
that the diff only contains the relevant changes and no other unrelated
stuff.

Regards
Henrik



Re: [MERGE] Initial netfilter mark patch for comment

2010-08-21 Thread Henrik Nordström
lör 2010-08-21 klockan 23:41 +0100 skrev Andrew Beverley:

> I have documented all the functions and class data members. Could you
> clarify whether *every* variable should be documented with doxygen
> comments (including short-lived temporary ones within functions), or
> just those that are part of classes/structs?

Just classes/structs.

Temporary variables only if their use may not be obvious to someone else
reading your code; but preferably write the code in such a shape that the
variable use is obvious.

> For example, should 'tos' in the function below have doxygen comments?

No need for that imho.

Regards
Henrik



Re: compat/unsafe.h

2010-08-21 Thread Henrik Nordström
lör 2010-08-21 klockan 20:07 +1200 skrev Amos Jeffries:

> IMO some of them such as the malloc/calloc/free which only force a 
> xfoo() version internal to Squid to be hard-coded should be done with a 
> real symbol swap-in in the relevant header files. That way the code can 
> go to using malloc/calloc/free and our custom wrappers plug-in silently 
> to src/ code where appropriate.

Not entirely sure what you mean. If you mean that free() should silently
redirect to xfree() in src/ then I disagree. The two have slightly
different usage.

> Others like sprintf which are still actually enforcing non-use of unsafe 
> functions should stay.

Many compilers and most auditing tools barf on sprintf etc. these days.
Not sure why gcc does not.

Regards
Henrik




Re: new/delete overloading, why?

2010-08-21 Thread Henrik Nordström
lör 2010-08-21 klockan 20:26 +1200 skrev Amos Jeffries:
> Henrik Nordström wrote:
> > lör 2010-08-21 klockan 07:02 +1200 skrev Robert Collins:
> > 
> >> it was to stop crashes with code that had been cast and was freed with
> >> xfree(); if you don't alloc with a matching allocator, and the
> >> platform has a different default new - *boom*.
> > 
> > Allocating with new and freeing with free() is a coding error in all
> > books.
> > 
> >> There may be nothing like that left to worry about now.
> > 
> > I sure hope not.
> > 
> 
> valgrind reports it as an error. I think I recall seeing gcc 4.3+ report 
> it as a warning which we catch with -Werror. The older ones (ala gcc 2.* 
> and 3.* did not AFAIK, thus the high potential for flashing lights and 
> sound effects).

So you agree that there should be nothing like that left to worry
about?

Regards
Henrik



Re: [PREVIEW] 1xx response forwarding

2010-08-20 Thread Henrik Nordström
fre 2010-08-20 klockan 13:00 -0600 skrev Alex Rousskov:
> On 08/20/2010 09:26 AM, Henrik Nordström wrote:
> > See RFC on use and meaning of HTTP version numbers.
> 
> The only relevant RFC text I can find is an informal discussion that 
> HTTP version is tied to a "message sender", an undefined concept. 
> However, even if we replace "message sender" with "client or server", my 
> assertion that HTTP does not guarantee that one host:port corresponds to 
> one "client or server" appears to be valid.

RFC 2145 section 2.3 Which version number to send in a message

   An HTTP client SHOULD send a request version equal to the highest
   version for which the client is at least conditionally compliant

   An HTTP server SHOULD send a response version equal to the highest
   version for which the server is at least conditionally compliant

   An HTTP server MAY send a lower response version, if it is known or
   suspected that the client incorrectly implements the HTTP
   specification, but this should not be the default, and this SHOULD
   NOT be done if the request version is HTTP/1.1 or greater.


Note: Proxy servers are both servers and clients depending on which side
you look at.

RFC 2616 3.2.2 http URL

  The semantics are that the identified resource is located at
  the server listening for TCP connections on that port of that host


Remember that use of NAT, TCP/IP load balancers etc. is pretty much
outside all normal TCP/IP specifications. IP-derived specifications
assume end-to-end semantics at the IP level unless otherwise explicitly
stated, where relevant.

Or, put in other words, if you use NAT or TCP/IP load balancing or
similar techniques making several different servers answer on the same
ip:port, then it's your responsibility to make sure your server as a
whole acts in a coherent manner. As far as the specifications are concerned
it's still a single server, even if it internally splits the load across
several physically distinct servers. Many implementations get bitten by
this at various levels; most notable for the HTTP specifications are
ETag, Content-Location and Location mismatches. HTTP version is in this
same category.

Regards
Henrik



Re: new/delete overloading, why?

2010-08-20 Thread Henrik Nordström
lör 2010-08-21 klockan 07:02 +1200 skrev Robert Collins:

> it was to stop crashes with code that had been cast and was freed with
> xfree(); if you don't alloc with a matching allocator, and the
> platform has a different default new - *boom*.

Allocating with new and freeing with free() is a coding error in all
books.

> There may be nothing like that left to worry about now.

I sure hope not.

Regards
Henrik



Re: [PREVIEW] 1xx response forwarding

2010-08-20 Thread Henrik Nordström
See RFC on use and meaning of HTTP version numbers.

fre 2010-08-20 klockan 08:58 -0600 skrev Alex Rousskov:
> On 08/20/2010 08:36 AM, Henrik Nordström wrote:
> 
> > Some aspects of http is hop-by-hop not end-to-end. Processing of Expect is 
> > one such thing. Transfer encoding and message delimiting another.
> 
> Sure. We can consider the "next hop == origin server" case to avoid 
> distractions. I am only wondering whether http://host/path1 and 
> http://host/path2 responses are guaranteed to have the same protocol 
> version. I do not think HTTP gives such guarantees, and yet its 
> requirements imply that remembering versions using host names should work.
> 
> Alex.
> 
> > We just look at what we know about the nexthop we select. Actual URI is 
> > pretty irrelevant unless used as selecting factor for selecting the nexthop.
> >
> > Yes proper Expect processing needs to be done in our client (server side in 
> > our terminology).
> >
> > regards
> > Henrik
> > - Ursprungsmeddelande -
> >> On 08/20/2010 06:30 AM, Henrik Nordström wrote:
> >>> tor 2010-08-19 klockan 10:41 -0600 skrev Alex Rousskov:
> >>>
> >>>> The patch removes the ignore_expect_100 feature because we now
> >>>> forward 100 Continue messages. Is everybody OK with that removal?
> >>>
> >>> May need to keep/resurrect it when adding next hop version check as
> >>> required by Expect..
> >>
> >> Good point. The next hop version check is better done on the server side
> >> though, right? We may not yet know the next hop when accepting the
> >> request.
> >>
> >> BTW, most things in HTTP are URI- and not hostname-based. I wonder what
> >> "server" or "next hop" means when checking for supported versions. Do we
> >> look just at the host name:port and hope that it reflects the version of
> >> everything running there?
> >>
> >> Thanks,
> >>
> >> Alex.




Re: [PREVIEW] 1xx response forwarding

2010-08-20 Thread Henrik Nordström
Some aspects of HTTP are hop-by-hop, not end-to-end. Processing of Expect is one
such thing. Transfer encoding and message delimiting are others.

We just look at what we know about the nexthop we select. The actual URI is pretty
irrelevant unless it is used as a factor for selecting the nexthop.

Yes, proper Expect processing needs to be done in our client (the server side in our
terminology).

regards
Henrik
- Ursprungsmeddelande -
> On 08/20/2010 06:30 AM, Henrik Nordström wrote:
> > tor 2010-08-19 klockan 10:41 -0600 skrev Alex Rousskov:
> > 
> > > The patch removes the ignore_expect_100 feature because we now
> > > forward 100 Continue messages. Is everybody OK with that removal?
> > 
> > May need to keep/resurrect it when adding next hop version check as
> > required by Expect..
> 
> Good point. The next hop version check is better done on the server side 
> though, right? We may not yet know the next hop when accepting the
> request.
> 
> BTW, most things in HTTP are URI- and not hostname-based. I wonder what 
> "server" or "next hop" means when checking for supported versions. Do we 
> look just at the host name:port and hope that it reflects the version of 
> everything running there?
> 
> Thanks,
> 
> Alex.



Re: [PREVIEW] 1xx response forwarding

2010-08-20 Thread Henrik Nordström
tor 2010-08-19 klockan 10:41 -0600 skrev Alex Rousskov:

> The patch removes the ignore_expect_100 feature because we now forward 
> 100 Continue messages. Is everybody OK with that removal?

May need to keep/resurrect it when adding next hop version check as
required by Expect..

Regards
Henrik



new/delete overloading, why?

2010-08-20 Thread Henrik Nordström
Why are we overloading new/delete with xmalloc/xfree?

   include/SquidNew.h

This causes "random" linking issues every time some piece of code
forgets to include SquidNew.h, especially when building helpers etc., and
I fail to see what benefit we get from overloading the new/delete
operators in this way.
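
For illustration, a minimal self-contained sketch of the kind of global
overload SquidNew.h provides; plain malloc/free stand in for xmalloc/xfree
here, so this is not the actual Squid header:

    // Hypothetical stand-in for the SquidNew.h idea: route the global
    // new/delete operators through a custom allocator. malloc/free
    // substitute for xmalloc/xfree in this sketch.
    #include <cstdlib>
    #include <new>

    void *operator new(std::size_t size)
    {
        // xmalloc() never returns NULL; a throw keeps this sketch conforming
        if (void *p = std::malloc(size ? size : 1))
            return p;
        throw std::bad_alloc();
    }

    void operator delete(void *p) noexcept
    {
        std::free(p);   // must pair with the allocator used by operator new
    }

    int main()
    {
        int *answer = new int(42);   // goes through the overloaded operator new
        delete answer;               // released via the matching operator delete
        return 0;
    }

The rationale given in the reply above is exactly about that pairing: objects
allocated with new but released with xfree() only stay safe if new itself goes
through the same allocator.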

Regards
Henrik



[MERGE] Kill compat/unsafe.h, not really needed and causes more grief than gain

2010-08-19 Thread Henrik Nordström

# Bazaar merge directive format 2 (Bazaar 0.90)
# revision_id: hen...@henriknordstrom.net-20100820031034-\
#   0o3f9jw06pqkgmwa
# target_branch: http://www.squid-cache.org/bzr/squid3/trunk/
# testament_sha1: 7c770c668bbf0875624a280061c125890faeda6d
# timestamp: 2010-08-20 05:10:39 +0200
# base_revision_id: hen...@henriknordstrom.net-20100820023828-\
#   kguboyrr0hxkhj1g
# 
# Begin patch
=== modified file 'compat/GnuRegex.c'
--- compat/GnuRegex.c	2010-07-28 20:16:31 +
+++ compat/GnuRegex.c	2010-08-20 03:10:34 +
@@ -32,7 +32,6 @@
 #define _GNU_SOURCE 1
 #endif
 
-#define SQUID_NO_ALLOC_PROTECT 1
 #include "config.h"
 
 #if USE_GNUREGEX /* only if squid needs it. Usually not */

=== modified file 'compat/Makefile.am'
--- compat/Makefile.am	2010-07-25 08:10:12 +
+++ compat/Makefile.am	2010-08-20 03:10:34 +
@@ -29,7 +29,6 @@
 	strtoll.h \
 	tempnam.h \
 	types.h \
-	unsafe.h \
 	valgrind.h \
 	\
 	os/aix.h \

=== modified file 'compat/compat.h'
--- compat/compat.h	2010-08-10 15:37:53 +
+++ compat/compat.h	2010-08-20 03:10:34 +
@@ -108,7 +108,4 @@
  */
 #include "compat/GnuRegex.h"
 
-/* some functions are unsafe to be used in Squid. */
-#include "compat/unsafe.h"
-
 #endif /* _SQUID_COMPAT_H */

=== modified file 'compat/os/dragonfly.h'
--- compat/os/dragonfly.h	2010-03-21 03:08:26 +
+++ compat/os/dragonfly.h	2010-08-20 03:10:34 +
@@ -20,11 +20,5 @@
 #undef HAVE_MALLOC_H
 #endif
 
-/* Exclude CPPUnit tests from the allocator restrictions. */
-/* BSD implementation uses these still */
-#if defined(SQUID_UNIT_TEST)
-#define SQUID_NO_ALLOC_PROTECT 1
-#endif
-
 #endif /* _SQUID_DRAGONFLY_ */
 #endif /* SQUID_OS_DRAGONFLY_H */

=== modified file 'compat/os/freebsd.h'
--- compat/os/freebsd.h	2010-07-25 08:10:12 +
+++ compat/os/freebsd.h	2010-08-20 03:10:34 +
@@ -27,12 +27,6 @@
 
 #define _etext etext
 
-/* Exclude CPPUnit tests from the allocator restrictions. */
-/* BSD implementation uses these still */
-#if defined(SQUID_UNIT_TEST)
-#define SQUID_NO_ALLOC_PROTECT 1
-#endif
-
 /*
  *   This OS has at least one version that defines these as private
  *   kernel macros commented as being 'non-standard'.

=== modified file 'compat/os/netbsd.h'
--- compat/os/netbsd.h	2010-07-25 08:10:12 +
+++ compat/os/netbsd.h	2010-08-20 03:10:34 +
@@ -13,12 +13,6 @@
  *--*
  /
 
-/* Exclude CPPUnit tests from the allocator restrictions. */
-/* BSD implementation uses these still */
-#if defined(SQUID_UNIT_TEST)
-#define SQUID_NO_ALLOC_PROTECT 1
-#endif
-
 /* NetBSD does not provide sys_errlist global for strerror */
 #define NEED_SYS_ERRLIST 1
 

=== modified file 'compat/os/openbsd.h'
--- compat/os/openbsd.h	2010-07-25 08:10:12 +
+++ compat/os/openbsd.h	2010-08-20 03:10:34 +
@@ -20,12 +20,6 @@
 #undef HAVE_MALLOC_H
 #endif
 
-/* Exclude CPPUnit tests from the allocator restrictions. */
-/* BSD implementation uses these still */
-#if defined(SQUID_UNIT_TEST)
-#define SQUID_NO_ALLOC_PROTECT 1
-#endif
-
 /*
  *   This OS has at least one version that defines these as private
  *   kernel macros commented as being 'non-standard'.

=== modified file 'compat/os/solaris.h'
--- compat/os/solaris.h	2010-08-11 00:12:56 +
+++ compat/os/solaris.h	2010-08-20 03:10:34 +
@@ -82,12 +82,6 @@
 #define __FUNCTION__ ""
 #endif
 
-/* Exclude CPPUnit tests from the allocator restrictions. */
-/* BSD implementation uses these still */
-#if defined(SQUID_UNIT_TEST)
-#define SQUID_NO_STRING_BUFFER_PROTECT 1
-#endif
-
 /* Bug 2500: Solaris 10/11 require s6_addr* defines. */
 //#define s6_addr8   _S6_un._S6_u8
 //#define s6_addr16  _S6_un._S6_u16

=== removed file 'compat/unsafe.h'
--- compat/unsafe.h	2010-03-21 03:08:26 +
+++ compat/unsafe.h	1970-01-01 00:00:00 +
@@ -1,33 +0,0 @@
-#ifndef SQUID_CONFIG_H
-#include "config.h"
-#endif
-
-#ifndef _SQUID_COMPAT_UNSAFE_H
-#define _SQUID_COMPAT_UNSAFE_H
-
-/*
- * Trap unintentional use of functions unsafe for use within squid.
- */
-
-#if !SQUID_NO_ALLOC_PROTECT
-#ifndef free
-#define free(x) ERROR_free_UNSAFE_IN_SQUID(x)
-#endif
-#ifndef malloc
-#define malloc ERROR_malloc_UNSAFE_IN_SQUID
-#endif
-#ifndef calloc
-#define calloc ERROR_calloc_UNSAFE_IN_SQUID
-#endif
-#endif /* !SQUID_NO_ALLOC_PROTECT */
-
-#if !SQUID_NO_STRING_BUFFER_PROTECT
-#ifndef sprintf
-#define sprintf ERROR_sprintf_UNSAFE_IN_SQUID
-#endif
-#ifndef strdup
-#define strdup ERROR_strdup_UNSAFE_IN_SQUID
-#endif
-#endif /* SQUID_NO_STRING_BUFFER_PROTECT */
-
-#endif /* _SQUID_COMPAT_UNSAFE_H */

=== modified file 'helpers/basic_auth/LDAP/basic_ldap_auth.cc'
--- helpers/basic_auth/LDAP/basic_ldap_auth.cc	2010-07-08 11:58:30 +
+++ helpers/basic_auth/LDAP/basic_ldap_auth.cc	2010-08-20 03:10:34 +
@@ -82,7 +82,6 @@
  * - Allow full filter specifications in -f
  */
 
-#define SQUID_NO_ALLOC_PROTECT 1
 #include "config.h"

compat/unsafe.h

2010-08-19 Thread Henrik Nordström
Stumbled over compat/unsafe.h again when trying to compile trunk after
the purge merge.

Imho these rules in compat/unsafe.h should be dropped, replaced by
coding standards for the different sections and auditing.

- The rules originally come from laziness in Squid-2 where we did not
want to check the return code of malloc() or whether data had been allocated
before free().
- The way they are implemented (#define) causes issues with perfectly
valid code such as system headers..
- These rules make it harder to integrate other code.

Regards
Henrik



Re: 1xx response forwarding

2010-08-18 Thread Henrik Nordström
mån 2010-08-16 klockan 15:53 -0600 skrev Alex Rousskov:

> Both approaches may have to deal with crazy offset management, 
> clientStreams manipulations, and other client-side mess.

Yes.

For now I think we need to bypass store to make this sane, and it's
probably also a step in the right direction in general.

In the long run the forwarding mechanism needs to be separated from the store.

Regards
Henrik



Re: HTTP/1.1 to clients in v3.2

2010-08-16 Thread Henrik Nordström
mån 2010-08-16 klockan 14:08 -0600 skrev Alex Rousskov:

> Should we try implement what you described then? In summary:
> 
>- If the next hop is known to be 1.1, send a chunked request.
>- Otherwise, try to accumulate the full body, with a timeout.
>  Send 411 if we timed out while accumulating.
> 
> At first, the accumulation will still happen on the client side, like 
> today. Eventually, the accumulation code can be moved to the server side.

Yes.

Suggested implementation order:

1. Forwarding of chunked requests.
2. Next-hop version cache, rejecting with 411 if the next hop is not known to
be 1.1.
3. Delay the 411 response condition a bit, buffering the request hoping to
be able to dechunk instead. Respond with 411 on timeout or buffer full.
4. Tuning knob to selectively assume the next hop is 1.1 if unknown,
enabling out-of-band knowledge of 1.1 capability via configuration. This
should also include tuning for selectively disabling chunked forwarding,
allowing broken next hops to be banned.
5. Option to add "Expect: 100-continue" on forwarded chunked requests
when 1.1 is forced, with its requirements on delaying forwarding and
retrying without the expectation if seeing a 417 in response, or returning
411 if retrying is not possible at the time of the 417.

Note that when Expect: 100-continue is used by the client and the complete
path is 1.1, then step 3 should not really happen due to the client delaying
its transmission for some considerable amount of time. Here we SHOULD
instead respond with 411 immediately to follow the expected 100 Continue
flow model, enabling client fallback with a much shorter delay.


In parallel to this we also need to deal with 1xx responses in a
reasonable manner, especially 100 Continue. Without these it's hard to
get the expected flow of events running.
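
As an illustration of steps 1 and 2 above, a self-contained sketch of the
decision; NextHopVersionCache and decideChunkedForward are made-up names for
the example, not existing Squid classes:

    #include <iostream>
    #include <map>
    #include <string>

    enum class ForwardDecision { SendChunked, Reject411 };

    // Remembers which next hops have been seen replying with HTTP/1.1.
    class NextHopVersionCache {
        std::map<std::string, bool> http11_;
    public:
        void remember(const std::string &hop, bool isHttp11) { http11_[hop] = isHttp11; }
        bool knownHttp11(const std::string &hop) const {
            auto i = http11_.find(hop);
            return i != http11_.end() && i->second;
        }
    };

    // Step 1+2: forward chunked only to known 1.1 hops, otherwise reject with 411.
    ForwardDecision decideChunkedForward(const NextHopVersionCache &cache,
                                         const std::string &nextHop)
    {
        return cache.knownHttp11(nextHop) ? ForwardDecision::SendChunked
                                          : ForwardDecision::Reject411;
    }

    int main()
    {
        NextHopVersionCache cache;
        cache.remember("origin.example.com:80", true);   // learned from an earlier reply
        std::cout << (decideChunkedForward(cache, "origin.example.com:80") ==
                      ForwardDecision::SendChunked ? "forward chunked\n" : "reply 411\n");
        std::cout << (decideChunkedForward(cache, "other.example.net:80") ==
                      ForwardDecision::SendChunked ? "forward chunked\n" : "reply 411\n");
        return 0;
    }

Steps 3-5 then soften the Reject411 branch with buffering, configuration
overrides and the Expect handshake.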

Regards
Henrik



Re: HTTP/1.1 to clients in v3.2

2010-08-16 Thread Henrik Nordström
mån 2010-08-16 klockan 13:30 -0600 skrev Alex Rousskov:

> Since Squid is a program and not a human being, we do need to hard-code 
> a single default. Clearly, there will be ACLs to change the behavior, 
> but if no options apply, we still need to do something.
> 
> Yu have more-or-less said "no" to every option I listed :-). Correct me 
> if I am wrong, but I sense that option #1 (always send chunked request) 
> is the "least bad" default. I will try to implement that unless you stop me.

I did not say no, just pointed out the complications each proposal involves..

The correct behaviour per spec is to reject with 411 unless the next hop is
known to be 1.1.

Alternatively, dechunk if we have already received the full body (by
waiting a little rather than sending 100 Continue).

> An interception client is arguably more likely to know the next hop 
> capabilities because it thinks it is talking directly to that hop. 

I would argue the opposite. It's less likely to make a correct decision
as it does not expect the proxy to be there messing with things, so it
quite likely will take the response version of the proxy as an
indication of the origin server's capabilities, not caring to look for Via
etc..

> Similarly, we are less likely to be blamed for screwing things up if we 
> just repeat what the intercepted client did.

Until someone slaps us with the specifications.

Regards
Henrik



Re: HTTP/1.1 to clients in v3.2

2010-08-16 Thread Henrik Nordström
mån 2010-08-16 klockan 11:43 -0600 skrev Alex Rousskov:

> I am revisiting this issue in hope to enable HTTP/1.1 to clients. If 
> Squid properly dechunks requests on the client side, what should happen 
> to that dechunked request on the server side? Let's start with the most 
> general case were we do _not_ know the origin server version and where 
> there are no matching ACLs to control Squid behavior.
>
>  I see several options:
> 
> 1. Always send a chunked request (because the client did so and perhaps 
> the client knows that the server is HTTP/1.1 and can handle a chunked 
> request.)

But the client doesn't really know.. it knows Squid. HTTP version,
transfer encoding etc. are hop-by-hop, not end-to-end. Well, there is Via,
but it is not sent on 100 Continue, for example.

The specs say "MUST include a valid Content-Length header field unless the
server is known to be HTTP/1.1 compliant".

But we should at least have a knob for tuning this. I don't really agree
with the specs here. As you note, if the client sends chunked then in
reality it's quite likely the server supports chunked, as the client most
likely has some out-of-band information about the server's capabilities.

> 2. Always send dechunked request with a Content-Length field, after 
> buffering all the chunks and calculating content length (because we do 
> not know whether the server is HTTP/1.1 and can handle a chunked request).

Not really doable. The request may be huge and buffer space is limited.

I am entirely fine with dechunking and converting to Content-Length
should we already have the data buffered, but not with using it as the
default mechanism.

Limited buffering plays somewhat badly with 100 Continue. I am not very
comfortable with sending a 411 after 100 Continue even if it is not strictly
illegal. It is very unlikely that clients deal well with such a situation, as
the specs say 100 Continue indicates explicitly that the origin server is
willing to accept the body.

> 3. Always add an "Expect: 100-continue" if not already there, send the 
> headers, and then pick option #1 or #2, depending on the server version 
> if we get a 100 Continue response.

This should work out reasonably well I think, but adds a fair amount of
complexity.

If we get 100 Continue then the server is supposed to be 1.1. But it MAY be
an intermediary sending 100 Continue (not strictly legal..)

If we add 100-continue then we must also retry without 100-continue if we
receive a 417. Which also means that we need to buffer the request
while it's forwarded, which risks running into the same limited
buffering issue, requiring us to send a 411 response if buffer space has
been exceeded when we receive a 417 response.
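
A self-contained sketch of that fallback, with sendOnce() standing in for the
real forwarding path; the names are made up for the example, not Squid code:

    #include <functional>
    #include <iostream>

    // Hypothetical outcome of one forwarding attempt.
    struct Outcome { int status; };

    Outcome forwardWithExpectFallback(bool bodyStillBuffered,
                                      const std::function<int(bool useExpect)> &sendOnce)
    {
        int status = sendOnce(true);        // first attempt carries Expect: 100-continue
        if (status != 417)
            return {status};
        if (!bodyStillBuffered)
            return {411};                   // cannot replay the body, give up with 411
        return {sendOnce(false)};           // retry without the expectation
    }

    int main()
    {
        auto server = [](bool useExpect) { return useExpect ? 417 : 200; };
        std::cout << forwardWithExpectFallback(true, server).status << "\n";   // 200
        std::cout << forwardWithExpectFallback(false, server).status << "\n";  // 411
        return 0;
    }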

> Any other options?

I don't think a single hard answer can be given. More likely a mix, and
some knobs to tune for real-world brokenness, will be needed. But in the
short run blindly forwarding quite likely works.


> Should the choice depend on whether we are a direct 
> or interception proxy?

Not really. The end result is pretty much the same, except that the client
has even less knowledge about the state of things when being
intercepted.

Regards
Henrik



Re: Note about auth refcounting state / ntlm in trunk

2010-08-16 Thread Henrik Nordström
mån 2010-08-16 klockan 21:32 +1200 skrev Amos Jeffries:

> I don't think thats making them scheme-specific as such. The child 
> classes inheriting from AuthUser will be doing that part.

It does. Each instance of AuthUser is scheme specific, and because of
this the related acls break down when more than one scheme is
configured.

> Config can be unlinked by moving the TTL into AuthUser as a generic 
> timestamp set at point of last validation. IIRC thats all its used for 
> so the validating scheme's TTL-when-validated can persist across 
> reconfigure.

What validation?

AuthUser is (or should be) Squid's general view of a user/login, not for
tracking scheme details.

> auth_type is just a meta data flag. Useful for cachemgr display and any 
> scheme-specific logics handling the AuthUser that fell like they need to 
> check the type.

Then they are abusing AuthUser.

Regards
Henrik



Re: Note about auth refcounting state / ntlm in trunk

2010-08-16 Thread Henrik Nordström
mån 2010-08-16 klockan 18:15 +1200 skrev Amos Jeffries:

> So.. the difference being that you are suggesting the credentials 
> text and parsing should be done on AuthUserRequest and no AuthUser 
> should exist associated with it until fully authed?
>   AuthUser only created/looked up at the point currently doing absorb()?

Correct.

> Sounds like a good plan.
> NOTE: this is why I specifically asked for details on the 
> RefCountCount() value on the assert bug:

==2 I think. Looks up IRC logs.. yes. "from->RefCountCount() == 2"

> > AuthUser should be scheme-independent, but need to softly link to the
> > schemes using it allowing clean garbage collection and association of
> > scheme state (basic credentials cache, confirmed digest nonces and their
> > related H(A1))
> 
> It is already like that. AuthUser is the mostly-private part persisting 
> across requests/conn in the username cache. AuthUserRequest is the 
> public hanging off each request and conn indicating state of auth. 
> Pointing to whatever AuthUser has the credentials text.

Almost.. each AuthUser is scheme specific today. It shouldn't be, but it still
is. Not the class as such, but its data makes it unique per scheme. It
references auth_type and AuthConfig.

Regards
Henrik



Re: FYI: github

2010-08-15 Thread Henrik Nordström
fre 2010-08-13 klockan 18:54 -0500 skrev Mark Nottingham:

> P.S., if any other squid-dev people are on github, we can add you to the 
> group, FWIW, although like I said, this is read-only...

I have a github account. hno as mostly everywhere else. But I see you
already noticed that.

Regards
Henrik



Re: Note about auth refcounting state / ntlm in trunk

2010-08-15 Thread Henrik Nordström
mån 2010-08-16 klockan 01:43 + skrev Amos Jeffries:

> Basic flow around that absorb is:
>  create empty AuthUser "local_auth_user"

Gah.. I think it should be

* Perform auth. Uses and results in an AuthState (scheme specific), or
AuthRequest if you prefer, though that name matches badly with both NTLM & Digest.

* On successful auth an AuthUser is associated with the AuthState to
keep track of the user long term between authentications.

* Failure to perform auth MAY result in something like an AuthUser to
carry the username only, but preferably just an internal record kept in
the AuthState in such a case.

* High level access to the auth state of the request always goes via
AuthState. AuthUser is internal.

An AuthUser should not be required to perform Auth.

On success also update the ip list for max_user_ip use, linked to
AuthUser.

No absorb of anything.


AuthUser should be scheme-independent, but needs to link softly to the
schemes using it, allowing clean garbage collection and association of
scheme state (basic credentials cache, confirmed digest nonces and their
related H(A1)).
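
For illustration, a minimal sketch of that split; the class names follow the
text above but the code is made up, not the actual Squid hierarchy:

    #include <iostream>
    #include <memory>
    #include <string>
    #include <utility>

    struct AuthUser {                       // long-term, scheme-independent record
        std::string username;
        explicit AuthUser(std::string u) : username(std::move(u)) {}
    };

    class AuthState {                       // per-request, scheme-specific handshake state
        std::shared_ptr<AuthUser> user_;    // only set after successful auth
    public:
        bool authenticate(const std::string &credentials) {
            if (credentials.empty())
                return false;               // failure: no AuthUser is created
            // credentials double as the username here purely for brevity;
            // a real scheme would parse and verify them first
            user_ = std::make_shared<AuthUser>(credentials);
            return true;
        }
        const AuthUser *user() const { return user_.get(); }
    };

    int main()
    {
        AuthState state;
        std::cout << (state.authenticate("alice") ? state.user()->username : "denied") << "\n";
        return 0;
    }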

Regards
Henrik



Re: Note about auth refcounting state / ntlm in trunk

2010-08-15 Thread Henrik Nordström
sön 2010-08-15 klockan 23:26 + skrev Amos Jeffries:

> > - Fails if external acls is used (any, not just with grace=.. as in 3.1)
> 
> any ideas why?

No, haven't really dug into the code yet. Was just observing to verify
that the claims in 2936 could be reproduced and found a much worse
situation than expected.

> > - Fails with refcount error on second NTLM handshake.
> 
> If I've been following the snippets in IRC right, this is due to the
> absorb() function which attempts to combine duplicate credentials and
> maintain a single state. Which requires that the absorbed credentials be
> discarded immediately after. The assert is there to make it obvious when
> this requirement is broken.

Design error?

It was too long since I was in NTLM/Negotiate land (I haven't seriously
touched it since throwing out the challenge reuse layer years ago) and I
no longer remember the details about these temp credentials and the
absorbing.. but I do have a memory of it being a bit strange.

Regards
Henrik



Note about auth refcounting state / ntlm in trunk

2010-08-15 Thread Henrik Nordström
While trying to investigate Bug 2936 it seems the auth refcounting state
in trunk is somewhat borked.

The state of ntlm auth is considerably worse in trunk than 3.1.

- Fails if external acls is used (any, not just with grace=.. as in 3.1)
- Fails with refcount error on second NTLM handshake.

Regards
Henrik



Re: Patch for squidclient

2010-08-15 Thread Henrik Nordström
sön 2010-08-15 klockan 19:01 +0100 skrev Markus Moeller:

> Not sure if it is under revision control, but I get it with rsync.  I have 
> removed it from the patch

If you can then it's better to access the sources using bzr.

http://wiki.squid-cache.org/Squid3VCS

Regards
Henrik



Re: Patch for squidclient

2010-08-14 Thread Henrik Nordström
lör 2010-08-14 klockan 21:10 +0100 skrev Markus Moeller:
> Hi,
> 
>  Please find attached a patch to add Proxy- and WWW-Authenticate.
> 
> Regards
> Markus

Looks fine, but needs to be wrapped up in Kerberos ifdefs, the same as used
for the main code's Kerberos client.

Regards
Henrik



Re: [PATCH] HttpMsg::persistent (Was: Client-side pconns and r10728)

2010-08-14 Thread Henrik Nordström
lör 2010-08-14 klockan 12:26 -0600 skrev Alex Rousskov:

> The patch does not change the time when the message headers are 
> examined. Moreover, the [unchanged] persistency checking code does not 
> look at Transfer-Encoding at all. Did I completely misunderstood your 
> concern?

Just confusion. Looking at the code again it's clearer. No more
questions.

Regards
Henrik



Re: Client-side pconns and r10728

2010-08-14 Thread Henrik Nordström
lör 2010-08-14 klockan 07:07 -0600 skrev Alex Rousskov:
> On 08/14/2010 02:58 AM, Henrik Nordström wrote:
> > fre 2010-08-13 klockan 19:42 -0600 skrev Alex Rousskov:
> >
> >> You may want to test the attached fix instead. I do not know whether it
> >> helps with Bug 2936 specifically, but it does fix a bug that smells
> >> related to those issues because Bug 2936 test script uses HTTP/1.0 
> >> messages.
> >
> > Looks right to me, but I don't see how it's related to 2936..
> 
> Just for the record: I am not trying to fix bug 2936 (yet).

Ok.

And to keep on track wrt 2936, there are two issues in that bug:

a. Spurious 407 reject where none was expected. Looks like Squid
partially forgot the connection had been authenticated.

b. Bad headers on that challenge.

The main issue is the first. It's quite likely the second is a byproduct
of the unexpected state.

Regards
Henrik



Re: [PATCH] HttpMsg::persistent (Was: Client-side pconns and r10728)

2010-08-14 Thread Henrik Nordström
lör 2010-08-14 klockan 10:31 -0600 skrev Alex Rousskov:

> This move makes it clear that the logic applies only to the message 
> being examined and not some irrelevant information such as HTTP version 
> supported by Squid.

How does chunking fit into this?

Is Transfer-Encoding a property of the message at this stage?

Regards
Henrik



Re: Client-side pconns and r10728

2010-08-14 Thread Henrik Nordström
fre 2010-08-13 klockan 19:42 -0600 skrev Alex Rousskov:

> You may want to test the attached fix instead. I do not know whether it 
> helps with Bug 2936 specifically, but it does fix a bug that smells 
> related to those issues because Bug 2936 test script uses HTTP/1.0 messages.

Looks right to me, but I don't see how it's related to 2936..

Regards
Henrik



Re: Client-side pconns and r10728

2010-08-14 Thread Henrik Nordström
I don't get how any of these changes can be relevant to the bug in
question.

Regards
Henrik

fre 2010-08-13 klockan 19:42 -0600 skrev Alex Rousskov:
> On 08/12/2010 03:37 AM, Amos Jeffries wrote:
> > 
> > revno: 10728
> > committer: Amos Jeffries
> > branch nick: trunk
> > timestamp: Thu 2010-08-12 21:37:14 +1200
> > message:
> >Author: Stephen Thorne
> >Bug 2936: NTLM-Authenticate 407 and Proxy-Connection: Close in same 
> > response.
> >
> >Squid default from the days of HTTP/1.0 was to close connections unless
> >keep-alive was explicitly known. This changes the default to send
> >keep-alive unless we have a good reason to close.
> > modified:
> >src/client_side_reply.cc
> 
> > === modified file 'src/client_side_reply.cc'
> > --- a/src/client_side_reply.cc  2010-07-13 14:27:25 +
> > +++ b/src/client_side_reply.cc  2010-08-12 09:37:14 +
> > @@ -1383,6 +1383,9 @@
> >  } else if (fdUsageHigh()&& !request->flags.must_keepalive) {
> >  debugs(88, 3, "clientBuildReplyHeader: Not many unused FDs, can't 
> > keep-alive");
> >  request->flags.proxy_keepalive = 0;
> > +} else if (request->http_ver.major == 1 && request->http_ver.minor == 
> > 1) {
> > +debugs(88, 3, "clientBuildReplyHeader: Client is HTTP/1.1, send 
> > keep-alive, no overriding reasons not to");
> > +request->flags.proxy_keepalive = 1;
> >  }
> >
> 
> Persistent connections have been semi-broken since 3.0, but was the 
> above fix discussed somewhere? I think it contradicts the overall flow 
> of the persistency handling code in general and clientSetKeepaliveFlag 
> intent/documentation in particular. I do not know whether it introduces 
> more bugs, but I would not be surprised if it does because the 
> if-statements above the new code do not enumerate "all overriding reasons"!
> 
> To add insult to the injury, the commit message is also misleading 
> because, bugs notwithstanding, Squid did "send keep-alive unless we had 
> a good reason to close" even before this change.
> 
> Can we revert the above change, please?
> 
> 
> You may want to test the attached fix instead. I do not know whether it 
> helps with Bug 2936 specifically, but it does fix a bug that smells 
> related to those issues because Bug 2936 test script uses HTTP/1.0 messages.
> 
> Thank you,
> 
> Alex.




Re: /bzr/squid3/trunk/ r10691: Bug 2994: pt 2: Open ports as IP4-only when IPv6 disabled

2010-08-02 Thread Henrik Nordström
sön 2010-08-01 klockan 16:33 +1200 skrev Amos Jeffries:

> +// if IPv6 is split-stack, prefer IPv4
> +if (Ip::EnableIpv6&IPV6_SPECIAL_SPLITSTACK) {
> +// NP: This is not a great choice of default,
> +// but with the current Internet being IPv4-majority has a higher 
> success rate.
> +// if setting to IPv4 fails we dont care, that just means to use 
> IPv6 outgoing.
> +outgoing.SetIPv4();
> +}

Doesn't this break connectivity to IPv6-only hosts?

What happened with the comm reordering, moving socket type & outgoing
address selection down to after the destination address has been
selected?

Regards
Henrik



3.1.5.1 do not build

2010-07-28 Thread Henrik Nordström
Looks like the new src/stub_debug.cc file has not been tested?

It seems to be missing a whole lot of #include files. It's not pulling in
squid.h so it needs to include every header it needs.

The errors I get are:

stub_debug.cc: In function 'void _db_print(const char*, ...)':
stub_debug.cc:20: error: 'BUFSIZ' was not declared in this scope
stub_debug.cc:29: error: 'f' was not declared in this scope
stub_debug.cc:31: error: 'snprintf' was not declared in this scope
stub_debug.cc: In function 'void _db_print_stderr(const char*, 
__va_list_tag*)':
stub_debug.cc:49: error: 'stderr' was not declared in this scope
stub_debug.cc:49: error: 'vfprintf' was not declared in this scope

which all seem to be due to a missing #include <stdio.h>

Regards
Henrik



Re: PAC serving

2010-07-13 Thread Henrik Nordström
ons 2010-07-14 klockan 00:53 + skrev Amos Jeffries:

> Thats the plan. Those three objects I said we have to work with are the
> parameters available to internalStart().

My copy only has two parameters..

internalStart(HttpRequest * request, StoreEntry * entry)

> So far I have the else case of internalStart() doing stat() on the file
> and generating the HttpReply headers. But doing async serving of the file
> is muddy. I suspect if we are happy to loose caching and delay pools, ICAP,
> eCAP, etc. etc then using sendfile() in non-blocking mode straight to the
> client_fd would be fine.

I would avoid that at this level. Needs too many shortcuts in the rest
of the code..

We could however aim for working towards an infrastructure where the sendfile()
optimization would fall out naturally, with the client streams figuring out
that there is no processing being done other than shuffling the data.

> I left it with a simple AsyncJob comm reader which pushed data into the
> StoreEntry and looped.  Patch to follow when its had some error handling
> added and been run-tested.

Ok. Sounds good.

Please remember that you need an abort handler as well on the
StoreEntry, to handle when the client aborts the request.
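
A generic sketch of the abort-handler idea, with made-up names (FileServer,
onClientAbort) rather than the real store API: whatever job feeds the file
into the StoreEntry should have a callback that stops the reader when the
client goes away.

    #include <iostream>

    class FileServer {
        bool aborted = false;
    public:
        void onClientAbort() { aborted = true; }   // fired when the client disconnects
        void pump() {
            for (int chunk = 0; chunk < 3 && !aborted; ++chunk)
                std::cout << "append chunk " << chunk << " to the StoreEntry\n";
            if (aborted)
                std::cout << "client gone: stop reading and clean up\n";
        }
    };

    int main()
    {
        FileServer normal;
        normal.pump();               // no abort: all chunks reach the StoreEntry

        FileServer interrupted;
        interrupted.onClientAbort(); // the store side fires the abort callback
        interrupted.pump();          // the reader notices and stops early
        return 0;
    }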

Regards
Henrik



Re: [squid-users] squid_db_auth to support md5 encrypted passwords patch

2010-07-13 Thread Henrik Nordström
Applied.

tor 2010-07-08 klockan 12:35 +0300 skrev Milen Pankov:
> Hi,
> 
> Recently I needed to use squid_db_auth against mysql database with md5
> encrypted passwords.
> 
> I read a recent discussion on this list (Joomla DB authentication
> support hits Squid!:)) that was regarding integration with joomla
> database, but this wasn't working for me.
> 
> Here's a patch that makes it possible to read md5 encrypted passwords
> from the database with the --md5 option.
> 
> *** helpers/basic_auth/DB/squid_db_auth.in	2010-05-30
> 09:21:12.0 -0400
> --- helpers/basic_auth/DB/squid_db_auth.in.milen	2010-07-08
> 08:17:21.0 -0400
> ***
> *** 22,27 
> --- 22,28 
>   my $db_passwdcol = "password";
>   my $db_cond = "enabled = 1";
>   my $plaintext = 0;
> + my $md5 = 0;
>   my $persist = 0;
>   my $isjoomla = 0;
>   my $debug = 0;
> ***
> *** 72,77 
> --- 73,82 
>  
>   Database contains plain-text passwords
>  
> + =item   B<--md5>
> +
> + Database contains md5 passwords
> +
>   =itemB<--salt>
>  
>   Selects the correct salt to evaluate passwords
> ***
> *** 98,103 
> --- 103,109 
>   'passwdcol=s' => \$db_passwdcol,
>   'cond=s' => \$db_cond,
>   'plaintext' => \$plaintext,
> + 'md5' => \$md5,
>   'persist' => \$persist,
>   'joomla' => \$isjoomla,
>   'debug' => \$debug,
> ***
> *** 143,148 
> --- 149,155 
>   return 1 if defined $hashsalt && crypt($password, $hashsalt)
> eq $key;
>   return 1 if crypt($password, $key) eq $key;
>   return 1 if $plaintext && $password eq $key;
> + return 1 if $md5 && md5_hex($password) eq $key;
>   }
>  
>   return 0;




Re: [squid-users] what's the configuration with vim when developing squid 2.7?

2010-06-29 Thread Henrik Nordström
tis 2010-06-29 klockan 16:58 +0800 skrev Weibin Yao:

> I noticed the
> guideline(http://wiki.squid-cache.org/Squid2CodingGuidelines) is that:
> Code MUST be formatted with GNU indent version 1.9.1 and these exact
> options indent -br -ce -i4 -ci4 -l80 -nlp -npcs -npsl -d0 -sc -di0 -psl
> 
> How could I to convert this indent directives to the type of vim?

Normal C indent mode works pretty decently for me, with just 

  set sw=4

to get the indentation level right.

It's not critical that you get this entirely correct, as long as the code you
produce is readable. The above GNU indent rule is mostly for committing
code to the main CVS repository and is enforced there.
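
For example, a rough ~/.vimrc equivalent; the mapping is approximate, and
options like -sc and -psl have no direct vim counterpart:

    " approximate vim settings for the indent rules above
    set cindent        " automatic C-style indenting
    set shiftwidth=4   " -i4 / -ci4: 4-column indent steps
    set textwidth=80   " -l80: break lines at 80 columns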

Regards
Henrik



Re: [RFC] Removing most use of #if USE_IPV6 conditional

2010-06-28 Thread Henrik Nordström
mån 2010-06-28 klockan 00:05 + skrev Amos Jeffries:

> Do you want to go through and remove or should I?

I'll do it and submit it as a merge.

Regards
Henrik



Re: [RFC] Removing most use of #if USE_IPV6 conditional

2010-06-27 Thread Henrik Nordström
sön 2010-06-27 klockan 17:19 +1200 skrev Amos Jeffries:

> At present it means enable/disable IPv6 support in IP::Address, with a 
> corollary of also disabling stuff that requires that storage support.

IPv6 address storage is not an issue imho. First of all I very much
doubt we have any target platform lacking IPv6 address storage
support, and if we do, then adding the needed structs to Squid is not
an issue.

> If you really wished it could be broken down into two, one for disabling 
> just connectivity, the other for disabling full storage and up support.

What I am proposing is that we always have IPv6 address storage support,
and consequently always have all code related to that enabled. So just
one for IPv6 connectivity.
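
As a sketch of the distinction, with made-up names and a plain bool standing
in for the USE_IPV6 build option: parsing/storage always works, only the
connectivity decision is gated.

    #include <iostream>
    #include <string>

    bool Ipv6ConnectivityEnabled = false;   // stand-in for the USE_IPV6 build option

    struct IpAddress {
        std::string text;
        bool isIpv6;
    };

    // Recognising and storing an IPv6 literal is always available.
    IpAddress parse(const std::string &host)
    {
        return {host, host.find(':') != std::string::npos};
    }

    // Only the decision to actually open a v6 socket is gated.
    bool canForward(const IpAddress &addr)
    {
        return !addr.isIpv6 || Ipv6ConnectivityEnabled;
    }

    int main()
    {
        IpAddress a = parse("dead::beef");
        std::cout << "parsed as IPv6: " << a.isIpv6 << "\n";
        std::cout << "can forward: " << canForward(a) << "\n";  // 0 -> "Cannot forward"
        return 0;
    }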

> The cases Henrik points to:
> 
>   * URL parsing - require IPv6 address storage and library support to 
> verify its an IP and not "[example.com]"
>** okay the url.cc ones can all go.
>** IPv4-only case will end up dying with parsing unable to resolve 
> host name "dead::beef" instead of dying with invalid hostname "[dead::beef]"

Why? It's an IPv6 address. Would die with a "Cannot forward" as there is
no IPv4 compatible address..

>   * DNS AAAA resolution - requires storage of the AAAA results to be 
> worth the extra processing. The only gain in allowing this would be to 
> show extra stats to admin of "you could have used IPv6 to get domain X"

I do not really propose that we actually do AAAA lookups, just that the
code is there. But maybe we should..

>   * FTP - requires IPv6 connectivity to even advertise EPSV (2) to the 
> server. Or the server is liable to respond saying "I am opening a port 
> at dead::beef:1"

Which should be conditional on the address type of the control channel,
not USE_IPV6.

>** the other two (response handling) cases where the IPv6-only code 
> is run-time wrapped could go:
>   if ( AI->ai_addrlen != sizeof(struct sockaddr_in) )
>   if (addr.IsIPv6())

Yes.

>   * The UDP and TCP logging module ones can go as run-time wrapped.
>** if (addr.IsIPv4())
> 
>   * Some of the SNMP ones are run-time tested. But the rest require v6 
> storage types.
> 
> Are there any others annoying you Henrik?

Quite likely.

Regards
Henrik



Re: [RFC] removing HEAD label

2010-06-26 Thread Henrik Nordström
lör 2010-06-26 klockan 14:24 -0600 skrev Alex Rousskov:

> That is better. Although, in the ideal world, it is Squid2 that should
> be more specific and we could just use trunk for Squid3, Squid4, etc.

Don't think there will be a Squid-4 any time soon, but who knows.

For me Squid-2 is not a branch. It's a trunk (2.HEAD), just not as
active as the Squid-3 trunk.

But I see your point.

Regards
Henrik



Re: [RFC] Removing most use of #if USE_IPV6 conditional

2010-06-26 Thread Henrik Nordström
lör 2010-06-26 klockan 14:22 -0600 skrev Alex Rousskov:

> In other words, define USE_IPV6 as "use IPv6 addresses in system calls"?

Correct.

>  And currently it is defined as "handle IPv6 addresses in URLs and use
> IPv6 addresses in system calls"?

Yes. Without USE_IPV6 we fail to even parse anything that looks like
IPv6.

>  +1. Just document what USE_IPV6 means.

Great.

Regards
Henrik


