Re: autoconf-refactor and netfilter-based transparent proxy

2010-05-21 Thread Kinkie
Hi all.

To avoid the risk of this happening again, I'll do more frequent interim
merges, using trunk as the baseline for the output.

Any objections?

Thanks

On 5/10/10, Henrik Nordström hen...@henriknordstrom.net wrote:
 Mon 2010-05-10 at 18:13 +0200, Kinkie wrote:
  revno: 10425
  committer: Francesco Chemolli kin...@squid-cache.org
  branch nick: trunk
  timestamp: Sun 2010-04-25 23:40:51 +0200
  message:
Interim merge from autoconf-refactor feature-branch.
 
  Kinkie, could you please check that netfilter-based interception proxies
  are still supported?

 Will do ASAP (probably tomorrow).

 I have added back the missing define for LINUX_NETFILTER, but this is
 the second odd thing in the autoconf refactor merge. Can you please do a
 full review of your merge to see if there is anything else that's odd?

  It would also be nice to get rid of libcap and TPROXY warnings when the
  user wants just netfilter-based interception proxy support and is
  willing to --disable the rest. In Squid v3.1, we now get these
  irrelevant (for the said configuration) warnings:

 I'll check.

 trunk does not even have a configure option for controlling TPROXY. It's
 assumed to always be available by configure.in, and disabled in compiled
 code based on system header defines.
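For illustration, one quick way to see whether a given system's headers
provide what Squid looks for here (header locations vary by distribution;
this is only a rough sketch, not how configure itself tests for it):

  # TPROXY support is signalled by IP_TRANSPARENT in the kernel headers
  grep -n IP_TRANSPARENT /usr/include/linux/in.h
  # NAT interception (LINUX_NETFILTER) relies on this header being present
  ls /usr/include/linux/netfilter_ipv4.h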

 Also the libcap warning message is a bit misguided. It's not only about
 TPROXY but also about security.

 Regards
 Henrik




-- 
/kinkie


Re: autoconf-refactor and netfilter-based transparent proxy

2010-05-21 Thread Alex Rousskov
On 05/21/2010 06:43 AM, Kinkie wrote:

 To avoid the risk of this happening again, I'll do more frequent interim
 merges, using trunk as the baseline for the output.
 
 Any objections?

I do not really know what you mean. If you are comfortable with your
changes, you post them for review, and nobody objects to a commit, then
you should commit your changes to trunk.

IMO, we should not post and commit half-baked or poorly-scoped parts of
a bigger project because they would be difficult to review without
proper context. Reviewing large, complex changes is also difficult but I
would still prefer to review and commit self-contained changes.

Cheers,

Alex.


 On 5/10/10, Henrik Nordström hen...@henriknordstrom.net wrote:
 Mon 2010-05-10 at 18:13 +0200, Kinkie wrote:
 revno: 10425
 committer: Francesco Chemolli kin...@squid-cache.org
 branch nick: trunk
 timestamp: Sun 2010-04-25 23:40:51 +0200
 message:
   Interim merge from autoconf-refactor feature-branch.
 Kinkie, could you please check that netfilter-based interception proxies
 are still supported?
 Will do ASAP (probably tomorrow).
 I have added back the missing define for LINUX_NETFILTER, but this is
 the second odd thing in the autoconf refactor merge. Can you please do a
 full review of your merge to see if there is anything else that's odd?

 It would also be nice to get rid of libcap and TPROXY warnings when the
 user wants just netfilter-based interception proxy support and is
 willing to --disable the rest. In Squid v3.1, we now get these
 irrelevant (for the said configuration) warnings:
 I'll check.
 trunk does not even have a configure option for controlling TPROXY. It's
 assumed to always be available by configure.in, and disabled in compiled
 code based on system header defines.

 Also the libcap warning message is a bit misguided. It's not only about
 TPROXY but also about security.

 Regards
 Henrik


 
 



Re: autoconf-refactor and netfilter-based transparent proxy

2010-05-21 Thread Kinkie
On Fri, May 21, 2010 at 5:37 PM, Alex Rousskov
rouss...@measurement-factory.com wrote:
 On 05/21/2010 06:43 AM, Kinkie wrote:

 To avoid the risk of this happening again, I'll do more frequent interim
 merges, using trunk as the baseline for the output.

 Any objections?

 I do not really know what you mean. If you are comfortable with your
 changes, you post them for review, and nobody objects to a commit, then
 you should commit your changes to trunk.

 IMO, we should not post and commit half-baked or poorly-scoped parts of
 a bigger project because they would be difficult to review without
 proper context. Reviewing large, complex changes is also difficult but I
 would still prefer to review and commit self-contained changes.

As the autoconf-refactor branch should imply little or no visible change,
the idea is simply to use a consistent baseline: trunk. Diffing
include/autoconf.h and the output of the configure script should be
enough to verify that the objective is met, but not if the branches
diverge too much.
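A minimal sketch of that comparison, assuming trunk and the refactor branch
are built in two sibling directories (the paths here are made up):

  # compare the generated config header between the two builds
  diff -u trunk-build/include/autoconf.h refactor-build/include/autoconf.h
  # and compare what configure itself reports
  (cd trunk-build    && ./configure > configure.log 2>&1)
  (cd refactor-build && ./configure > configure.log 2>&1)
  diff -u trunk-build/configure.log refactor-build/configure.log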

This is, at least, the plan. The purpose is for this to remain a
refactoring branch: all merges will have to be able to build, no
question about it. I agree with getting a consensus before merging
anything with user-visible impact (such as the changes to the
authentication helpers build options). I don't expect the merges to be
too frequent, maybe once a week or so.

If you prefer to have a vote on all merges, that's also OK, but it may
slow things down somewhat.

Ciao!
-- 
/kinkie


Re: autoconf-refactor and netfilter-based transparent proxy

2010-05-21 Thread Henrik Nordström
OK for me; recommended, even, since incremental changes are easier to review
and verify. But please remember to visually inspect each change with bzr
diff before committing; that catches most stupid mistakes. This applies to
all of us, myself included.
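For instance, a generic pre-commit check along the lines Henrik suggests
(a sketch only; the commit message is made up):

  bzr status            # list the files touched
  bzr diff | less       # eyeball the actual change before committing
  bzr commit -m "Interim merge from autoconf-refactor"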

  Original message  
From: Kinkie gkin...@gmail.com
Sent: 21 May 2010 05:43 -07:00
To: Henrik Nordström hen...@henriknordstrom.net,  Alex Rousskov 
rouss...@measurement-factory.com,  Squid Developers 
squid-dev@squid-cache.org
Subject: Re: autoconf-refactor and netfilter-based transparent proxy

Hi all.

To avoid the risk of this happening again, I'll do more frequent interim
merges, using trunk as the baseline for the output.

Any objections?

Thanks

On 5/10/10, Henrik Nordström hen...@henriknordstrom.net wrote:
 Mon 2010-05-10 at 18:13 +0200, Kinkie wrote:
  revno: 10425
  committer: Francesco Chemolli kin...@squid-cache.org
  branch nick: trunk
  timestamp: Sun 2010-04-25 23:40:51 +0200
  message:
Interim merge from autoconf-refactor feature-branch.
 
  Kinkie, could you please check that netfilter-based interception proxies
  are still supported?

 Will do ASAP (probably tomorrow).

 I have added back the missing define for LINUX_NETFILTER, but this is
 the second odd thing in the autoconf refactor merge. Can you please do a
 full review of your merge to see if there is anything else that's odd?

  It would also be nice to get rid of libcap and TPROXY warnings when the
  user wants just netfilter-based interception proxy support and is
  willing to --disable the rest. In Squid v3.1, we now get these
  irrelevant (for the said configuration) warnings:

 I'll check.

 trunk does not even have a c




Re: autoconf-refactor and netfilter-based transparent proxy

2010-05-21 Thread Henrik Nordström
Fri 2010-05-21 at 09:37 -0600, Alex Rousskov wrote:

 I do not really know what you mean. If you are comfortable with your
 changes, you post them for review, and nobody objects to a commit, then
 you should commit your changes to trunk.

In this case I am happy to have Kinkie review his own changes. It's not
likely that any of the rest of us will spot issues in proposed autoconf
changes before they hit trunk.

 Reviewing large, complex changes is also difficult but I
 would still prefer to review and commit self-contained changes.

Sure. Any change which goes to trunk should be self-contained and not
require further changes to actually work the way intended.

Regards
Henrik



Fwd: APT do not work with Squid as a proxy because of pipelining default

2010-05-21 Thread Luigi Gangitano
Hi guys,
I'm sorry to bother you here but a long thread on pipelining support is going 
on on debian-devel. You may wish to add some comments there. :-)

Thread start can be found here:

  http://lists.debian.org/debian-devel/2010/05/msg00494.html

Thanks,

L

Begin forwarded message:

 Resent-From: debian-de...@lists.debian.org
 From: Petter Reinholdtsen p...@hungry.com
 Date: 17 May 2010 07:05:00 GMT+02:00
 To: debian-de...@lists.debian.org
 Subject: APT do not work with Squid as a proxy because of pipelining default
 
 I am bothered by URL: http://bugs.debian.org/56 , and the fact
 that apt(-get,itude) do not work with Squid as a proxy.  I would very
 much like to have apt work out of the box with Squid in Squeeze.  To
 fix it one can either change Squid to work with pipelining the way APT
 uses it, which according to the BTS report the Squid maintainer and
 developers are unlikely to implement any time soon, or change the default
 setting in apt for Acquire::http::Pipeline-Depth to zero (0).  I've
 added a file like this in /etc/apt/apt.conf.d/ to solve it locally:
 
  Acquire::http::Pipeline-Depth "0";
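A minimal sketch of putting that in place; the file name 99squid-pipeline is
hypothetical, since apt reads any file in that directory:

  # 99squid-pipeline is a made-up name; apt reads any file in this directory
  printf 'Acquire::http::Pipeline-Depth "0";\n' > /etc/apt/apt.conf.d/99squid-pipeline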
 
 My question to all of you is simple.  Should the APT default be
 changed or Squid be changed?  Should the bug report be reassigned to
 apt or stay as a bug with Squid?
 
 Happy hacking,
 -- 
 Petter Reinholdtsen

--
Luigi Gangitano -- lu...@debian.org -- gangit...@lugroma3.org
GPG: 1024D/924C0C26: 12F8 9C03 89D3 DB4A 9972  C24A F19B A618 924C 0C26



Re: autoconf-refactor and netfilter-based transparent proxy

2010-05-21 Thread Amos Jeffries

Henrik Nordström wrote:

Fri 2010-05-21 at 09:37 -0600, Alex Rousskov wrote:


I do not really know what you mean. If you are comfortable with your
changes, you post them for review, and nobody objects to a commit, then
you should commit your changes to trunk.


In this case I am happy to have Kinkie review his own changes. It's not
likely that any of the rest of us will spot issues in proposed autoconf
changes before they hit trunk.


Reviewing large, complex changes is also difficult but I
would still prefer to review and commit self-contained changes.


Sure. Any change which goes to trunk should be self-contained and not
require further changes to actually work the way intended.

Regards
Henrik



Agreed on both.

Kinkie, perhaps if you aimed to do audit submissions just prior to the
weekends, with a short summary of which configure options have been
touched, we could synchronize a short period for us to test before the
change gets committed to trunk for full use. The full 10-day (or more)
delay only happens if another dev can't +1 the change.


Though of course, if there are any doubts in your mind about a change,
skip a week's submission rather than rushing an incompletely tested
change in.


The last two bumps were annoying to some, yes. But they were to be
expected with such a low-level set of changes, and had less impact than
I was personally expecting to have to deal with, given the fallout we had
on previous attempts.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.3


Re: Fwd: APT do not work with Squid as a proxy because of pipelining default

2010-05-21 Thread Amos Jeffries

Luigi Gangitano wrote:

Hi guys,
I'm sorry to bother you here but a long thread on pipelining support is going 
on on debian-devel. You may wish to add some comments there. :-)

Thread start can be found here:

  http://lists.debian.org/debian-devel/2010/05/msg00494.html

Thanks,

L

Begin forwarded message:


Resent-From: debian-de...@lists.debian.org
From: Petter Reinholdtsen p...@hungry.com
Date: 17 May 2010 07:05:00 GMT+02:00
To: debian-de...@lists.debian.org
Subject: APT do not work with Squid as a proxy because of pipelining default

I am bothered by URL: http://bugs.debian.org/56 , and the fact
that apt(-get,itude) do not work with Squid as a proxy.  I would very
much like to have apt work out of the box with Squid in Squeeze.  To
fix it one can either change Squid to work with pipelining the way APT
uses it, which according to the BTS report the Squid maintainer and
developers are unlikely to implement any time soon, or change the default
setting in apt for Acquire::http::Pipeline-Depth to zero (0).  I've
added a file like this in /etc/apt/apt.conf.d/ to solve it locally:

 Acquire::http::Pipeline-Depth "0";

My question to all of you is simple.  Should the APT default be
changed or Squid be changed?  Should the bug report be reassigned to
apt or stay as a bug with Squid?



Thanks Luigi, you may have to relay this back to the list. I can't seem 
to post a reply to the thread.



 I looked at that Debian bug a while back when first looking at
 optimizing the request parsing for Squid, with the thought of increasing
 the Squid threshold for pipelined requests, as many are suggesting.



There were a few factors which have so far crushed the idea of solving 
it in Squid alone:


 * Replies with unknown length need to have end-of-data signalled by
 closing the client TCP link.


 * The IIS and ISA servers' behaviour on POST requests with auth or
 suchlike, as outlined in our bug
 http://bugs.squid-cache.org/show_bug.cgi?id=2176, causes the same sort
 of problem as above, even if the connection could otherwise be kept alive.


 This hits a fundamental flaw in pipelining which Robert Collins 
alluded to but did not explicitly state: that closing the connection 
will erase any chance of getting replies to the following pipelined 
requests. Apt is not alone in failing to re-try unsatisfied requests via 
a new connection.


  Reliable pipelining in Squid requires avoiding the closing of
 connections. HTTP/1.1 and chunked encoding show a lot of promise here
 but still require a lot of work to get going.



As noted by David Kalnischkies in 
http://lists.debian.org/debian-devel/2010/05/msg00666.html the Squid 
currently in Debian can be configured trivially to pipeline 2 requests 
concurrently, plus a few more requests in the networking stack buffers 
which will be read in by Squid once the first pipelined request is 
completed.
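For reference, the trivial configuration presumably meant here is Squid's
pipeline_prefetch directive (a sketch, not quoted from the thread):

  # in squid.conf: let Squid read ahead one extra pipelined request
  pipeline_prefetch on

  # then tell a running Squid to pick up the change
  squid -k reconfigure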


 A good solution seems to me to involve fixes on both sides: altering
 the default apt configuration down to a number where the pipeline
 timeouts/errors are less likely to occur (as noted by people around the
 web, 1-5 seems to work better than 10, and 0 or 1 works flawlessly for
 most), while we work on getting Squid to do more persistent connections
 and faster request handling.


Amos
Squid Proxy Project


Re: Poll: Which bzr versions are you using?

2010-05-21 Thread Alex Rousskov
On 05/19/2010 06:25 PM, Henrik Nordström wrote:
 For repository maintenance reasons I need to know which minimum bzr
 version all who work with the Squid repository need to be able to use.
 And I mean all, including those who do not have direct commit access.
 
 I.e. the output of bzr --version | head -1 on the oldest platform you
 need or want to access the main bzr repository from.

Bazaar (bzr) 1.3.1

Alex.


Re: [DRAFT][MERGE] Cleanup comm outgoing connections in trunk

2010-05-21 Thread Alex Rousskov
On 05/19/2010 07:05 AM, Amos Jeffries wrote:
 Henrik Nordström wrote:
 Tue 2010-05-18 at 23:34, Amos Jeffries wrote:

 I've discovered the VC connections in DNS will need a re-working to
 handle
 the new TCP connection setup handling. I've left that for now since it
 appears that you are working on redesigning that area anyway. The new
 setup
 routines will play VERY nicely with persistent TCP links to the
 nameservers.

 I have not started on the DNS rewrite yet.

 I took some extra time last night and broke the back of the selection
 and
 forwarding rewrite. I'm now down to the fine detail build errors. When
 those are done I'll push the branch to LP for you to do the DNS fixes on
 top of.

 Ok.

 
 Pushed to launchpad:   lp:~yadi/squid/cleanup-comm

How can I review the changes as one patch without checking out your branch?
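One common way to do that with bzr, assuming a bzr version that accepts
branch URLs for diff (lp:squid below just stands in for wherever the trunk
branch lives):

  # produce the whole branch as one patch without checking it out locally
  bzr diff --old lp:squid --new lp:~yadi/squid/cleanup-comm > cleanup-comm.patch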

Thank you,

Alex.


 This builds, but has not yet been run tested.
 
 What has changed:
 
 ConnectionDetails objects have been renamed to Comm::Connection and
 extended to hold the FD and Squid's socket flags.
 
 Peer selection has been extended to do DNS lookups on the peers chosen
 for forwarding, and to produce a vector of possible connection
 endpoints (Squid's local IP via tcp_outgoing_address or tproxy) and the
 remote server.
 
 Various connection openers have been converted to use the new
 ConnectStateData API and CommCalls (function based so far).
 
 
 ConnectStateData has been moved into src/comm/ (not yet namespaced) and
 had all its DNS lookup operations dropped. To be replaced by a looping
 process of attempting to open a socket and join a link as described by
 some Comm::Connection or vector of same.
 
 ConnectStateData::connect() will go away and do some async work. Will
 come back at some point by calling the handler with COMM_OK,
 COMM_ERR_CONNECT, COMM_TIMEOUT and ptrs to the Comm::Connection or
 vector (whichever were passed in).
  On COMM_OK the Comm::Connection pointer or the first entry of the
 vector will be an open conn which we can now use.
  On COMM_ERR_CONNECT the vector will be empty (all tried and
 discarded), and the single ptr will be closed if not NULL.
  On COMM_TIMEOUT their content is as per COMM_ERR_CONNECT but the vector
 may have untried paths still present but closed.
 
 FD opening, FD problems, connection errors, timeouts, early remote
 TCP_RST or NACK closure during the setup are all now wrapped out of
 sight inside ConnectStateData.
 
 The main-level component may set FD handlers as needed for read/write
 and closure of the link in their connection-done handler where the FD
 first becomes visible to them.
 
 
 Besides the testing there is some work to:
  * make it obey squid.conf limits on retries and paths looked up.
  * make DNS TCP links ('VC') work again.
  * make the CommCalls proper AsyncCalls and not function-handler based.
  * make Comm::Connection ref-counted so we can have them stored
in the peer details and further reduce the DNS steps.
  * make ICAP do DNS lookups to set its server Comm::Connection properly.
For now it's stuck with the gethostbyname() blocking lookup.
 
 
 Future work once this is stable is to:
  a) push the IDENT, NAT, EUI and TLS operations down into the Comm layer
 with simple flags for other layers to turn them on/off as desired.
  b) make the general code pass Comm::Connection around so everything
 like ACLs can access the client and server conn when they need to.
 
 Amos



Re: Joining squid-dev List

2010-05-21 Thread Mark Nottingham
Bug w/ patch for 2.HEAD at:
  http://bugs.squid-cache.org/show_bug.cgi?id=2931


On 18/05/2010, at 4:33 PM, Henrik Nordström wrote:

 Tue 2010-05-18 at 15:12 +1000, Mark Nottingham wrote:
 
  /*
   * The 'need_validation' flag is used to prevent forwarding
   * loops between siblings.  If our copy of the object is stale,
   * then we should probably only use parents for the validation
   * request.  Otherwise two siblings could generate a loop if
   * both have a stale version of the object.
   */
  r->flags.need_validation = 1;
 
 Is the code in Squid3 roughly the same?
 
 Should be.
 
 I'm tempted to get rid of the need_validation flag, as there are other
 ways that Squid does loop suppression (e.g., only-if-cached on peer
  requests, icp_stale_hit). What do people think of this? Is this how you
  addressed it?
 
 Don't get rid of the flag, but an option to not skip siblings based on
 it unless the sibling is configured with allow-miss
  (peer->options.allow_miss) is fine.
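For context, a sibling carrying that option looks something like this in
squid.conf (host and ports here are made up):

  # sibling peer that may also be asked for misses, not just hits
  cache_peer sibling1.example.com sibling 3128 3130 allow-miss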
 
  When using ICP or Digests, forwarding loop conditions are quite common,
  triggered by clients sending their own freshness requirements or slight
  differences in configuration between the siblings.
 
 Regards
 Henrik