Re: [PATCH] Cache-Control overwriting hack

2009-05-05 Thread Mark Nottingham
I'm neutral on this. Chris, what Robert talks about is what I had
originally thought, but I'm not dead-set either way...


On 05/05/2009, at 12:25 PM, Robert Collins wrote:


On Tue, 2009-05-05 at 12:17 +1000, Mark Nottingham wrote:

The functionality? Very much so; I've been thinking about adding this
sort of thing for a while. Very useful if you're running an  
accelerator.


No, a rewrite of the approach - it seems to me that a functional version
many things support > a new version that few things support.

That said, I did have one concern - I think it's clearer to say:
'surrogates use this header, clients get the original cache-control',
than to say:
'surrogates use cache-control, and if there is a header X they replace
cache-control with X'.

The latter will be harder to debug from network traces, I think.

-Rob


--
Mark Nottingham   m...@yahoo-inc.com




Re: Status of Squid 3 on Windows

2009-05-05 Thread Kinkie
Hi all,
  I've also planned to host a shared Windows dev system on eu, but we
need to secure the necessary OS and devkit licenses first.

On Monday, May 4, 2009, Guido Serassio guido.seras...@acmeconsulting.it wrote:
 Hi Alex,

 At 02:13 on 04/05/2009, Alex Rousskov wrote:

 Hi Guido,

     Thanks a lot for the update. It is sad to see the results of many
 years of work dissipating and I sympathize with your situation. I wonder
 whether the demand for Squid on Windows is just not there OR it is the
 current lack of Windows support that makes it difficult for Acme to
 quickly monetize the potential demand. I know very little about folks
 running Windows servers so it is difficult for me to judge what they are
 using and what they want to use.


 Squid on Windows suffers from two big issues: the only available comm loop is 
 select() and the maximum number of FDs is hard-coded to 2048 in the MSVCRT 
 library.
 This limits scalability and restricts its usage to low-end deployments, 
 and getting past this is really not so simple, because very big changes in the 
 comm code are needed.

 My feeling is that Squid on Windows is mainly used in very low-cost 
 deployments, and only a few users would be interested, and then only with a 
 very limited investment.

 I compiled an experimental binary build of Squid 3.0 STABLE 13 and asked on 
 squid-users for people to test it and give some feedback to the list. After two 
 months, the amount of feedback was NONE; I really cannot understand this ...



 Can you say what it would take for you and Acme to resurrect Windows
 support for Squid3? It might be easier for others to pitch in with time,
 money, and/or hardware if the requirements are known. We can post your
 estimates to both mailing lists and see what happens.


 Mainly it is not a money/hardware problem, it's a time/resource problem. I cannot 
 set aside my current activity, but I could try to fix the build environment 
 and afterwards manage the work of some other developer; however, the availability of 
 other human resources for the Windows project is essential.

 Regards

 Guido



 -
 
 Guido Serassio
 Acme Consulting S.r.l. - Microsoft Certified Partner
 Via Lucia Savarino, 1           10098 - Rivoli (TO) - ITALY
 Tel. : +39.011.9530135  Fax. : +39.011.9781115
 Email: guido.seras...@acmeconsulting.it
 WWW: http://www.acmeconsulting.it/



-- 
/kinkie


[PATCH] helper closing sequence cleanup

2009-05-05 Thread Amos Jeffries

This one will definitely need a review and second opinion.

This came up while looking over the helper shutdown sequence to investigate bug 2648.

As I understand it now, the two states for end-of-life are:
 - shutdown :: closure is needed, but for some reason it can't be done 
immediately.

 - closing :: actual comm_close() has been called on the read socket.
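
For orientation, a minimal stand-in for how those two states might map onto the 
per-server fields mentioned elsewhere in this thread (rfd, wfd and the flags bits). 
This is an illustration only, not the real helper_server/helper_stateful_server 
definitions from src/helper.cc:

/* Illustrative stand-in only; the real structs in src/helper.cc carry many
 * more members (request queues, stats counters, cbdata, ...). */
struct HelperServerSketch {
    int rfd;                     /* socket replies are read from; -1 once closed */
    int wfd;                     /* socket requests are written to; -1 once closed */

    struct {
        unsigned int shutdown:1; /* closure wanted, but deferred for now */
        unsigned int closing:1;  /* comm_close() already called on the read socket */
    } flags;
};

/* Squid's comm-layer close; declared here so the later sketches compile. */
extern void comm_close(int fd);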


Assuming both of the above are true, I found a few things not quite right 
with the handling ...


 * line 1553 :: all of the comm_close calls appeared to be matched with 
flags.closing=1  except this one in the stateful delayed shutdown 
closure. This is probably the actual cause of bug 2648.



 * line 1381, 1424 :: The lookups for the next available helper were 
skipping helpers with shutdown pending, but were identifying those currently 
closing as candidates. This leaves the helpers in an error-state closure 
as candidates for use.  It may not be an issue now with immediate comm 
callbacks, but considering the closing state has a potential async step 
in the middle, I don't think it's a good idea to ignore it any longer.



 * helper*Shutdown() :: were marking helpers as non-active before 
checking to see if they had pending work to do, instead of marking only 
the closed helpers as inactive. The effects of this are probably hidden 
behind the comm shutdown closures on reconfigure.



 * line 1011, 1123 :: error-state closures are called directly on the 
FD, but the helper state is not updated to indicate the closure state of 
the helper state object. At the very least flags.closing=1 needs to be 
done here too. I've opted for making these closures part of the 
hard-close (see below)


The rest of the patch consolidates the duplicate code around comm_close() 
into the two states of closure (x2 for the two types of helper):



 helper*DoAbortClose()

Where the read socket for receiving further data goes down first. The 
helper and its pending requests are abandoned. The shutdown terminator 
is pushed to the write socket. The callbacks cleanup the remaining state 
and close the write.


I've pulled that closure code out of the shutdown sequences, which moves 
a step closer to merging the two. It also allows the error-case closures 
noted above to use the full close sequence on the state object, with 
Windows shutdown handling etc., which was missing previously.


 This one has windows shutdown handlers.
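
A rough sketch of that hard-close sequence, using the stand-in type and the 
comm_close() declaration sketched earlier. The function bodies are not part of 
the quoted diff, so this only illustrates the behaviour described above; the 
write-side steps are noted in comments:

/* Hard close: the read socket goes down first and pending replies are
 * abandoned; the write side is finished off later by the write-completion
 * callback, after the shutdown terminator has been sent. */
static void helperDoAbortCloseSketch(HelperServerSketch *srv)
{
    if (srv->flags.closing)
        return;                 /* a close is already in progress */

    srv->flags.closing = 1;     /* record the closure before acting on it */

    if (srv->rfd >= 0)
        comm_close(srv->rfd);   /* abandon whatever replies were still pending */

    /* The real code then queues the shutdown terminator on srv->wfd; its
     * write callback cleans up the remaining state, closes the write side
     * and (for this variant) runs the Windows shutdown handlers. */
}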


 helper*DoGracefulClose()

Where the write socket is closed so no more data can be pushed at the 
helper, but the read socket remains open for the last pending data 
results to come back.


This is used for delayed shutdown completion, and for closure of the 
write socket during read-socket shutdown. It also enforces setting of the 
closing flag when it is the first point of closure.


  This one, I'm not sure why, is missing Windows shutdown 
handlers. The unwrapped comm_close may interfere with the HANDLE closure; 
it may need to become a NOP on Windows.
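
Again for illustration, a matching sketch of the graceful variant, using the same 
stand-in type. The pendingReplies argument is a placeholder for however the real 
code tracks requests already sent to the helper, and the real code may also need 
to allow for rfd and wfd being the same descriptor:

/* Graceful close: stop feeding the helper, but keep the read socket open
 * until every request already sent has been answered. */
static void helperDoGracefulCloseSketch(HelperServerSketch *srv, int pendingReplies)
{
    if (srv->wfd >= 0) {
        comm_close(srv->wfd);   /* no further requests can be queued */
        srv->wfd = -1;
    }

    srv->flags.closing = 1;     /* set even when this is the first point of closure */

    if (pendingReplies == 0 && srv->rfd >= 0)
        comm_close(srv->rfd);   /* nothing left to wait for; the rfd close
                                 * handler frees the server state */
    /* otherwise the read-side callback performs that final close once the
     * last pending reply has arrived. */
}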



Amos
=== modified file 'configure.in'
--- configure.in	2009-04-30 11:51:29 +
+++ configure.in	2009-05-03 10:43:21 +
@@ -65,6 +65,10 @@
 AC_LIBTOOL_DLOPEN
 if test $use_loadable_modules = yes;
 then
+dnl Debian libtool 2.2.6 is broken.
+top_build_prefix=${top_build_dir}
+_AC_HAVE_TOP_BUILD_PREFIX=0
+
 AC_LIBLTDL_CONVENIENCE(lib/libLtdl)
 fi
 AC_PROG_LIBTOOL

=== modified file 'src/helper.cc'
--- src/helper.cc	2009-04-17 22:09:22 +
+++ src/helper.cc	2009-05-05 12:09:27 +
@@ -68,6 +68,30 @@
 static void helperStatefulServerKickQueue(helper_stateful_server * srv);
 static bool helperStartStats(StoreEntry *sentry, void *hlp, const char *label);
 
+/**
+ * Abort reading from this helper immediately.
+ *
+ * This is a sharp short closure for the helper which may be used
+ * for quick closure or error aborting.
+ * Any pending requests are abandoned.
+ *
+ * It's used during helperShutdown, but that is just graceful enough
+ * to schedule the close and wait for existing queues to drain.
+ */
+static void helperDoAbort(helper_server * srv);
+static void helperStatefulDoAbort(helper_stateful_server * srv);
+
+/**
+ * Graceful closure of the helper.
+ * Write socket is closed and helper marked as closing so that
+ * further requests are not queued.
+ *
+ * Pending queued items are currently abandoned (need to fix that),
+ * but requests already sent to the helper are waited for
+ * and the sockets only fully closed when none remain to receive back.
+ */
+static void helperDoGracefulClose(helper_server * srv);
+static void helperStatefulDoGracefulClose(helper_stateful_server * srv);
 
 CBDATA_TYPE(helper);
 CBDATA_TYPE(helper_server);
@@ -419,7 +443,7 @@
 if (srv->flags.reserved == S_HELPER_RESERVED)
 continue;
 
-if (!srv->flags.shutdown)
+if (srv->flags.shutdown || srv->flags.closing)
 continue;
 
 if ((hlp->IsAvailable != NULL) && (srv->data != NULL) &&
@@ -587,7 +611,7 @@
 storeAppendPrintf(sentry, "B = BUSY\n");
 

Re: [PATCH] helper closing sequence cleanup

2009-05-05 Thread Bundle Buggy

Bundle Buggy has detected this merge request.

For details, see: 
http://bundlebuggy.aaronbentley.com/project/squid/request/%3C4A003331.1050406%40treenet.co.nz%3E

Project: Squid


Re: Status of Squid 3 on Windows

2009-05-05 Thread Alex Rousskov
On 05/05/2009 12:51 AM, Kinkie wrote:

   I've also planned to host a shared Windows dev system on eu, but we
 need to secure the necessary OS and devkit licenses first.

Securing somebody who will be working on that windows dev system should
come first, I guess. If nobody is going to be responsible for Squid on
Windows, there is no point in obtaining licenses.

Alex.

 On Monday, May 4, 2009, Guido Serassio guido.seras...@acmeconsulting.it 
 wrote:
 Hi Alex,

 At 02:13 on 04/05/2009, Alex Rousskov wrote:

 Hi Guido,

 Thanks a lot for the update. It is sad to see the results of many
 years of work dissipating and I sympathize with your situation. I wonder
 whether the demand for Squid on Windows is just not there OR it is the
 current lack of Windows support that makes it difficult for Acme to
 quickly monetize the potential demand. I know very little about folks
 running Windows servers so it is difficult for me to judge what they are
 using and what they want to use.


 Squid on Windows suffers from two big issues: the only available comm loop is 
 select() and the maximum number of FDs is hard-coded to 2048 in the MSVCRT 
 library.
 This limits scalability and restricts its usage to low-end 
 deployments, and getting past this is really not so simple, because very big 
 changes in the comm code are needed.

 My feeling is that Squid on Windows is mainly used in very low-cost 
 deployments, and only a few users would be interested, and then only with a 
 very limited investment.

 I compiled an experimental binary build of Squid 3.0 STABLE 13 and asked 
 on squid-users for people to test it and give some feedback to the list. After two 
 months, the amount of feedback was NONE; I really cannot understand this ...



 Can you say what it would take for you and Acme to resurrect Windows
 support for Squid3? It might be easier for others to pitch in with time,
 money, and/or hardware if the requirements are known. We can post your
 estimates to both mailing lists and see what happens.


 Mainly it is not a money/hardware problem, it's a time/resource problem. I 
 cannot set aside my current activity, but I could try to fix the build 
 environment and afterwards manage the work of some other developer; however, the 
 availability of other human resources for the Windows project is essential.

 Regards

 Guido



 -
 
 Guido Serassio
 Acme Consulting S.r.l. - Microsoft Certified Partner
 Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
 Tel. : +39.011.9530135  Fax. : +39.011.9781115
 Email: guido.seras...@acmeconsulting.it
 WWW: http://www.acmeconsulting.it/


 



Re: [PATCH] helper closing sequence cleanup

2009-05-05 Thread Alex Rousskov
On 05/05/2009 06:38 AM, Amos Jeffries wrote:
 This one will definitely need a review and second opinion.
 
 This came up while looking over the helper shutdown sequence to investigate bug 2648.
 
 As I understand it now, the two states for end-of-life are:
  - shutdown :: closure is needed, but for some reason it can't be done
 immediately.
  - closing :: actual comm_close() has been called on the read socket.

Could you please document the corresponding helper_server flags,
n_running, and n_active? And the stats members as well? This will help
those reviewers who do not know the original code intent behind all
those state variables and flags :-).

The patch overall seems to be cleaning up the code quite a bit, thank you.

Should we make _helper_stateful a kid of _helper to avoid duplicate
declarations and reduce the number of things we need to document? This
may also allow you to write one DoAbort method for the two helper kinds.
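
Purely as an illustration of that suggestion (the names below are hypothetical, 
not existing Squid identifiers):

#include <cstddef>

// Hypothetical shape only: a common base carrying the shared descriptor and
// flag state plus the closing interface, with the stateful variant adding
// its reservation-specific extras.
class HelperServerBase
{
public:
    HelperServerBase() : rfd(-1), wfd(-1) {
        flags.shutdown = 0;
        flags.closing = 0;
    }
    virtual ~HelperServerBase() {}

    virtual void doAbortClose() = 0;    // hard close, shared interface
    virtual void doGracefulClose() = 0; // graceful close, shared interface

protected:
    int rfd;
    int wfd;
    struct {
        unsigned int shutdown:1;
        unsigned int closing:1;
    } flags;
};

class StatefulHelperServerSketch : public HelperServerBase
{
public:
    StatefulHelperServerSketch() : reservationData(NULL) {}
    virtual void doAbortClose() { /* stateful-specific hard close */ }
    virtual void doGracefulClose() { /* stateful-specific graceful close */ }

private:
    void *reservationData;              // stands in for the stateful-only members
};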

I am concerned about n_active management in the current code because it
is decremented in helperShutdown even when we call continue and do
almost nothing for that particular link. Your patch seems more logical
than the current code, but still it decrements the n_active counter
before something specific has happened. Perhaps n_active documentation
would clarify that, but maybe we should move n_active change to
helperDoAbort or elsewhere?

Please add a description to helperDoAbort and friends. Can this be
renamed to match shutdown or closing state terminology (e.g.,
helperDoShutdown) or are we doing something different than shutting down
or closing here? I think you have a good beginning of this documentation
in this email already.

Can we remove flags.closing and rely on the corresponding wfd state
instead? Is wfd always set to -1 when we set flags.closing to 1?

What happens to the helper state after we set flags.closing and
comm_close closes the descriptor? We do not have a flags.closed state.
Is that because the helper is completely destroyed when comm_close does
its job and calls us back?

Thank you,

Alex.


 Assuming both of the above are true, I found a few things not quite right
 with the handling ...
 
  * line 1553 :: all of the comm_close calls appeared to be matched with
 flags.closing=1  except this one in the stateful delayed shutdown
 closure. This is probably the actual cause of bug 2648.
 
 
  * line 1381, 1424 :: The lookups for the next available helper were
 skipping helpers with shutdown pending, but were identifying those currently
 closing as candidates. This leaves the helpers in an error-state closure
 as candidates for use.  It may not be an issue now with immediate comm
 callbacks, but considering the closing state has a potential async step
 in the middle, I don't think it's a good idea to ignore it any longer.
 
 
  * helper*Shutdown() :: were marking helpers as non-active before
 checking to see if they had pending work to do, instead of marking only
 the closed helpers as inactive. The effects of this are probably hidden
 behind the comm shutdown closures on reconfigure.
 
 
  * line 1011, 1123 :: error-state closures are called directly on the
 FD, but the helper state is not updated to indicate the closure state of
 the helper state object. At the very least flags.closing=1 needs to be
 done here too. I've opted for making these closures part of the
 hard-close (see below)
 
 The rest of the patch consolidates the duplicate code around comm_close()
 into the two states of closure (x2 for the two types of helper):
 
 
  helper*DoAbortClose()
 
 Where the read socket for receiving further data goes down first. The
 helper and its pending requests are abandoned. The shutdown terminator
 is pushed to the write socket. The callbacks cleanup the remaining state
 and close the write.
 
 I've pulled that closure code out of the shutdown sequences, which moves
 a step closer to merging the two. It also allows the error-case closures
 noted above to use the full close sequence on the state object, with
 Windows shutdown handling etc., which was missing previously.
 
  This one has windows shutdown handlers.
 
 
  helper*DoGracefulClose()
 
 Where the write socket is closed so no more data can be pushed at the
 helper, but the read socket remains open for the last pending data
 results to come back.
 
 This is used for delayed shutdown completion, and for closure of the
 write socket during read-socket shutdown. It also enforces setting of the
 closing flag when it is the first point of closure.
 
   This one, I'm not sure why, is missing Windows shutdown
 handlers. The unwrapped comm_close may interfere with the HANDLE closure;
 it may need to become a NOP on Windows.
 
 
 Amos
 



Re: [PATCH] helper closing sequence cleanup

2009-05-05 Thread Amos Jeffries

Alex Rousskov wrote:

On 05/05/2009 06:38 AM, Amos Jeffries wrote:

This one will definitely need a review and second opinion.

This came up while looking over the helper shutdown sequence to investigate bug 2648.

As I understand it now, the two states for end-of-life are:
 - shutdown :: closure is needed, but for some reason it can't be done
immediately.
 - closing :: actual comm_close() has been called on the read socket.


Could you please document the corresponding helper_server flags,
n_running, and n_active? And the stats members as well? This will help
those reviewers who do not know the original code intent behind all
those state variables and flags :-).


Ah, one of those; yes, it would. I only looked at the shutdown/closing 
pathways. I'll give the running pathways a look as well with that in mind.




The patch overall seems to be cleaning up the code quite a bit, thank you.

Should we make _helper_stateful a kid of _helper to avoid duplicate
declarations and reduce the number of things we need to document? This
may also allow you to write one DoAbort method for the two helper kinds.


Yea that was part of the idea for next step. I didn't go so far as to 
alter the C/C++ nature of the code though.




I am concerned about n_active management in the current code because it
is decremented in helperShutdown even when we call continue and do
almost nothing for that particular link. Your patch seems more logical
than the current code, but still it decrements the n_active counter
before something specific has happened. Perhaps n_active documentation
would clarify that, but maybe we should move n_active change to
helperDoAbort or elsewhere?


One of the first things I looked at was moving it. It's done in the 
current place because the shutdown procedure becomes one-way with the 
next function call at that point.


The n_active-- should all now be after the continue and directly in 
front of the *DoAbort() in helper*Shutdown() (see the sketch below). Is there 
one in particular you noticed still out of place?
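
A self-contained sketch of that ordering; the types and the pendingReplies test 
are stand-ins, not the real helper/helper_server structures, and the real loop 
walks a dlink list:

/* Only the loop shape matters here: the n_active decrement sits after the
 * deferral 'continue' and immediately before the hard close. */
struct SrvSketch {
    SrvSketch *next;
    int pendingReplies;
    struct { unsigned int shutdown:1; unsigned int closing:1; } flags;
};

struct HlpSketch {
    SrvSketch *servers;   /* singly linked, for the sketch only */
    int n_active;
};

static void doAbortCloseSketch(SrvSketch *) { /* hard close, as sketched earlier */ }

static void helperShutdownSketch(HlpSketch *hlp)
{
    for (SrvSketch *srv = hlp->servers; srv; srv = srv->next) {
        if (srv->flags.closing)
            continue;                /* already on its way down */

        if (srv->pendingReplies > 0) {
            srv->flags.shutdown = 1; /* defer: close later, once drained */
            continue;
        }

        --hlp->n_active;             /* leaves service only when the close really happens */
        doAbortCloseSketch(srv);
    }
}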




Please add a description to helperDoAbort and friends. Can this be
renamed to match shutdown or closing state terminology (e.g.,
helperDoShutdown) or are we doing something different than shutting down
or closing here? I think you have a good beginning of this documentation
in this email already.


The docs are all at the point of function declaration. :)

helperDoShutdown is already there and one of the processes that call these.



Can we remove flags.closing and rely on the corresponding wfd state
instead? Is wfd always set to -1 when we set flags.closing to 1?


Did I not comment that some closures are rfd-closed first, others are 
wfd-closed? The helper closing happens both ways depending on whether 
it's an Abort or a Graceful close happening.



What happens to the helper state after we set flags.closing and
comm_close closes the descriptor? We do not have a flags.closed state.
Is that because the helper is completely destroyed when comm_close does
its job and calls us back?


As far as I can tell, this is what happens:

+/**
+ * Abort reading from this helper immediately.
+ *
+ * This is a sharp short closure for the helper which may be used
+ * for quick closure or error aborting.
+ * Any pending requests are abandoned.
+ *
+ * It's used during helperShutdown, but that is just graceful enough
+ * to schedule the close and wait for existing queues to drain.
+ */
+static void helperDoAbort(helper_server * srv);
+static void helperStatefulDoAbort(helper_stateful_server * srv);
+
+/**
+ * Graceful closure of the helper.
+ * Write socket is closed and helper marked as closing so that
+ * further requests are not queued.
+ *
+ * Pending queued items are currently abandoned (need to fix that),
+ * but requests already sent to the helper are waited for
+ * and the sockets only fully closed when none remain to receive back.
+ */
+static void helperDoGracefulClose(helper_server * srv);
+static void helperStatefulDoGracefulClose(helper_stateful_server * srv);

Then the close handler associated with the rfd is *Free.

This is why I named them that way around: marking the wfd and 
closing flags and waiting for the state to fall naturally into unused 
and do its free looks more graceful than closing the read side and abandoning data.


I'd really like these to always be done the same way in a simple Close() 
call, but can't see a good way to do it easily yet.




Thank you,

Alex.



Assuming both of the above are true, I found a few things not quite right
with the handling ...

 * line 1553 :: all of the comm_close calls appeared to be matched with
flags.closing=1  except this one in the stateful delayed shutdown
closure. This is probably the actual cause of bug 2648.


 * line 1381, 1424 :: The lookups for the next available helper were 
skipping helpers with shutdown pending, but were identifying those currently
closing as candidates. This leaves the helpers in an error-state closure
as candidates for use.  It may not be an issue now with immediate comm
callbacks, 

Re: [PATCH] helper closing sequence cleanup

2009-05-05 Thread Alex Rousskov
On 05/05/2009 04:49 PM, Amos Jeffries wrote:
 Alex Rousskov wrote:

 Can we remove flags.closing and rely on the corresponding wfd state
 instead? Is wfd always set to -1 when we set flags.closing to 1?
 
 Did I not comment that some closures are rfd-closed first, others are
 wfd-closed? The helper closing happens both ways depending on whether
 it's an Abort or a Graceful close happening.

I am sorry for not being able to connect the dots faster. I will reshape
the question: Can we remove flags.closing and rely on wfd and rfd state
instead? If one of the descriptors is negative, can we assume that we
are in the flags.closing state?
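
In code terms, the question is roughly the following, using the HelperServerSketch 
stand-in from earlier in the thread; whether the comment's precondition actually 
holds is exactly what is being asked:

/* Sketch of the suggestion: derive the closing state from descriptor
 * state rather than a separate flag.  Only valid if every path that
 * calls comm_close() also resets the corresponding descriptor to -1. */
static bool isClosingSketch(const HelperServerSketch *srv)
{
    return srv->rfd < 0 || srv->wfd < 0;
}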

 What happens to the helper state after we set flags.closing and
 comm_close closes the descriptor? We do not have a flags.closed state.
 Is that because the helper is completely destroyed when comm_close does
 its job and calls us back?
 
 As far as I can tell this 
 
 +/**
 + * Abort reading from this helper immediately.
 + *
 + * This is a sharp short closure for the helper which may be used
 + * for quick closure or error aborting.
 + * Any pending requests are abandoned.
 + *
 + * It's used during helperShutdown, but that is just graceful enough
 + * to schedule the close and wait for existing queues to drain.

The above two paragraphs contradict each other, IMHO. A sharp short
closure with pending requests abandoned cannot be used during a
graceful shutdown that waits for existing queues to drain. What am I
missing here?

 + */
 +static void helperDoAbort(helper_server * srv);
 +static void helperStatefulDoAbort(helper_stateful_server * srv);
 +
 +/**
 + * Graceful closure of the helper.
 + * Write socket is closed and helper marked as closing so that
 + * further requests are not queued.
 + *
 + * Pending queued items are currently abandoned (need to fix that),
 + * but requests already sent to the helper are waited for
 + * and the sockets only fully closed when none remain to receive back.
 + */
 +static void helperDoGracefulClose(helper_server * srv);
 +static void helperStatefulDoGracefulClose(helper_stateful_server * srv);
 
 Then the close handler associated with the rfd is *Free.
 
 This is why I named them that way around: marking the wfd and
 closing flags and waiting for the state to fall naturally into unused
 and do its free looks more graceful than closing the read side and abandoning data.
 
 I'd really like these to always be done the same way in a simple Close()
 call, but can't see a good way to do it easily yet.

Can I interpret the above that flags.closing means we have called
comm_close on at least one of the descriptors?

Thanks,

Alex.


 Assuming both of the above are true, I found a few things not quite right
 with the handling ...

  * line 1553 :: all of the comm_close calls appeared to be matched with
 flags.closing=1  except this one in the stateful delayed shutdown
 closure. This is probably the actual cause of bug 2648.


  * line 1381, 1424 :: The lookups for the next available helper were
 skipping helpers with shutdown pending, but were identifying those currently
 closing as candidates. This leaves the helpers in an error-state closure
 as candidates for use.  It may not be an issue now with immediate comm
 callbacks, but considering the closing state has a potential async step
 in the middle, I don't think it's a good idea to ignore it any longer.


  * helper*Shutdown() :: were marking helpers as non-active before
 checking to see if they had pending work to do, instead of marking only
 the closed helpers as inactive. The effects of this are probably hidden
 behind the comm shutdown closures on reconfigure.


  * line 1011, 1123 :: error-state closures are called directly on the
 FD, but the helper state is not updated to indicate the closure state of
 the helper state object. At the very least flags.closing=1 needs to be
 done here too. I've opted for making these closures part of the
 hard-close (see below)

  The rest of the patch consolidates the duplicate code around comm_close()
 into the two states of closure (x2 for the two types of helper):


  helper*DoAbortClose()

 Where the read socket for receiving further data goes down first. The
 helper and its pending requests are abandoned. The shutdown terminator
 is pushed to the write socket. The callbacks cleanup the remaining state
 and close the write.

  I've pulled that closure code out of the shutdown sequences, which moves
 a step closer to merging the two. It also allows the error-case closures
 noted above to use the full close sequence on the state object, with
 Windows shutdown handling etc., which was missing previously.

  This one has windows shutdown handlers.


  helper*DoGracefulClose()

 Where the write socket is closed so no more data can be pushed at the
 helper, but the read socket remains open for the last pending data
 results to come back.

 This is used for delayed shutdown completion, and for closure of the
 write socket during read socket shutdown. Also enforces mark of the
 closing flag for