Re: [squid-dev] [PATCH] adapting 100-Continue / A Bug 4067 fix

2015-01-07 Thread Tsantilas Christos


On 01/01/2015 01:47 AM, Alex Rousskov wrote:

On 11/09/2014 02:02 PM, Tsantilas Christos wrote:


  void
  Http::Server::processParsedRequest(ClientSocketContext *context)
  {
+if (!buildHttpRequest(context))
+return;
+
+if (Config.accessList.forceRequestBodyContinuation) {
+ClientHttpRequest *http = context->http;
+HttpRequest *request = http->request;
+ACLFilledChecklist bodyContinuationCheck(Config.accessList.forceRequestBodyContinuation, request, NULL);
+if (bodyContinuationCheck.fastCheck() == ACCESS_ALLOWED) {
+debugs(33, 5, "Body Continuation forced");
+request->forcedBodyContinuation = true;



The HTTP code above sends 100-Continue responses to HTTP GET messages
unless the admin is very careful with the ACLs. This can be reproduced
trivially with

   force_request_body_continuation allow all

We should not evaluate force_request_body_continuation if the request
does not have a body IMO. The force_request_body_continuation
documentation makes that option specific to upload requests. If you
agree, please adjust the committed code accordingly.


:-(

I am attaching a patch which fixes this. The patch is attached both in 
normal form and as a -b diff, to allow Squid developers to examine the 
proposed changes.
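The gist of the fix can be sketched standalone. The types and the aclAllows() helper below are hypothetical stand-ins for Squid's HttpRequest and ACLFilledChecklist machinery, not the actual patch code:

```cpp
#include <cassert>

// Hypothetical stand-in for the relevant bits of Squid's HttpRequest.
struct FakeRequest {
    bool expects100Continue = false;   // request sent "Expect: 100-continue"
    bool forcedBodyContinuation = false;
};

// Stand-in for the force_request_body_continuation ACL check;
// models the "allow all" configuration from the bug report.
static bool aclAllows(const FakeRequest &) { return true; }

// The fix: evaluate the ACL only for requests that actually wait for a
// 100 Continue, so a plain GET never receives the control message even
// under "force_request_body_continuation allow all".
static void maybeForceContinuation(FakeRequest &req) {
    if (!req.expects100Continue)
        return;
    if (aclAllows(req))
        req.forcedBodyContinuation = true;
}
```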


In my patch I am moving the check for the Expect: 100-continue header 
(which validates Expect header values) from clientProcessRequest() in 
client_side.cc to the Http::Server::processParsedRequest method, to 
minimise the required checks.


 I believe this relocation is valid because this check is needed only 
for the HTTP protocol. For FTP, the Expect header is generated by 
Squid itself, so it cannot contain unsupported values.


Alternatively, we could just check whether the Expect: 100-continue 
header exists inside Http::Server::processParsedRequest.
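For reference, the validation being relocated boils down to a case-insensitive comparison against the one supported value. A standalone sketch (supportedExpectValue is a hypothetical helper, not a Squid function):

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Only "100-continue" (compared case-insensitively, as with Squid's
// String::caseCmp) is a supported Expect value; any other value makes
// Squid reply with 417 Expectation Failed.
static bool supportedExpectValue(std::string expect) {
    std::transform(expect.begin(), expect.end(), expect.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    return expect == "100-continue";
}
```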





The similar FTP check seems to be inside the upload-specific code and,
hence, should not need additional "do we expect a body?" guards.


Yep.




Thank you,

Alex.



=== modified file 'src/client_side.cc'
--- src/client_side.cc	2015-01-01 08:57:18 +
+++ src/client_side.cc	2015-01-07 11:28:41 +
@@ -2605,23 +2605,6 @@
 return;
 }
 
-if (request->header.has(HDR_EXPECT)) {
-const String expect = request->header.getList(HDR_EXPECT);
-const bool supportedExpect = (expect.caseCmp("100-continue") == 0);
-if (!supportedExpect) {
-clientStreamNode *node = context->getClientReplyContext();
-clientReplyContext *repContext = dynamic_cast<clientReplyContext *>(node->data.getRaw());
-assert (repContext);
-conn->quitAfterError(request.getRaw());
-repContext->setReplyToError(ERR_INVALID_REQ, Http::scExpectationFailed, request->method, http->uri,
-conn->clientConnection->remote, request.getRaw(), NULL, NULL);
-assert(context->http->out.offset == 0);
-context->pullData();
-clientProcessRequestFinished(conn, request);
-return;
-}
-}
-
 clientSetKeepaliveFlag(http);
 // Let tunneling code be fully responsible for CONNECT requests
 if (http->request->method == Http::METHOD_CONNECT) {

=== modified file 'src/servers/HttpServer.cc'
--- src/servers/HttpServer.cc	2014-12-20 12:12:02 +
+++ src/servers/HttpServer.cc	2015-01-07 11:18:41 +
@@ -248,10 +248,29 @@
 if (!buildHttpRequest(context))
 return;
 
-if (Config.accessList.forceRequestBodyContinuation) {
 ClientHttpRequest *http = context->http;
-HttpRequest *request = http->request;
-ACLFilledChecklist bodyContinuationCheck(Config.accessList.forceRequestBodyContinuation, request, NULL);
+HttpRequest::Pointer request = http->request;
+
+if (request->header.has(HDR_EXPECT)) {
+const String expect = request->header.getList(HDR_EXPECT);
+const bool supportedExpect = (expect.caseCmp("100-continue") == 0);
+if (!supportedExpect) {
+clientStreamNode *node = context->getClientReplyContext();
+quitAfterError(request.getRaw());
+// setLogUri should be called before repContext->setReplyToError
+setLogUri(http, urlCanonicalClean(request.getRaw()));
+clientReplyContext *repContext = dynamic_cast<clientReplyContext *>(node->data.getRaw());
+assert (repContext);
+repContext->setReplyToError(ERR_INVALID_REQ, Http::scExpectationFailed, request->method, http->uri,
+clientConnection->remote, request.getRaw(), NULL, NULL);
+assert(context->http->out.offset == 0);
+context->pullData();
+clientProcessRequestFinished(this, request);
+return;
+}
+}
+
+if (Config.accessList.forceRequestBodyContinuation) {
+ACLFilledChecklist bodyContinuationCheck(Config.accessList.forceRequestBodyContinuation, request.getRaw(), NULL);

Re: [squid-dev] [PATCH] adapting 100-Continue / A Bug 4067 fix

2014-11-10 Thread Tsantilas Christos

patch applied to trunk as revno:13697

Regards,
   Christos

On 11/10/2014 12:14 PM, Amos Jeffries wrote:


On 10/11/2014 11:06 p.m., Tsantilas Christos wrote:

On 11/10/2014 09:36 AM, Amos Jeffries wrote:


On 10/11/2014 10:02 a.m., Tsantilas Christos wrote:

I am re-posting the patch. There are no huge changes.


Looking over this in more detail...

What's the point of having buildHttpRequest() as a separate method
from processRequest()?


It makes the code more readable. It is a private method for internal
use by the Http::Server class. I suggest leaving it as is. We can fix
the method name or its documentation.



The documentation for buildHttpRequest() is wrong; no parsing
takes place there. It is purely post-parse processing of an
already-parsed HTTP request, i.e. a processParsedRequest() action.


HttpRequest::CreateFromUrlAndMethod still does parsing. But yes,
we are using pre-parsed information to build the HTTP request object.


Is the following documentation OK for buildHttpRequest?

/// Handles parsing results and builds an error reply if parsing


I would say "Handles parsing results. May generate then deliver an
error reply to the client if parsing ..."

To make it clear that actual socket I/O might take place.



/// fails, or parses the url and builds the HttpRequest object
/// using parsing results.
/// Returns false if parsing failed, true otherwise.



Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev





Re: [squid-dev] [PATCH] adapting 100-Continue / A Bug 4067 fix

2014-11-09 Thread Tsantilas Christos

I am re-posting the patch.
There are no huge changes.

Regards,
  Christos

On 11/07/2014 10:54 AM, Amos Jeffries wrote:


On 2/11/2014 6:59 a.m., Tsantilas Christos wrote:

Hi all,

This patch is a fix for bug 4067:
http://bugs.squid-cache.org/show_bug.cgi?id=4067

Currently Squid fails to correctly handle 100 Continue
requests/responses when adaptation/ICAP is used. When a
data-upload request enters Squid (e.g. a PUT request with an Expect:
100-continue header), Squid sends the ICAP and HTTP headers to the
ICAP server and gets stuck waiting forever for the request body data.

This patch implements the force_request_body_continuation access
list directive, which controls how Squid handles data upload
requests from HTTP and FTP agents before they send the request body to Squid.

An allow match tells Squid to respond with the HTTP 100 or FTP 150
(Please Continue) control message on its own, before forwarding
the request to an adaptation service or peer. Such a response
usually forces the request sender to proceed with sending the body.
A deny match tells Squid to delay that control response until the
origin server confirms that the request body is needed. Delaying is
the default behavior.
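A configuration that follows this advice would scope the directive to requests that can carry a body rather than using a bare "allow all" (the acl name below is illustrative):

```
# Only upload requests may receive an early HTTP 100 / FTP 150
# "Please Continue"; a bare "allow all" would also match bodyless GETs.
acl uploads method PUT POST
force_request_body_continuation allow uploads
```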

This is a Measurement Factory project



Sorry it's taken so long. The patch looks fine, but it will need a rebase
and re-audit on top of the parser-ng changes that just landed in trunk.

Amos




Adapting 100-continue

Currently Squid fails to correctly handle 100 Continue requests/responses 
when ICAP is used. The problem is discussed in the Squid bugzilla:
 http://bugs.squid-cache.org/show_bug.cgi?id=4067

A short discussion of the problem:
  When a data upload request enters Squid (e.g. a PUT request with an
Expect: 100-continue header), Squid sends the ICAP and HTTP headers to the
ICAP server and gets stuck waiting forever for the request body data.

This patch implements the force_request_body_continuation access list 
directive, which controls how Squid handles data upload requests from HTTP and
FTP agents before they send the request body to Squid.
An allow match tells Squid to respond with the HTTP 100 or FTP 150
(Please Continue) control message on its own, before forwarding the
request to an adaptation service or peer. Such a response usually forces
the request sender to proceed with sending the body. A deny match tells
Squid to delay that control response until the origin server confirms
that the request body is needed. Delaying is the default behavior.

This is a Measurement Factory project
=== modified file 'src/HttpRequest.cc'
--- src/HttpRequest.cc	2014-11-05 10:18:15 +
+++ src/HttpRequest.cc	2014-11-09 10:54:49 +
@@ -95,40 +95,41 @@
 vary_headers = NULL;
 myportname = null_string;
 tag = null_string;
 #if USE_AUTH
 extacl_user = null_string;
 extacl_passwd = null_string;
 #endif
 extacl_log = null_string;
 extacl_message = null_string;
 pstate = psReadyToParseStartLine;
 #if FOLLOW_X_FORWARDED_FOR
 indirect_client_addr.setEmpty();
 #endif /* FOLLOW_X_FORWARDED_FOR */
 #if USE_ADAPTATION
 adaptHistory_ = NULL;
 #endif
 #if ICAP_CLIENT
 icapHistory_ = NULL;
 #endif
 rangeOffsetLimit = -2; //a value of -2 means not checked yet
+forcedBodyContinuation = false;
 }
 
 void
 HttpRequest::clean()
 {
 // we used to assert that the pipe is NULL, but now the request only
 // points to a pipe that is owned and initiated by another object.
 body_pipe = NULL;
 #if USE_AUTH
 auth_user_request = NULL;
 #endif
 safe_free(canonical);
 
 safe_free(vary_headers);
 
 url.clear();
 urlpath.clean();
 
 header.clean();
 
@@ -235,40 +236,42 @@
 adaptHistory_ = aReq->adaptHistory();
 #endif
 #if ICAP_CLIENT
 icapHistory_ = aReq->icapHistory();
 #endif
 
 // This may be too conservative for the 204 No Content case
 // may eventually need cloneNullAdaptationImmune() for that.
 flags = aReq->flags.cloneAdaptationImmune();
 
 errType = aReq->errType;
 errDetail = aReq->errDetail;
 #if USE_AUTH
 auth_user_request = aReq->auth_user_request;
 extacl_user = aReq->extacl_user;
 extacl_passwd = aReq->extacl_passwd;
 #endif
 
 myportname = aReq->myportname;
 
+forcedBodyContinuation = aReq->forcedBodyContinuation;
+
 // main property is which connection the request was received on (if any)
 

Re: [squid-dev] [PATCH] adapting 100-Continue / A Bug 4067 fix

2014-11-09 Thread Amos Jeffries

On 10/11/2014 10:02 a.m., Tsantilas Christos wrote:
 I am re-posting the patch. There are no huge changes.

Looking over this in more detail...

What's the point of having buildHttpRequest() as a separate method from
processRequest()?

The documentation for buildHttpRequest() is wrong; no parsing takes
place there. It is purely post-parse processing of an already-parsed HTTP
request, i.e. a processParsedRequest() action.

I can sort of see a vague use of something like buildHttpRequest() in
ICAP traffic processing, but not in its current form. The logic there
directly dumps HTTP error messages into the client socket FD. Only
when that is fixed will the HttpRequest object creation be re-usable code.

+1 whether you inline the two functions again or not.

Amos