Re: [Regression in httpd 2.4.52] mod_dav: Potentially unbounded memory usage in PROPFIND with dav_get_props() and dav_propfind_walker()

2022-01-18 Thread Ruediger Pluem



On 1/18/22 2:58 PM, Evgeny Kotkov wrote:
> Ruediger Pluem  writes:
> 
>> Can you please check if the below patch fixes your issue?
> 
> I have to say that the reason and the original intention of using
> resource->pool's userdata here are still somewhat unclear to me.

An application seems to be here:

https://github.com/minfrin/mod_dav_calendar/blob/main/mod_dav_calendar.c#L2170

As far as I understand, the code needs the data stored in the userdata and has
no other means to retrieve it.

> 
> But it does look like the patch performs the allocation only once per
> resource->pool, and so it should fix the unbounded memory usage issue.

r1897182

Regards

Rüdiger




Re: [Regression in httpd 2.4.52] mod_dav: Potentially unbounded memory usage in PROPFIND with dav_get_props() and dav_propfind_walker()

2022-01-18 Thread Evgeny Kotkov
Ruediger Pluem  writes:

> Can you please check if the below patch fixes your issue?

I have to say that the reason and the original intention of using
resource->pool's userdata here are still somewhat unclear to me.

But it does look like the patch performs the allocation only once per
resource->pool, and so it should fix the unbounded memory usage issue.

Thanks!


Regards,
Evgeny Kotkov


Re: [Regression in httpd 2.4.52] mod_dav: Potentially unbounded memory usage in PROPFIND with dav_get_props() and dav_propfind_walker()

2022-01-14 Thread Ruediger Pluem



On 1/14/22 1:57 PM, Evgeny Kotkov wrote:
> Hi,
> 
> I might have stumbled across a regression in httpd 2.4.52 where mod_dav was
> changed in a way where dav_get_props() now allocates data in resource->pool.
> 
> I think that r1879889 [1] is the change that is causing the new behavior.
> This change has been backported to 2.4.x in r1895893 [2].
> 
> Consider the new part of the dav_get_props() function:
> 
> DAV_DECLARE(dav_get_props_result) dav_get_props()
> {
>…
>    element = apr_pcalloc(propdb->resource->pool, sizeof(dav_liveprop_elem));
>    element->doc = doc;
>…
> 
> This code unconditionally allocates data in resource->pool (this is the only
> such place, other allocations performed by this function happen in propdb->p).
> 
> The dav_get_props() is called by dav_propfind_walker().  The walker function
> is driven by a specific provider.  Both of the two implementations known to
> me, mod_dav_fs and mod_dav_svn, happen to use a long-living pool as the
> resource->pool during the walk.
> 
> I think that with this change, the PROPFIND walks are going to make O(N)
> allocations for O(N) walked items — which is an unbounded memory usage
> and a regression, compared to 2.4.51.
> 
> [1] https://svn.apache.org/r1879889
> [2] https://svn.apache.org/r1895893

Can you please check if the below patch fixes your issue?

Index: modules/dav/main/props.c
===================================================================
--- modules/dav/main/props.c(revision 1896810)
+++ modules/dav/main/props.c(working copy)
@@ -805,9 +805,15 @@
 /* we lose both the document and the element when calling (insert_prop),
  * make these available in the pool.
  */
-element = apr_pcalloc(propdb->resource->pool, sizeof(dav_liveprop_elem));
+element = dav_get_liveprop_element(propdb->resource);
+if (!element) {
+element = apr_pcalloc(propdb->resource->pool, sizeof(dav_liveprop_elem));
+apr_pool_userdata_setn(element, DAV_PROP_ELEMENT, NULL, propdb->resource->pool);
+}
+else {
+memset(element, 0, sizeof(dav_liveprop_elem));
+}
 element->doc = doc;
-apr_pool_userdata_setn(element, DAV_PROP_ELEMENT, NULL, propdb->resource->pool);

 /* ### NOTE: we should pass in TWO buffers -- one for keys, one for
the marks */


Regards

Rüdiger



Re: [Regression in httpd 2.4.49] mod_dav: REPORT requests no longer return errors

2021-09-23 Thread Ruediger Pluem



On 9/22/21 4:08 PM, Ruediger Pluem wrote:
> 
> 
> On 9/22/21 3:49 PM, Evgeny Kotkov wrote:
>> Ruediger Pluem  writes:
>>
>>> Does the attached patch solve your issue?
>>
>> It does appear to solve the problem with missing errors, thanks!
>>
>> I haven't checked that in detail, but I think there might be a discrepancy
>> in how `err` is handled in the patch and for example when calling the
>> method_precondition() hook.
>>
>> With the patch, `err` is checked even if all hooks DECLINE the operation.
>> Not too sure if that's intended, because the variable could potentially
>> contain an arbitrary value or a leftover from some previous call.
> 
> The new version below should address the case of a leftover value in err
> from calling dav_run_method_precondition, by resetting err to NULL.
> Whether we should ignore err if all hooks return DECLINED but still set
> err, I honestly don't know. I hope someone with more DAV insight can
> comment and tell me what is correct here :-).
> Depending on that, it should be easy to adjust the patch accordingly if needed.

r1893589

Regards

Rüdiger



Re: [Regression in httpd 2.4.49] mod_dav: REPORT requests no longer return errors

2021-09-22 Thread Ruediger Pluem



On 9/22/21 3:49 PM, Evgeny Kotkov wrote:
> Ruediger Pluem  writes:
> 
>> Does the attached patch solve your issue?
> 
> It does appear to solve the problem with missing errors, thanks!
> 
> I haven't checked that in detail, but I think there might be a discrepancy
> in how `err` is handled in the patch and for example when calling the
> method_precondition() hook.
> 
> With the patch, `err` is checked even if all hooks DECLINE the operation.
> Not too sure if that's intended, because the variable could potentially
> contain an arbitrary value or a leftover from some previous call.

The new version below should address the case of a leftover value in err from
calling dav_run_method_precondition, by resetting err to NULL.
Whether we should ignore err if all hooks return DECLINED but still set err,
I honestly don't know. I hope someone with more DAV insight can comment and
tell me what is correct here :-).
Depending on that, it should be easy to adjust the patch accordingly if needed.


Index: modules/dav/main/mod_dav.c
===================================================================
--- modules/dav/main/mod_dav.c  (revision 1893507)
+++ modules/dav/main/mod_dav.c  (working copy)
@@ -4403,10 +4403,32 @@
 /* set up defaults for the report response */
 r->status = HTTP_OK;
 ap_set_content_type(r, DAV_XML_CONTENT_TYPE);
+err = NULL;

 /* run report hook */
 result = dav_run_deliver_report(r, resource, doc,
 r->output_filters, &err);
+if (err != NULL) {
+
+if (! r->sent_bodyct) {
+  /* No data has been sent to client yet;  throw normal error. */
+  return dav_handle_err(r, err, NULL);
+}
+
+/* If an error occurred during the report delivery, there's
+   basically nothing we can do but abort the connection and
+   log an error.  This is one of the limitations of HTTP; it
+   needs to "know" the entire status of the response before
+   generating it, which is just impossible in these streamy
+   response situations. */
+err = dav_push_error(r->pool, err->status, 0,
+ "Provider encountered an error while streaming"
+ " a REPORT response.", err);
+dav_log_err(r, err, APLOG_ERR);
+r->connection->aborted = 1;
+
+return DONE;
+}
 switch (result) {
 case OK:
 return DONE;
@@ -4414,27 +4436,7 @@
 /* No one handled the report */
 return HTTP_NOT_IMPLEMENTED;
 default:
-if ((err) != NULL) {
-
-if (! r->sent_bodyct) {
-  /* No data has been sent to client yet;  throw normal error. */
-  return dav_handle_err(r, err, NULL);
-}
-
-/* If an error occurred during the report delivery, there's
-   basically nothing we can do but abort the connection and
-   log an error.  This is one of the limitations of HTTP; it
-   needs to "know" the entire status of the response before
-   generating it, which is just impossible in these streamy
-   response situations. */
-err = dav_push_error(r->pool, err->status, 0,
- "Provider encountered an error while streaming"
- " a REPORT response.", err);
-dav_log_err(r, err, APLOG_ERR);
-r->connection->aborted = 1;
-
-return DONE;
-}
+return DONE;
 }

 return DONE;





Re: [Regression in httpd 2.4.49] mod_dav: REPORT requests no longer return errors

2021-09-22 Thread Evgeny Kotkov
Ruediger Pluem  writes:

> Does the attached patch solve your issue?

It does appear to solve the problem with missing errors, thanks!

I haven't checked that in detail, but I think there might be a discrepancy
in how `err` is handled in the patch and for example when calling the
method_precondition() hook.

With the patch, `err` is checked even if all hooks DECLINE the operation.
Not too sure if that's intended, because the variable could potentially
contain an arbitrary value or a leftover from some previous call.


Thanks,
Evgeny Kotkov


Re: [Regression in httpd 2.4.49] mod_dav: REPORT requests no longer return errors

2021-09-21 Thread Ruediger Pluem
Does the attached patch solve your issue?

Regards

Rüdiger

On 9/20/21 8:01 PM, Evgeny Kotkov wrote:
> Hi,
> 
> I think that I have stumbled across a significant regression in httpd 2.4.49
> where mod_dav has been changed in a way that makes it ignore the errors
> returned by a versioning provider during REPORT requests.
> 
> I haven't checked that in detail, but I think that r1892513 [1] is the change
> that is causing the new behavior.
> 
> Consider the new core part of the dav_method_report() function:
> 
> …
> result = dav_run_deliver_report(r, resource, doc,
> r->output_filters, &err);
> switch (result) {
> case OK:
> return DONE;
> case DECLINED:
> /* No one handled the report */
> return HTTP_NOT_IMPLEMENTED;
> default:
> if ((err) != NULL) {
> … handle the error
> }
> 
> Assuming there are no deliver_report hooks, this code is going to call
> dav_core_deliver_report(), whose relevant part is as follows:
> 
> …
> if (vsn_hooks) {
> *err = (*vsn_hooks->deliver_report)(r, resource, doc,
> r->output_filters);
> return OK;
> }
> …
> 
> Here the "return OK" part skips the error handling on the calling site,
> even if there is an associated error returned in *err.
> 
> In turn, this causes the following regression:
> 
> - For a failed request where the server hasn't started sending the response,
>   it is now going to erroneously send a 200 OK with an empty body instead of
>   an error response with the appropriate HTTP status.
> 
> - For a failed request where the server has started sending the response body,
>   it is now going to handle it as if it was successfully completed instead of
>   aborting the connection.  The likely outcome of this is that the server is
>   going to send a truncated payload with a 2xx status indicating success.
> 
> This regression can cause unexpected behavior of the Subversion clients,
> for example in cases where the (ignored) error occurred due to a 
> non-successful
> authorization check.  Other DAV clients may be susceptible to some kinds of
> unexpected behavior as well.
> 
> [1] https://svn.apache.org/r1892513
> 
> 
> Thanks,
> Evgeny Kotkov
> 
Index: modules/dav/main/mod_dav.c
===================================================================
--- modules/dav/main/mod_dav.c	(revision 1893352)
+++ modules/dav/main/mod_dav.c	(working copy)
@@ -4407,6 +4407,27 @@
 /* run report hook */
 result = dav_run_deliver_report(r, resource, doc,
 r->output_filters, &err);
+if (err != NULL) {
+
+if (! r->sent_bodyct) {
+  /* No data has been sent to client yet;  throw normal error. */
+  return dav_handle_err(r, err, NULL);
+}
+
+/* If an error occurred during the report delivery, there's
+   basically nothing we can do but abort the connection and
+   log an error.  This is one of the limitations of HTTP; it
+   needs to "know" the entire status of the response before
+   generating it, which is just impossible in these streamy
+   response situations. */
+err = dav_push_error(r->pool, err->status, 0,
+ "Provider encountered an error while streaming"
+ " a REPORT response.", err);
+dav_log_err(r, err, APLOG_ERR);
+r->connection->aborted = 1;
+
+return DONE;
+}
 switch (result) {
 case OK:
 return DONE;
@@ -4414,27 +4435,7 @@
 /* No one handled the report */
 return HTTP_NOT_IMPLEMENTED;
 default:
-if ((err) != NULL) {
-
-if (! r->sent_bodyct) {
-  /* No data has been sent to client yet;  throw normal error. */
-  return dav_handle_err(r, err, NULL);
-}
-
-/* If an error occurred during the report delivery, there's
-   basically nothing we can do but abort the connection and
-   log an error.  This is one of the limitations of HTTP; it
-   needs to "know" the entire status of the response before
-   generating it, which is just impossible in these streamy
-   response situations. */
-err = dav_push_error(r->pool, err->status, 0,
- "Provider encountered an error while streaming"
- " a REPORT response.", err);
-dav_log_err(r, err, APLOG_ERR);
-r->connection->aborted = 1;
-
-return DONE;
-}
+return DONE;
 }
 
 return DONE;


Re: Regression?

2019-02-19 Thread Stefan Eissing



> Am 18.02.2019 um 23:55 schrieb Gregg Smith :
> 
> When setting a header, it used to be set with the case as configured.
> Now with 2.4.38 it is set in all lower case. Regression?
> 
> Header always set X-Xss-Protection "1; mode=block"
> Result;
> 2.4.37: X-Xss-Protection: 1; mode=block
> 2.4.38: x-xss-protection: 1; mode=block

You are not using an HTTP/2 client by any chance?

> If I'm reading the RFC correctly, sensitivity doesn't matter when parsing the 
> header but the 2.4 docs show it outputting as configured as 2.4 has been 
> prior to .38.
> 
> Cheers
> 
> G



Re: Regression?

2019-02-18 Thread Marion & Christophe JAILLET



On 18/02/2019 at 23:55, Gregg Smith wrote:
When setting a header, it used to be set with the case as configured.
Now with 2.4.38 it is set in all lower case. Regression?


Header always set X-Xss-Protection "1; mode=block"
Result;
2.4.37: X-Xss-Protection: 1; mode=block
2.4.38: x-xss-protection: 1; mode=block

If I'm reading the RFC correctly, sensitivity doesn't matter when 
parsing the header but the 2.4 docs show it outputting as configured 
as 2.4 has been prior to .38.


Cheers

G


Hi,

Everything looks fine to me.
I'm currently working on extending headers.t in order to test things
other than 'set', 'append', 'add', and 'unset'.


If I add a specific test for 'set', with 2.4.39(dev), I get the 
following log:


Header received n°0:
  header:   X-Xss-Protection
  expected: 1; mode=block
  received: 1; mode=block

Response received is:
HTTP/1.1 200 OK
Connection: close
Date: Tue, 19 Feb 2019 05:28:56 GMT
Accept-Ranges: bytes
ETag: "0-52169385a8a8a"
Server: Apache/2.4.39-dev (Unix) OpenSSL/1.1.1
Vary: In-If1
Content-Length: 0
Content-Type: text/html
Last-Modified: Tue, 06 Oct 2015 05:51:24 GMT
Client-Date: Tue, 19 Feb 2019 05:28:56 GMT
Client-Peer: 127.0.0.1:8529
Client-Response-Num: 1
DMMATCH1: 1
X-Xss-Protection: 1; mode=block

ok 372

So, the case looks good to me.


If it helps, I can provide the updated headers.t as-is.
It still needs more cases (and probably some Perl syntax clean-up).

I also plan to update the doc, at least regarding the 'echo' command.
The doc states that in the 'echo' command, 'header' MAY be a regular
expression. In fact it is ALWAYS treated as a regex, and "Header echo x"
echoes every header whose name contains an 'x'. Should you want only 'x',
apparently you need something like "Header echo ^x$".
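A sketch of the anchored form (assuming mod_headers' `Header echo` directive, which this thread is discussing):

```apache
# "Header echo x" would echo every request header whose name contains an 'x'.
# Anchoring the regex restricts the echo to the header literally named "x":
Header echo ^x$
```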



What do you mean by "the 2.4 docs show it outputting as configured as 
2.4 has been prior to .38"?


CJ


[RESOLVED] Re: Regression with CMake Windows builds, 2.4.29 is the last one passing

2018-07-20 Thread Michal Karm
On 07/19/2018 08:18 PM, William A Rowe Jr wrote:
> Hi Michal, thanks for your report.
>
> On Thu, Jul 19, 2018 at 10:36 AM, Michal Karm wrote:
>
> Hi guys,
>
> I would like to ask whether there are any plans on
> accepting [1] patch to 2.4.x as it keeps failing the CMake Windows build.
>
> Furthermore, there is a regression [2] in 2.4.34 CMake Windows build that
> did not
> exist in 2.4.29.
>
> If anyone has a hint or an idea handy to get me unstuck, I'll try to come up
> with a patch.
>
> Thank you for your time
> K.
>
> [1] https://bz.apache.org/bugzilla/show_bug.cgi?id=62190
> 
>
>
> Suggestion adopted, as should have happened in the original backport. The
> changes are now merged complete from trunk.
>  
>
> [2] https://bz.apache.org/bugzilla/show_bug.cgi?id=62557
> 
>
>
> Proposed a solution for you. If this builds clean for you, I'll commit to
> trunk and 2.4. 

Thank you for the hint. Updated patches are on the Bugzilla, including links to
passing builds.



Michal Karm Babacek

-- 
Sent from my Hosaka Ono-Sendai Cyberspace 7




Re: Regression with CMake Windows builds, 2.4.29 is the last one passing

2018-07-19 Thread William A Rowe Jr
FWIW, the correct fix seems to be to fix the regression in
http://svn.apache.org/viewvc?rev=1832609&view=rev rather than the windows
build.

On Thu, Jul 19, 2018, 13:18 William A Rowe Jr  wrote:

> Hi Michal, thanks for your report.
>
> On Thu, Jul 19, 2018 at 10:36 AM, Michal Karm 
> wrote:
>
>> Hi guys,
>>
>> I would like to ask whether there are any plans on
>> accepting [1] patch to 2.4.x as it keeps failing the CMake Windows build.
>>
>> Furthermore, there is a regression [2] in 2.4.34 CMake Windows build that
>> did not
>> exist in 2.4.29.
>>
>> If anyone has a hint or an idea handy to get me unstuck, I'll try to come up
>> with a patch.
>>
>> Thank you for your time
>> K.
>>
>> [1] https://bz.apache.org/bugzilla/show_bug.cgi?id=62190
>
>
> Suggestion adopted, as should have happened in the original backport. The
> changes are now merged complete from trunk.
>
>
>> [2] https://bz.apache.org/bugzilla/show_bug.cgi?id=62557
>
>
> Proposed a solution for you. If this builds clean for you, I'll commit to
> trunk and 2.4.
>


Re: Regression with CMake Windows builds, 2.4.29 is the last one passing

2018-07-19 Thread William A Rowe Jr
Hi Michal, thanks for your report.

On Thu, Jul 19, 2018 at 10:36 AM, Michal Karm 
wrote:

> Hi guys,
>
> I would like to ask whether there are any plans on
> accepting [1] patch to 2.4.x as it keeps failing the CMake Windows build.
>
> Furthermore, there is a regression [2] in 2.4.34 CMake Windows build that
> did not
> exist in 2.4.29.
>
> If anyone has a hint or an idea handy to get me unstuck, I'll try to come up
> with a patch.
>
> Thank you for your time
> K.
>
> [1] https://bz.apache.org/bugzilla/show_bug.cgi?id=62190


Suggestion adopted, as should have happened in the original backport. The
changes are now merged complete from trunk.


> [2] https://bz.apache.org/bugzilla/show_bug.cgi?id=62557


Proposed a solution for you. If this builds clean for you, I'll commit to
trunk and 2.4.


Re: Regression: mod_http2 continues to process abandoned connection

2016-06-18 Thread Michael Kaufmann

Hi Stefan,

Yes, the patch solves the problem for me :-) Thanks a bunch!

Regards,
Michael


Zitat von Stefan Eissing :

Michael, I can reproduce the problem and have a fix. Can you test if
the following patch also solves the problem for you? Thanks!


Index: modules/http2/h2_mplx.c
===================================================================
--- modules/http2/h2_mplx.c (revision 1748955)
+++ modules/http2/h2_mplx.c (working copy)
@@ -436,6 +436,9 @@
 if (stream->input) {
 m->tx_handles_reserved += h2_beam_get_files_beamed(stream->input);
 h2_beam_on_consumed(stream->input, NULL, NULL);
+/* Let anyone blocked reading know that there is no more to come */
+h2_beam_abort(stream->input);
+/* Remove mutex after, so that abort still finds cond to signal */
 h2_beam_mutex_set(stream->input, NULL, NULL, NULL);
 }
 h2_stream_cleanup(stream);






Re: Regression: mod_http2 continues to process abandoned connection

2016-06-18 Thread Stefan Eissing
Michael, I can reproduce the problem and have a fix. Can you test if the
following patch also solves the problem for you? Thanks!

Index: modules/http2/h2_mplx.c
===================================================================
--- modules/http2/h2_mplx.c (revision 1748955)
+++ modules/http2/h2_mplx.c (working copy)
@@ -436,6 +436,9 @@
 if (stream->input) {
 m->tx_handles_reserved += h2_beam_get_files_beamed(stream->input);
 h2_beam_on_consumed(stream->input, NULL, NULL);
+/* Let anyone blocked reading know that there is no more to come */
+h2_beam_abort(stream->input);
+/* Remove mutex after, so that abort still finds cond to signal */
 h2_beam_mutex_set(stream->input, NULL, NULL, NULL);
 }
 h2_stream_cleanup(stream);

> Am 17.06.2016 um 16:40 schrieb Michael Kaufmann :
> 
> Hi,
> 
> I have found a regression in mod_http2. When the client stops sending data 
> and closes the connection, mod_http2 doesn't detect that the client has left 
> and continues to "read" request data (until the request times out because of 
> the server's TimeOut value).
> 
> The bug has been introduced with mod_http2 version 1.5.8 (SVN 1747532). It is 
> also present in the httpd 2.4.21 release candidate. mod_http2 version 1.5.7 
> (SVN 1747194) works.
> 
> 
> How to reproduce:
> 
> curl --http2 -k -v --data-binary @bigfile.dat --limit-rate 1 
> https://http2-enabled-apache-server/
> 
> ... then kill the curl process with "kill "
> 
> 
> Log messages:
> 
> h2_session.c(1827): h2_session(66): NO_IO event, 1 streams open
> h2_session.c(1691): AH03078: h2_session(66): transit [BUSY] -- no io --> 
> [WAIT]
> h2_session.c(2260): h2_session: wait for data, 20 micros
> h2_mplx.c(775): h2_mplx(66): trywait on data for 200.00 ms)
> h2_session.c(1691): AH03078: h2_session(66): transit [WAIT] -- wait cycle --> 
> [BUSY]
> h2_filter.c(113): core_input(66): read, NONBLOCK_READ, mode=0, readbytes=8000
> h2_filter.c(164): (104)Connection reset by peer: AH03046: h2_conn_io: error 
> reading
> h2_session.c(1576): (104)Connection reset by peer: h2_session(66): input gone
> h2_session.c(1777): h2_session(66): conn error -> shutdown
> h2_session.c(789): h2_session(66): malloc(120)
> h2_session.c(643): AH03068: h2_session(66): sent FRAME[GOAWAY[error=0, 
> reason='', last_stream=1]], frames=3/3 (r/s)
> h2_session.c(799): h2_session(66): free()
> h2_session.c(799): h2_session(66): free()
> h2_conn_io.c(289): h2_conn_io: pass_output
> h2_conn_io.c(124): bb_dump(66-0)-master conn pass: heap[17] flush
> h2_conn_io.c(309): (32)Broken pipe: AH03044: h2_conn_io(66): pass_out brigade 
> 17 bytes
> h2_session.c(740): AH03069: session(66): sent GOAWAY, err=0, msg=
> h2_session.c(1691): AH03078: h2_session(66): transit [BUSY] -- local goaway 
> --> [LSHUTDOWN]
> h2_mplx.c(1356): h2_mplx(66): dispatch events
> h2_session.c(1827): h2_session(66): NO_IO event, 1 streams open
> h2_session.c(1691): AH03078: h2_session(66): transit [LSHUTDOWN] -- no io --> 
> [WAIT]
> h2_session.c(2260): h2_session: wait for data, 20 micros
> h2_mplx.c(775): h2_mplx(66): trywait on data for 200.00 ms)
> h2_session.c(1691): AH03078: h2_session(66): transit [WAIT] -- wait cycle --> 
> [LSHUTDOWN]
> h2_filter.c(113): core_input(66): read, NONBLOCK_READ, mode=0, readbytes=8000
> h2_filter.c(164): (103)Software caused connection abort: AH03046: h2_conn_io: 
> error reading
> h2_session.c(1576): (103)Software caused connection abort: h2_session(66): 
> input gone
> h2_session.c(1691): AH03078: h2_session(66): transit [LSHUTDOWN] -- conn 
> error --> [DONE]
> h2_mplx.c(1356): h2_mplx(66): dispatch events
> h2_session.c(2312): (70014)End of file found: h2_session(66): [DONE] process 
> returns
> h2_conn_io.c(289): h2_conn_io: pass_output
> h2_conn_io.c(124): bb_dump(66-0)-master conn pass: h2eoc flush
> h2_session.c(963): session(66): cleanup and destroy
> h2_mplx.c(539): h2_mplx(66): release_join with 1 streams open, 0 streams 
> resume, 0 streams ready, 1 tasks
> h2_mplx.c(518): h2_mplx(66-1): exists, started=1, scheduled=1, submitted=0, 
> suspended=0
> h2_mplx.c(402): h2_stream(66-1): done
> h2_mplx.c(567): h2_mplx(66): 2. release_join with 1 streams in hold
> AH03198: h2_mplx(66): release, waiting for 5 seconds now for 1 h2_workers to 
> return, have still 1 tasks outstanding
> ->03198: h2_stream(66-1): POST server /myurl -> ? 0[orph=1/started=1/done=0]
> AH03198: h2_mplx(66): release, waiting for 10 seconds now for 1 h2_workers to 
> return, have still 1 tasks outstanding
> AH03198: h2_mplx(66): release, waiting for 15 seconds now for 1 h2_workers to 
> return, have still 1 tasks outstanding
> AH03198: h2_mplx(66): release, waiting for 20 seconds now for 1 h2_workers to 
> return, have still 1 tasks outstanding
> [...]
> AH03198: h2_mplx(66): release, waiting for 270 seconds now for 1 h2_workers 
> to return, have still 1 tasks outstanding
> AH03198: h2_mplx(66): release, waiting for 275 seconds now 

Re: Regression: mod_http2 continues to process abandoned connection

2016-06-17 Thread Stefan Eissing
Ok, need to verify this tomorrow in order to reproduce and assess. I see from 
your log that the whole thing gets cleaned up eventually. But relying on 
Timeout to save the day is not good...

-Stefan

> Am 17.06.2016 um 16:40 schrieb Michael Kaufmann :
> 
> Hi,
> 
> I have found a regression in mod_http2. When the client stops sending data 
> and closes the connection, mod_http2 doesn't detect that the client has left 
> and continues to "read" request data (until the request times out because of 
> the server's TimeOut value).
> 
> The bug has been introduced with mod_http2 version 1.5.8 (SVN 1747532). It is 
> also present in the httpd 2.4.21 release candidate. mod_http2 version 1.5.7 
> (SVN 1747194) works.
> 
> 
> How to reproduce:
> 
> curl --http2 -k -v --data-binary @bigfile.dat --limit-rate 1 
> https://http2-enabled-apache-server/
> 
> ... then kill the curl process with "kill "
> 
> 
> Log messages:
> 
> h2_session.c(1827): h2_session(66): NO_IO event, 1 streams open
> h2_session.c(1691): AH03078: h2_session(66): transit [BUSY] -- no io --> 
> [WAIT]
> h2_session.c(2260): h2_session: wait for data, 20 micros
> h2_mplx.c(775): h2_mplx(66): trywait on data for 200.00 ms)
> h2_session.c(1691): AH03078: h2_session(66): transit [WAIT] -- wait cycle --> 
> [BUSY]
> h2_filter.c(113): core_input(66): read, NONBLOCK_READ, mode=0, readbytes=8000
> h2_filter.c(164): (104)Connection reset by peer: AH03046: h2_conn_io: error 
> reading
> h2_session.c(1576): (104)Connection reset by peer: h2_session(66): input gone
> h2_session.c(1777): h2_session(66): conn error -> shutdown
> h2_session.c(789): h2_session(66): malloc(120)
> h2_session.c(643): AH03068: h2_session(66): sent FRAME[GOAWAY[error=0, 
> reason='', last_stream=1]], frames=3/3 (r/s)
> h2_session.c(799): h2_session(66): free()
> h2_session.c(799): h2_session(66): free()
> h2_conn_io.c(289): h2_conn_io: pass_output
> h2_conn_io.c(124): bb_dump(66-0)-master conn pass: heap[17] flush
> h2_conn_io.c(309): (32)Broken pipe: AH03044: h2_conn_io(66): pass_out brigade 
> 17 bytes
> h2_session.c(740): AH03069: session(66): sent GOAWAY, err=0, msg=
> h2_session.c(1691): AH03078: h2_session(66): transit [BUSY] -- local goaway 
> --> [LSHUTDOWN]
> h2_mplx.c(1356): h2_mplx(66): dispatch events
> h2_session.c(1827): h2_session(66): NO_IO event, 1 streams open
> h2_session.c(1691): AH03078: h2_session(66): transit [LSHUTDOWN] -- no io --> 
> [WAIT]
> h2_session.c(2260): h2_session: wait for data, 20 micros
> h2_mplx.c(775): h2_mplx(66): trywait on data for 200.00 ms)
> h2_session.c(1691): AH03078: h2_session(66): transit [WAIT] -- wait cycle --> 
> [LSHUTDOWN]
> h2_filter.c(113): core_input(66): read, NONBLOCK_READ, mode=0, readbytes=8000
> h2_filter.c(164): (103)Software caused connection abort: AH03046: h2_conn_io: 
> error reading
> h2_session.c(1576): (103)Software caused connection abort: h2_session(66): 
> input gone
> h2_session.c(1691): AH03078: h2_session(66): transit [LSHUTDOWN] -- conn 
> error --> [DONE]
> h2_mplx.c(1356): h2_mplx(66): dispatch events
> h2_session.c(2312): (70014)End of file found: h2_session(66): [DONE] process 
> returns
> h2_conn_io.c(289): h2_conn_io: pass_output
> h2_conn_io.c(124): bb_dump(66-0)-master conn pass: h2eoc flush
> h2_session.c(963): session(66): cleanup and destroy
> h2_mplx.c(539): h2_mplx(66): release_join with 1 streams open, 0 streams 
> resume, 0 streams ready, 1 tasks
> h2_mplx.c(518): h2_mplx(66-1): exists, started=1, scheduled=1, submitted=0, 
> suspended=0
> h2_mplx.c(402): h2_stream(66-1): done
> h2_mplx.c(567): h2_mplx(66): 2. release_join with 1 streams in hold
> AH03198: h2_mplx(66): release, waiting for 5 seconds now for 1 h2_workers to 
> return, have still 1 tasks outstanding
> ->03198: h2_stream(66-1): POST server /myurl -> ? 0[orph=1/started=1/done=0]
> AH03198: h2_mplx(66): release, waiting for 10 seconds now for 1 h2_workers to 
> return, have still 1 tasks outstanding
> AH03198: h2_mplx(66): release, waiting for 15 seconds now for 1 h2_workers to 
> return, have still 1 tasks outstanding
> AH03198: h2_mplx(66): release, waiting for 20 seconds now for 1 h2_workers to 
> return, have still 1 tasks outstanding
> [...]
> AH03198: h2_mplx(66): release, waiting for 270 seconds now for 1 h2_workers 
> to return, have still 1 tasks outstanding
> AH03198: h2_mplx(66): release, waiting for 275 seconds now for 1 h2_workers 
> to return, have still 1 tasks outstanding
> AH03198: h2_mplx(66): release, waiting for 280 seconds now for 1 h2_workers 
> to return, have still 1 tasks outstanding
> AH03198: h2_mplx(66): release, waiting for 285 seconds now for 1 h2_workers 
> to return, have still 1 tasks outstanding
> h2_task.c(194): (70007)The timeout specified has expired: h2_task(66-1): read 
> returned
> mod_airlock.c(1307): Error reading body data from client (errno = 0)
> h2_from_h1.c(488): h2_from_h1(1): output_filter called
> h2_from_h1.c(551): h2_from_h1(1): removed header filter, passing brig

Re: regression in r1698239

2015-08-28 Thread Stefan Eissing
Works fine here. Thanks!

> Am 28.08.2015 um 15:14 schrieb Eric Covener :
> 
> Thanks -- back in 1698334 with loops that actually move.
> 
> On Fri, Aug 28, 2015 at 8:55 AM, Eric Covener  wrote:
>> On Fri, Aug 28, 2015 at 8:54 AM, Stefan Eissing
>>  wrote:
>>> Change 1698239 brings t/apache/pr17629.t into an endless loop now. I built 
>>> on r1698322 in trunk (OSX + Ubuntu).
>>> 
>>> I revert the change now. A certain "covener" might want to look into this. 
>>> ;-)
>>> 
>>> Cheers,
>> 
>> 
>> Thanks, go ahead and revert.
> 
> 
> 
> -- 
> Eric Covener
> cove...@gmail.com

bytes GmbH
Hafenweg 16, 48155 Münster, Germany
Phone: +49 251 2807760. Amtsgericht Münster: HRB5782





Re: regression in r1698239

2015-08-28 Thread Eric Covener
Thanks -- back in 1698334 with loops that actually move.

On Fri, Aug 28, 2015 at 8:55 AM, Eric Covener  wrote:
> On Fri, Aug 28, 2015 at 8:54 AM, Stefan Eissing
>  wrote:
>> Change 1698239 brings t/apache/pr17629.t into an endless loop now. I built 
>> on r1698322 in trunk (OSX + Ubuntu).
>>
>> I revert the change now. A certain "covener" might want to look into this. 
>> ;-)
>>
>> Cheers,
>
>
> Thanks, go ahead and revert.



-- 
Eric Covener
cove...@gmail.com


Re: regression in r1698239

2015-08-28 Thread Eric Covener
On Fri, Aug 28, 2015 at 8:54 AM, Stefan Eissing
 wrote:
> Change 1698239 brings t/apache/pr17629.t into an endless loop now. I built on 
> r1698322 in trunk (OSX + Ubuntu).
>
> I revert the change now. A certain "covener" might want to look into this. ;-)
>
> Cheers,


Thanks, go ahead and revert.


Re: Regression with range fix

2011-08-31 Thread William A. Rowe Jr.
On 8/31/2011 4:00 PM, Stefan Fritsch wrote:
> On Wednesday 31 August 2011, Joe Orton wrote:
>>
>> Anything else to watch out for?
> 
> c) a request with a byterange beyond the end of the file used to 
> return 416 but now returns 200. This is a violation of a RFC2616 
> SHOULD. We didn't catch this when testing.
> This is how mplayer seems to determine that it has reached the end of 
> file. This seems a rather stupid thing to do unless mplayer assumes 
> that the file may grow. But as it's a SHOULD, we should fix it.

Yup, that's a major flaw.  If one range may be satisfied, we may not
return 416, but must return those ranges which we are able.  If no
ranges are satisfiable, the answer needs to be restored to 416.


Re: Regression with range fix

2011-08-31 Thread Stefan Fritsch
On Wednesday 31 August 2011, Joe Orton wrote:
> On Tue, Aug 30, 2011 at 08:51:55PM +0200, Stefan Fritsch wrote:
> > The first regression report, though slightly too late for the
> > vote:
> > 
> > http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=639825
> > 
> > The byterange_filter.c in the Debian update is exactly the one
> > from 2.2.20. I will keep you updated.
> 
> Hi; I'm just back from holiday and catching up.
> 
> The behaviour changes in the patch which could feasibly break
> non-compliant clients are:
> 
> a) using 200 in some cases where a 206 response would end up being
> larger

For the first user who complained, I think this is the problem. His 
client does a "Range: bytes=0-" request. I suspect the 200 (instead of 
206) response to this request lets the client assume that the server 
does not support ranges at all. I will try to get this verified.

> 
> b) using a chunked response where previously C-L was always used,
> in cases where >=32 ranges are being returned
>
> Anything else to watch out for?

c) a request with a byterange beyond the end of the file used to 
return 416 but now returns 200. This is a violation of an RFC 2616 
SHOULD. We didn't catch this when testing.
This is how mplayer seems to determine that it has reached the end of 
file. This seems a rather stupid thing to do unless mplayer assumes 
that the file may grow. But as it's a SHOULD, we should fix it.


Re: Regression with range fix

2011-08-31 Thread Jim Jagielski

On Aug 31, 2011, at 8:21 AM, Plüm, Rüdiger, VF-Group wrote:

> 
> 
>> -Original Message-
>> From: Joe Orton [mailto:jor...@redhat.com] 
>> Sent: Mittwoch, 31. August 2011 11:13
>> To: dev@httpd.apache.org
>> Subject: Re: Regression with range fix
>> 
>> On Tue, Aug 30, 2011 at 08:51:55PM +0200, Stefan Fritsch wrote:
>>> The first regression report, though slightly too late for the vote:
>>> 
>>> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=639825
>>> 
>>> The byterange_filter.c in the Debian update is exactly the one from 
>>> 2.2.20. I will keep you updated.
>> 
>> Hi; I'm just back from holiday and catching up.
>> 
>> The behaviour changes in the patch which could feasibly break 
>> non-compliant clients are:
>> 
>> a) using 200 in some cases where a 206 response would end up being 
>> larger
>> 
>> b) using a chunked response where previously C-L was always used, in 
>> cases where >=32 ranges are being returned
>> 
>> Anything else to watch out for?
>> 
>> Looking at the patch in 2.2.x; there is a lot of effort expended 
>> dealing with apr_bucket_split() returning ENOTIMPL - that looks 
>> unnecessary; the filter will only handle brigades containing buckets 
>> with known length, and all such buckets "must" be _split-able.
> 
> So you think we can rip out the whole if (rv == APR_ENOTIMPL) blocks?
> 

Belt and suspenders?



RE: Regression with range fix

2011-08-31 Thread Plüm, Rüdiger, VF-Group
 

> -Original Message-
> From: Joe Orton [mailto:jor...@redhat.com] 
> Sent: Mittwoch, 31. August 2011 11:13
> To: dev@httpd.apache.org
> Subject: Re: Regression with range fix
> 
> On Tue, Aug 30, 2011 at 08:51:55PM +0200, Stefan Fritsch wrote:
> > The first regression report, though slightly too late for the vote:
> > 
> > http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=639825
> > 
> > The byterange_filter.c in the Debian update is exactly the one from 
> > 2.2.20. I will keep you updated.
> 
> Hi; I'm just back from holiday and catching up.
> 
> The behaviour changes in the patch which could feasibly break 
> non-compliant clients are:
> 
> a) using 200 in some cases where a 206 response would end up being 
> larger
> 
> b) using a chunked response where previously C-L was always used, in 
> cases where >=32 ranges are being returned
> 
> Anything else to watch out for?
> 
> Looking at the patch in 2.2.x; there is a lot of effort expended 
> dealing with apr_bucket_split() returning ENOTIMPL - that looks 
> unnecessary; the filter will only handle brigades containing buckets 
> with known length, and all such buckets "must" be _split-able.

So you think we can rip out the whole if (rv == APR_ENOTIMPL) blocks?

Regards

Rüdiger



Re: Regression with range fix

2011-08-31 Thread Joe Orton
On Tue, Aug 30, 2011 at 08:51:55PM +0200, Stefan Fritsch wrote:
> The first regression report, though slightly too late for the vote:
> 
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=639825
> 
> The byterange_filter.c in the Debian update is exactly the one from 
> 2.2.20. I will keep you updated.

Hi; I'm just back from holiday and catching up.

The behaviour changes in the patch which could feasibly break 
non-compliant clients are:

a) using 200 in some cases where a 206 response would end up being 
larger

b) using a chunked response where previously C-L was always used, in 
cases where >=32 ranges are being returned

Anything else to watch out for?

Looking at the patch in 2.2.x; there is a lot of effort expended 
dealing with apr_bucket_split() returning ENOTIMPL - that looks 
unnecessary; the filter will only handle brigades containing buckets 
with known length, and all such buckets "must" be _split-able.

Regards, Joe