Re: svn commit: r692567 - /httpd/httpd/trunk/modules/filters/mod_charset_lite.c

2008-09-06 Thread Ruediger Pluem



On 09/06/2008 12:21 AM, [EMAIL PROTECTED] wrote:

Author: gregames
Date: Fri Sep  5 15:21:36 2008
New Revision: 692567

URL: http://svn.apache.org/viewvc?rev=692567&view=rev
Log:
PR 45687: Detect and pass along error buckets

Submitted by: Dan Poirier poirier pobox.org 
Reviewed by:  trawick


Modified:
httpd/httpd/trunk/modules/filters/mod_charset_lite.c

Modified: httpd/httpd/trunk/modules/filters/mod_charset_lite.c
URL: 
http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/filters/mod_charset_lite.c?rev=692567&r1=692566&r2=692567&view=diff
==============================================================================
--- httpd/httpd/trunk/modules/filters/mod_charset_lite.c (original)
+++ httpd/httpd/trunk/modules/filters/mod_charset_lite.c Fri Sep  5 15:21:36 2008
@@ -414,6 +414,23 @@
 return rv;
 }
 
+static apr_status_t send_bucket_downstream(ap_filter_t *f, apr_bucket *b)
+{
+    request_rec *r = f->r;
+    conn_rec *c = r->connection;
+    apr_bucket_brigade *bb;
+    charset_filter_ctx_t *ctx = f->ctx;
+    apr_status_t rv;
+
+    bb = apr_brigade_create(r->pool, c->bucket_alloc);


This is bad. Please store the brigade in the filter context and reuse it
by cleaning it after each pass. See also
http://httpd.apache.org/docs/trunk/en/developer/output-filters.html#filtering
The same error is in send_downstream and IMHO they can share the same brigade.
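
For illustration, a minimal sketch of that pattern, assuming a
hypothetical "tmpbb" member added to charset_filter_ctx_t (the actual
fix may look different):

static apr_status_t send_bucket_downstream(ap_filter_t *f, apr_bucket *b)
{
    charset_filter_ctx_t *ctx = f->ctx;
    apr_status_t rv;

    /* create the brigade once and keep it in the filter context */
    if (ctx->tmpbb == NULL) {
        ctx->tmpbb = apr_brigade_create(f->r->pool, f->c->bucket_alloc);
    }
    APR_BRIGADE_INSERT_TAIL(ctx->tmpbb, b);
    rv = ap_pass_brigade(f->next, ctx->tmpbb);
    /* empty the brigade without destroying it, so the next call (or
     * send_downstream, sharing the same brigade) can reuse it */
    apr_brigade_cleanup(ctx->tmpbb);
    if (rv != APR_SUCCESS) {
        ctx->ees = EES_DOWNSTREAM;
    }
    return rv;
}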


+    APR_BRIGADE_INSERT_TAIL(bb, b);
+    rv = ap_pass_brigade(f->next, bb);
+    if (rv != APR_SUCCESS) {
+        ctx->ees = EES_DOWNSTREAM;
+    }
+    return rv;
+}
+
 static apr_status_t set_aside_partial_char(charset_filter_ctx_t *ctx,
                                            const char *partial,
                                            apr_size_t partial_len)



@@ -673,7 +690,7 @@
         }
         b = APR_BRIGADE_FIRST(bb);
         if (b == APR_BRIGADE_SENTINEL(bb) ||
-            APR_BUCKET_IS_EOS(b)) {
+            APR_BUCKET_IS_METADATA(b)) {
             break;
         }
         rv = apr_bucket_read(b, &bucket, &bytes_in_bucket, APR_BLOCK_READ);
@@ -892,6 +909,17 @@
             }
             break;
         }
+        if (APR_BUCKET_IS_METADATA(dptr)) {


Don't we need to handle flush buckets separately and flush our buffer downstream
as far as we can in this case?


+            apr_bucket *metadata_bucket;
+            metadata_bucket = dptr;
+            dptr = APR_BUCKET_NEXT(dptr);
+            APR_BUCKET_REMOVE(metadata_bucket);
+            rv = send_bucket_downstream(f, metadata_bucket);
+            if (rv != APR_SUCCESS) {
+                done = 1;
+            }
+            continue;
+        }
         rv = apr_bucket_read(dptr, &cur_str, &cur_len, APR_BLOCK_READ);
         if (rv != APR_SUCCESS) {
             done = 1;


Regards

Rüdiger




Re: svn commit: r692567 - /httpd/httpd/trunk/modules/filters/mod_charset_lite.c

2008-09-06 Thread Eric Covener
On Sat, Sep 6, 2008 at 4:38 AM, Ruediger Pluem [EMAIL PROTECTED] wrote:

 @@ -673,7 +690,7 @@
          }
          b = APR_BRIGADE_FIRST(bb);
          if (b == APR_BRIGADE_SENTINEL(bb) ||
 -            APR_BUCKET_IS_EOS(b)) {
 +            APR_BUCKET_IS_METADATA(b)) {
              break;
          }
          rv = apr_bucket_read(b, &bucket, &bytes_in_bucket,
                               APR_BLOCK_READ);

 Don't we need to handle flush buckets separately and flush our buffer
 downstream as far as we can in this case?

I think this part is correct in the patch.  This path is used on input
only, and the break-on-metadata will cause us to:

- move remaining input into our filter state
- create a heap bucket containing all the bytes we've xlated so far
  (collected in a char*)
- add the heap bucket onto the (emptied) bb being returned
- add any metadata at the head of the input/filter state to the bb
  being returned

The only thing held onto past the flush would be leftover bytes from
(apr_xlate() == APR_INCOMPLETE), for which it seems better to send
what we had (i.e. a multibyte sequence with a flush bucket separating
its bytes doesn't get flushed).
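
In code form, the last three steps amount to something like this
(helper and variable names are made up for illustration, not taken
from the module):

static void return_xlated_and_metadata(apr_bucket_brigade *out,
                                       const char *xlated, apr_size_t len,
                                       apr_bucket *metadata,
                                       apr_bucket_alloc_t *ba)
{
    if (len > 0) {
        /* a heap bucket copies the buffer (free_func == NULL), so the
         * translation buffer can be reused afterwards */
        APR_BRIGADE_INSERT_TAIL(out,
                apr_bucket_heap_create(xlated, len, NULL, ba));
    }
    /* pass the metadata (EOS/FLUSH) through behind the translated data */
    APR_BUCKET_REMOVE(metadata);
    APR_BRIGADE_INSERT_TAIL(out, metadata);
}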

-- 
Eric Covener
[EMAIL PROTECTED]


Re: svn commit: r692567 - /httpd/httpd/trunk/modules/filters/mod_charset_lite.c

2008-09-06 Thread Ruediger Pluem



On 09/06/2008 01:30 PM, Eric Covener wrote:

On Sat, Sep 6, 2008 at 4:38 AM, Ruediger Pluem [EMAIL PROTECTED] wrote:


@@ -673,7 +690,7 @@
        }
        b = APR_BRIGADE_FIRST(bb);
        if (b == APR_BRIGADE_SENTINEL(bb) ||
-            APR_BUCKET_IS_EOS(b)) {
+            APR_BUCKET_IS_METADATA(b)) {
            break;
        }
        rv = apr_bucket_read(b, &bucket, &bytes_in_bucket,
                             APR_BLOCK_READ);

Don't we need to handle flush buckets separately and flush our buffer
downstream as far as we can in this case?


I think this part is correct in the patch.  This path is used on input
only, and the break-on-metadata will cause us to:

- move remaining input into our filter state
- create a heap bucket containing all the bytes we've xlated so far
  (collected in a char*)
- add the heap bucket onto the (emptied) bb being returned
- add any metadata at the head of the input/filter state to the bb
  being returned

The only thing held onto past the flush would be leftover bytes from
(apr_xlate() == APR_INCOMPLETE), for which it seems better to send
what we had (i.e. a multibyte sequence with a flush bucket separating
its bytes doesn't get flushed).



You are correct. Thanks for explaining. I missed that this only happens
in the input filter. Apart from the fact that the current code flow does
it correctly, as you have explained, flush buckets do not seem to make a
lot of sense in an input brigade anyway.

Regards

Rüdiger






Re: Behaviour of mod_proxy_ajp if CPING/CPONG fails

2008-09-06 Thread Rainer Jung
Rüdiger Plüm wrote:
 
 
 On 09/05/2008 06:21 PM, Mladen Turk wrote:
 Plüm, Rüdiger, VF-Group wrote:
  

 +1 for the concept.
 However, for threaded servers you should call
 ap_proxy_acquire_connection inside the retry loop, because there
 might be available connections inside the pool.

 I don't think that this does what you want. If I simply continue to
 acquire connections from the pool without returning the faulty ones
 back before, other threads might starve because they cannot get
 connections from the reslist any longer (not even faulty ones, which
 they would reopen). If I return the faulty connection to the reslist,
 there is some likelihood that I get the same connection back within
 the next acquire, as the reslist is organized as a stack. IMHO this
 approach would only work if the reslist were organized as a queue,
 which it no longer is, in order to get the ttl feature working
 correctly in conjunction with smax.


 On failure each connection should be released anyhow, so it's a loop
 operation that will either return a connection with a socket
 (potentially valid) or one without a socket for a reconnect; you
 break from the loop in either case.

 while (ap_proxy_acquire_connection) {
     fresh = 0
     if (conn->sock == NULL) {
         fresh = 1
     }
     ap_proxy_determine_connection
     ap_proxy_connect_to_backend
     if (!ajp_handle_cping_cpong) {
         /* CPING/CPONG failed. Mark the connection for closure. */
         conn->close++;
         ap_proxy_release_connection
         if (fresh) {
             /* CPING/CPONG failed on a fresh connection. Bail out. */
             return 503;
         }
     }
     else {
         /* CPING/CPONG OK. */
         break;
     }
 }
 go on with socket
 
 As I said: due to the fact that the reslist is a stack, it
 effectively results in the same thing as my code does. This is
 because the acquire_connection call will get the same faulty (but by
 then closed) connection that the previous ap_proxy_release_connection
 placed back in the reslist.
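
A compressed illustration of that effect with the underlying
apr_reslist calls (sketch only; error handling trimmed):

#include "apr_reslist.h"

/* Illustration only: the reslist is LIFO, so a release followed
 * immediately by an acquire hands the very same resource back. */
static apr_status_t reacquire_same(apr_reslist_t *rl)
{
    void *res;
    apr_status_t rv = apr_reslist_acquire(rl, &res);
    if (rv != APR_SUCCESS) {
        return rv;
    }
    /* ... the resource turns out to be faulty ... */
    apr_reslist_release(rl, res);         /* pushed back on top */
    return apr_reslist_acquire(rl, &res); /* pops the same one again */
}

(apr_reslist_invalidate() would destroy the faulty resource instead of
returning it to the list, forcing a fresh one to be constructed later.)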

Maybe I'm missing something here:

The stack design is useful, because it allows for idle connections to
trigger the idle timeout. The most recently used connection gets reused
first.

But in case the user of the connection knows that it's broken, and
closes it, it would make more sense to put it under the stack, since
it is no longer connected.

So it seems you need a deque (double-ended queue) data structure to be
able to reflect both use cases.
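
As a sketch of that idea (hypothetical types and helpers, not the
apr_reslist API; the reslist would have to grow such a policy):

#include "apr_ring.h"

typedef struct conn_elem conn_elem;
struct conn_elem {
    APR_RING_ENTRY(conn_elem) link;
    int connected;            /* 0 once the connection has been closed */
};
typedef APR_RING_HEAD(conn_deque, conn_elem) conn_deque;

/* acquire: pop the most recently used connection from the top;
 * the caller must ensure the deque is not empty */
static conn_elem *deque_acquire(conn_deque *dq)
{
    conn_elem *e = APR_RING_FIRST(dq);
    APR_RING_REMOVE(e, link);
    return e;
}

/* release: healthy connections go back on top for LIFO reuse and the
 * idle timeout; known-broken ones go to the bottom, so live
 * connections are preferred for reuse */
static void deque_release(conn_deque *dq, conn_elem *e)
{
    if (e->connected) {
        APR_RING_INSERT_HEAD(dq, e, conn_elem, link);
    }
    else {
        APR_RING_INSERT_TAIL(dq, e, conn_elem, link);
    }
}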

Regards,

Rainer


Re: Behaviour of mod_proxy_ajp if CPING/CPONG fails

2008-09-06 Thread Ruediger Pluem



On 09/06/2008 10:54 PM, Rainer Jung wrote:

Rüdiger Plüm wrote:


On 09/05/2008 06:21 PM, Mladen Turk wrote:

Plüm, Rüdiger, VF-Group wrote:
 



while (ap_proxy_acquire_connection) {
    fresh = 0
    if (conn->sock == NULL) {
        fresh = 1
    }
    ap_proxy_determine_connection
    ap_proxy_connect_to_backend
    if (!ajp_handle_cping_cpong) {
        /* CPING/CPONG failed. Mark the connection for closure. */
        conn->close++;
        ap_proxy_release_connection
        if (fresh) {
            /* CPING/CPONG failed on a fresh connection. Bail out. */
            return 503;
        }
    }
    else {
        /* CPING/CPONG OK. */
        break;
    }
}
go on with socket

As I said: due to the fact that the reslist is a stack, it
effectively results in the same thing as my code does. This is
because the acquire_connection call will get the same faulty (but by
then closed) connection that the previous ap_proxy_release_connection
placed back in the reslist.


Maybe I'm missing something here:

The stack design is useful, because it allows for idle connections to
trigger the idle timeout. The most recently used connection gets reused
first.


Exactly.



But in case the user of the connection knows that it's broken, and
closes it, it would make more sense to put it under the stack, since
it is no longer connected.


Why? IMHO this only defers the effort that needs to be done anyway to
create a new TCP connection. And keep in mind that even if I put the
faulty connection back *under* the stack, the next connection on top
of the stack was idle even longer than the faulty one, so it is likely
to be faulty as well. It might be the case, though, that this
faultiness is detected earlier (in the TCP connection check) and thus
our next CPING/CPONG in the loop happens on a fine fresh TCP
connection.
One question, as you are more familiar with the AJP server code on the
Tomcat side: if a connector closes down a connection due to idleness,
does it send any kind of AJP shutdown packet over the TCP connection,
or does it just close the socket as in the HTTP keepalive case?


Regards

Rüdiger