Re: [mp2] message about Invalid command 'PerlSwitches' during make test
> So it would appear that I want to build mod_perl without the
> MP_USE_STATIC=1, is that correct?

That's right. I should have noticed that in your bug report. The static build is not completed yet; I guess we should do more checking. Meanwhile, just remove MP_USE_STATIC=1 and it'll use MP_USE_DSO=1 by default.

I'm thinking about adding some stuff to TestReport, namely httpd -l. But perhaps a list of the .so files in modules/ would also be a good idea?

--Geoff
Flood project: keepalive_begin_conn() reorganized to properly handle transitions (between ssl and non-ssl, or different hosts/ports) and use different state variable [Ref A4]
This new patch for flood_socket_keepalive.c fixes some issues discovered while opening and closing sockets. It is predicated on the changes previously made to the socket structure (see my email with the 6 diffs): the reopen_socket member, now renamed available, takes on an ordered enum of values. The keepalive_begin_conn() function has been modified to properly handle error conditions and the transitions that can occur between pages (between ssl and non-ssl, different hosts, or different ports), which require closing the previous connection.

Other changes: keepalive_end_conn() now contains logic to prevent certain error conditions from crashing the function. The setting of the socket's associated request, which was established in the patch mentioned above, has been moved to the case where sockets are not being opened, since otherwise the open_socket() or ssl_open_socket() function sets the request.

Please review the attached diff, which compares the latest flood_socket_keepalive.c with the version produced by the patch creating the unified socket structure, named flood_socket_keepalive_2.c.

-Norman Tuttle, Developer, OpenDemand Systems ([EMAIL PROTECTED])
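The message doesn't spell out the ordered states of the renamed available member, so here is a minimal sketch of how such a state enum and the reuse-or-close decision in keepalive_begin_conn() might look. All identifiers below (conn_state_t, must_reopen, and the CONN_* values) are hypothetical illustrations, not flood's actual names:

```c
#include <string.h>

/* Hypothetical ordered states for a keep-alive socket; the real flood
 * enum values may differ. */
typedef enum {
    CONN_CLOSED = 0,     /* no usable socket */
    CONN_OPEN_PLAIN,     /* plain (non-ssl) socket open */
    CONN_OPEN_SSL        /* ssl socket open */
} conn_state_t;

typedef struct {
    conn_state_t available;
    const char *host;
    int port;
} keepalive_conn_t;

/* Returns 1 if the existing connection must be closed before the next
 * request: the ssl/non-ssl state differs, or the target host or port
 * changed; returns 0 if the open connection can be reused. */
static int must_reopen(const keepalive_conn_t *c,
                       conn_state_t wanted, const char *host, int port)
{
    if (c->available == CONN_CLOSED)
        return 1;
    if (c->available != wanted)
        return 1;                      /* ssl <-> non-ssl transition */
    if (strcmp(c->host, host) != 0 || c->port != port)
        return 1;                      /* different host or port */
    return 0;
}
```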
Re: Hook ordering
Cliff Woolley wrote:
> Noel and I had a little discussion just now on IRC about hook ordering
> and the fact that in 2.0 we have made the admin's life a little harder
> by hard-coding the ordering of certain modules (e.g. mod_dav vs
> mod_jk2). Basically the problem is that a completely automatic ordering
> is good for the cases that have particular ordering constraints, but
> bad for the cases where the ordering is underconstrained, because the
> admin has no choice as to which of the possible valid orderings is
> used. So we talked about two things in particular that would help:
>
> 1) A means to list all the hooks and which modules had hooked them in
> which order (e.g., httpd --list-hooks) as a diagnostic. For this one,
> it looks like we might need a way to keep track at a global scope what
> hooks there are. Right now, the only list of hooks is static to the
> file the hooks were declared in (a static struct called _hooks in that
> file). Or we could just only allow --list-hooks to list certain hooks
> which are known to the core a priori.

I thought I had this in there already? Long time since I've looked, though.

> 2) A means to restore some control to the admin. Noel suggests
> expanding the current APR_HOOK_FIRST/MIDDLE/LAST etc. scheme with some
> priority bits which could be set by the admin: we could have
> [CLASS|priority]. The current ones, APR_HOOK_FIRST etc., are the CLASS.
> Admins could be given a HookPriority directive by which they could add
> a priority for a given mod+method. So the class would be the high-order
> bits and the priority would be the low-order bits, basically. Thus if
> the admin doesn't use the HookPriority directive, everything works
> as-is. But if they do, it just provides additional control over list
> insertion.

+1! They're already spaced apart somewhat, aren't they?

Cheers, Ben.

--
http://www.apache-ssl.org/ben.html http://www.thebunker.net/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff
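The [CLASS|priority] idea can be sketched numerically. The multiplier, the 0..99 HookPriority range, and the hook_order() helper below are assumptions for illustration only, not anything defined in APR:

```c
/* Sketch of the proposed [CLASS|priority] ordering. cls is one of the
 * existing hook class values (really_first..really_last); prio is a
 * hypothetical per-module HookPriority in 0..99. Multiplication rather
 * than bit-shifting keeps the negative really_first class well-defined
 * and still lets the class dominate the priority. */
enum {
    HOOK_REALLY_FIRST = -10,
    HOOK_FIRST        =   0,
    HOOK_MIDDLE       =  10,
    HOOK_LAST         =  20,
    HOOK_REALLY_LAST  =  30
};

static int hook_order(int cls, int prio)
{
    return cls * 100 + prio;   /* class = high part, priority = low part */
}
```

With prio left at 0 (no HookPriority directive) the relative ordering is exactly today's class ordering, so existing configurations would be unaffected.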
Re: [PATCH] prefork graceful restart fix
Joe Orton wrote:
> Graceful restarts can be really slow in 2.0 with prefork... the parent
> is making a dummy connection on the pod MaxClients times, rather than a
> connection for as many children as it has; is that really necessary?

We've had problems in this area in the past (not necessarily in prefork) due to not being able to aim the dummy connections at specific old-generation server instances. Then we start the new generation before the old generation is completely down, and the old-generation processes hang around indefinitely. If we can ensure that won't happen, then go for it.

This is an area where I think signals are valuable, because they target specific processes. Yes, we've been avoiding signals in 2.0, but IMO that was mostly a workaround for linuxthreads problems. That's not an issue for prefork, and it shouldn't be an issue for the new Linux pthread library either.

Other approaches could work too, like:

1. Send ap_max_daemons_limit dummy connections.
2. Scan the scoreboard. Are all the children gone? Great, on to step 5.
3. Send one dummy connection.
4. Sleep a little, then go back to step 2.
5. Start up the new generation.

With the current design, you shouldn't see more than one connection time out. A minute or more sounds like breakage, or possibly funky sysctl settings for TCP.

Greg
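The five steps above can be sketched as a loop. This is a self-contained toy, not httpd code: scoreboard_children_remaining() and send_pod_dummy_connection() are hypothetical stand-ins stubbed with counters, where the real server would scan the scoreboard and write a byte to the pipe of death:

```c
/* Toy model of the proposed drain loop (steps 2-4 above). */
static int children_left = 3;          /* pretend 3 old children remain */
static int dummy_conns_sent = 0;

/* Stand-in for scanning the scoreboard for old-generation children. */
static int scoreboard_children_remaining(void) { return children_left; }

/* Stand-in for one dummy connection on the pod; here we just pretend
 * exactly one child reads it and exits. */
static void send_pod_dummy_connection(void)
{
    dummy_conns_sent++;
    children_left--;
}

static void drain_old_generation(void)
{
    while (scoreboard_children_remaining() > 0) {  /* step 2 */
        send_pod_dummy_connection();               /* step 3 */
        /* step 4: real code would sleep briefly before rescanning */
    }
    /* step 5: all old children gone; safe to start the new generation */
}
```

The point of the loop is that the number of dummy connections tracks the number of children actually alive, instead of always being MaxClients.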
Re: Hook ordering
On Fri, 17 Oct 2003, Ben Laurie wrote:
> > For this one, it looks like we might need a way to keep track at a
> > global scope what hooks there are. Right now, the only list of hooks
> > is static to the file the hooks were declared in (a static struct
> > called _hooks in that file). Or we could just only allow --list-hooks
> > to list certain hooks which are known to the core a priori.
>
> I thought I had this in there already? Long time since I've looked,
> though.

As best I can tell (which isn't saying too much =), you can list the functions registered for a particular hook, but I can't find a way to get the names of ALL of the hooks in an entire system and then list all the registrants of each of them.

> +1! They're already spaced apart somewhat, aren't they?

Somewhat, yes:

    really_first == -10
    first        ==   0
    middle       ==  10
    last         ==  20
    really_last  ==  30

--Cliff
[1.3 PATCH] another ap_die() issue related to error documents
For ErrorDocument nnn http://url, ap_die() will respond with a 302 redirect, and r->status will be updated to indicate that. But the original error could have been one that keeps us from being able to process subsequent requests on the connection. Setting r->status to REDIRECT keeps us from dropping the connection later, since it hides the nature of the original problem.

Example: client uses HTTP/1.1 to POST a 2MB file, to be handled by a module... the module says no way and returns 413... the admin has ErrorDocument 413 http://file_too_big.html... Apache sends back a 302 with Location: http://file_too_big.html, but since this is HTTP/1.1, Apache then tries to read the next request and blows up (invalid method in request)... Depending on the browser, you may get something odd. Mozilla is still able to display the error document after fetching it with GET. IE in HTTP/1.1 mode displays a "Method Not Implemented" error message with "Invalid method in request" and the first part of the POST body as the method.

(I don't have a standalone patch... this is an addition to a patch posted earlier.)

@@ -1117,7 +1120,15 @@
          * apache code, and continue with the usual REDIRECT handler.
          * But note that the client will ultimately see the wrong
          * status...
+         *
+         * Also, before updating r->status, we may need to ensure that
+         * the connection is dropped.  For example, there may be
+         * unread request body that would confuse us if we try
+         * to read another request.
          */
+        if (ap_status_drops_connection(r->status)) {
+            r->connection->keepalive = -1;
+        }
         r->status = REDIRECT;
         ap_table_setn(r->headers_out, "Location", custom_response);
     }
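For context on what the patch's ap_status_drops_connection() call tests, here is a hedged sketch written from memory of the 1.3 source: statuses after which the request stream can no longer be trusted (e.g. an unread POST body, as in the 413 example above). The exact status list may differ from the real function:

```c
/* Sketch (not the real 1.3 implementation) of a predicate for statuses
 * that should terminate the connection rather than allow keep-alive,
 * because the input stream may be left in an unparseable state. */
static int status_drops_connection(int status)
{
    return status == 400    /* Bad Request */
        || status == 408    /* Request Time-out */
        || status == 411    /* Length Required */
        || status == 413    /* Request Entity Too Large */
        || status == 414    /* Request-URI Too Large */
        || status == 500    /* Internal Server Error */
        || status == 501    /* Not Implemented */
        || status == 503;   /* Service Unavailable */
}
```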
Re: [1.3 PATCH] another ap_die() issue related to error documents
On Friday, October 17, 2003, at 12:27 PM, Jeff Trawick wrote:
> For ErrorDocument nnn http://url, ap_die() will respond with a 302
> redirect, and r->status will be updated to indicate that. But the
> original error could have been one that keeps us from being able to
> process subsequent requests on the connection. Setting r->status to
> REDIRECT keeps us from dropping the connection later, since it hides
> the nature of the original problem. Example: client uses HTTP/1.1 to
> POST a 2MB file, to be handled by a module... the module says no way
> and returns 413... the admin has ErrorDocument 413
> http://file_too_big.html... Apache sends back a 302 with
> Location: http://file_too_big.html, but since this is HTTP/1.1, Apache
> then tries to read the next request and blows up (invalid method in
> request)...

It sends 302? Don't you mean it does a subrequest? I'd hope so.

Anyway, +1 to the patch.

Roy
ab hang
Has anyone seen this? Having checked everything possible short of debugging ab itself, I don't have a clue.

$ ./ab http://127.0.0.1:4874/
This is ApacheBench, Version 2.0.40-dev $Revision: 1.121.2.1 $ apache-2.0
Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright (c) 1998-2002 The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)...apr_poll: The timeout specified has expired (70007)

Meanwhile curl, fetch, wget, and telnet all work just fine for that same URL. This happens on Linux RH 9.0 and FreeBSD 5.1-STABLE, but not on FreeBSD 4.8 or my OS X box. The server in all cases is httpd-2.0.47.

Grisha
Re: [1.3 PATCH] another ap_die() issue related to error documents
On Fri, 17 Oct 2003, Roy T. Fielding wrote:
> It sends 302? Don't you mean it does a subrequest? I'd hope so.
> Anyway, +1 to the patch.

I always thought it did a 302 if the ErrorDocument started with http:// (i.e., was possibly external), but did a subrequest if it did not start with http://. That is, in fact, the documented behavior: http://httpd.apache.org/docs/mod/core.html#errordocument

--Cliff
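The documented split can be shown with two config lines (the paths and hostname here are made up for illustration):

```
# Local path: served via an internal subrequest, so the client still
# sees the original error status (413 here).
ErrorDocument 413 /errors/file_too_big.html

# Full URL: Apache issues a 302 redirect to the external document,
# which is the case the patch above has to handle.
ErrorDocument 404 http://errors.example.com/missing.html
```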