Re: cvs commit: httpd-test/perl-framework/Apache-Test/lib/Apache TestConfig.pm
this change is wrong. please revert and explain what you need so we can
find the right solution.

On 3 Jun 2002 [EMAIL PROTECTED] wrote:

>   jerenkrantz    2002/06/03 11:03:42
>
>   Modified:    perl-framework/Apache-Test/lib/Apache TestConfig.pm
>   Log:
>   Only start one server instance until we need the other one for the
>   proxy tests.
>
>   Revision  Changes  Path
>   1.137     +1 -1    httpd-test/perl-framework/Apache-Test/lib/Apache/TestConfig.pm
>
>   Index: TestConfig.pm
>   ===
>   RCS file: /home/cvs/httpd-test/perl-framework/Apache-Test/lib/Apache/TestConfig.pm,v
>   retrieving revision 1.136
>   retrieving revision 1.137
>   diff -u -r1.136 -r1.137
>   --- TestConfig.pm    20 May 2002 22:25:34 -0000    1.136
>   +++ TestConfig.pm    3 Jun 2002 18:03:42 -0000    1.137
>   @@ -1553,7 +1553,7 @@
>    </IfModule>
>    <IfModule prefork.c>
>   -StartServers         @MaxClients@
>   +StartServers         1
>    MaxClients           @MaxClients@
>    MaxRequestsPerChild  0
>    </IfModule>
Re: cvs commit: httpd-test/perl-framework/Apache-Test/lib/Apache TestConfig.pm
On Mon, 3 Jun 2002, Aaron Bannert wrote: Cliff is always mentioning something like t/TEST -d gdb or something like that. Won't that run in -X mode automatically? yes.
Re: cvs commit: httpd-test/perl-framework/Apache-Test/lib/Apache TestConfig.pm
On Mon, Jun 03, 2002 at 11:21:57AM -0700, Doug MacEachern wrote:
> On Mon, 3 Jun 2002, Aaron Bannert wrote:
> > Cliff is always mentioning something like t/TEST -d gdb or something
> > like that. Won't that run in -X mode automatically?

The reason I don't like that is because if I need to restart the server I
have to quit my gdb. I want my gdb to last longer than the process (so my
breakpoints et al remain the same).

I'm confused why this commit is an issue. None of the other MPMs start
multiple processes - why should prefork? And, it's not like it won't start
multiple processes when it needs to.

-- justin
Re: cvs commit: httpd-test/perl-framework README
On 3 Jun 2002 [EMAIL PROTECTED] wrote:

>   aaron    2002/06/03 11:31:00
>
>   Modified:    perl-framework README
>   Log:
>   Add a note about invoking gdb.

note that this and heaps of other stuff is in
httpd-test/perl-framework/Apache-Test/README, which is where it belongs,
since Apache-Test is the self-contained part that is used to build other
test suites, such as modperl-2.0.
Re: cvs commit: httpd-test/perl-framework/Apache-Test/lib/Apache TestConfig.pm
> The reason I don't like that is because if I need to restart the server I
> have to quit my gdb. I want my gdb to last longer than the process (so my
> breakpoints et al remain the same).
>
> I'm confused why this commit is an issue. None of the other MPMs start
> multiple processes - why should prefork? And, it's not like it won't
> start multiple processes when it needs to.
>
> -- justin

Not that I'm that experienced with the perl-framework over here, but it
would seem to me that it's important to run the tests under typical
environments (i.e. multiple processes). Imagine a deadlocking bug that we
never hit in -X mode.

just my 2c,
-aaron
Re: cvs commit: httpd-test/perl-framework/Apache-Test/lib/Apache TestConfig.pm
On Mon, Jun 03, 2002 at 11:34:39AM -0700, Aaron Bannert wrote:
> Not that I'm that experienced with the perl-framework over here, it would
> seem to me that it's important to run the tests under typical
> environments (ie multiple processes). Imagine a deadlocking bug that we
> never hit in -X mode.

Um, as I pointed out, none of the other MPMs are configured like this.
Only prefork would start multiple servers. The others always run under a
single process.

-- justin
Re: cvs commit: httpd-test/perl-framework/Apache-Test/lib/Apache TestConfig.pm
On Mon, 3 Jun 2002, Justin Erenkrantz wrote:
> The reason I don't like that is because if I need to restart the server I
> have to quit my gdb. I want my gdb to last longer than the process (so my
> breakpoints et al remain the same).

you can use the -maxclients option or edit httpd.conf by hand before you
start to debug.

> I'm confused why this commit is an issue. None of the other MPMs start
> multiple processes - why should prefork?

it breaks any sort of proxy tests, various modperl tests, etc. your change
is just plain wrong, back it out.

> And, it's not like it won't start multiple processes when it needs to.
> -- justin

umm, not with MaxClients 1 it won't
Re: cvs commit: httpd-test/perl-framework/Apache-Test/lib/Apache TestConfig.pm
On Mon, 3 Jun 2002, Justin Erenkrantz wrote:
> Um, as I pointed out, none of the other MPMs are configured like this.
> Only prefork would start multiple servers. The others always run under a
> single process. -- justin

yeah, cos threaded mpms can handle concurrent requests with one process,
prefork cannot.
Re: cvs commit: httpd-test/perl-framework/Apache-Test/lib/Apache TestConfig.pm
On Mon, Jun 03, 2002 at 11:31:54AM -0700, Doug MacEachern wrote:
> On Mon, 3 Jun 2002, Justin Erenkrantz wrote:
> > The reason I don't like that is because if I need to restart the server
> > I have to quit my gdb. I want my gdb to last longer than the process
> > (so my breakpoints et al remain the same).
>
> you can use the -maxclients option or edit httpd.conf by hand before you
> start to debug.
>
> > I'm confused why this commit is an issue. None of the other MPMs start
> > multiple processes - why should prefork?
>
> it breaks any sort of proxy tests, various modperl tests, etc. your
> change is just plain wrong, back it out.
>
> > And, it's not like it won't start multiple processes when it needs to.
> > -- justin
>
> umm, not with MaxClients 1 it won't

Um, I think you misread my commit. All I changed was StartServers.

    <IfModule prefork.c>
    StartServers         1
    MaxClients           @MaxClients@
    MaxRequestsPerChild  0
    </IfModule>

MaxClients remained the same. In fact, I think it should be:

    <IfModule prefork.c>
    StartServers         1
    MinSpareServers      1
    MaxSpareServers      1
    MaxClients           @MaxClients@
    MaxRequestsPerChild  0
    </IfModule>

Now, Aaron mentioned that perhaps we should always run with multiple
processes regardless of the MPM. I could agree with that. But, what we had
was inconsistent.

-- justin
Re: cvs commit: httpd-test/perl-framework/Apache-Test/lib/Apache TestConfig.pm
On Mon, 3 Jun 2002, Doug MacEachern wrote:
> umm, not with MaxClients 1 it won't

oh wait, you changed StartServers not MaxClients, maybe that isn't a
problem.
Re: cvs commit: httpd-test/perl-framework/Apache-Test/lib/Apache TestConfig.pm
On Mon, 3 Jun 2002, Justin Erenkrantz wrote:
> Um, I think you misread my commit. All I changed was StartServers.

totally, i only read "- @MaxClients@" / "+ 1", never even saw StartServers.
disregard my comments, they were meant for MaxClients. your change is fine
with me.
Re: cvs commit: httpd-test/specweb99/specweb99-2.0 mod_specweb99.c
Brian Pane wrote:
> [EMAIL PROTECTED] wrote:
> > gregames    2002/06/03 11:05:50
> > Modified:    specweb99/specweb99-2.0 mod_specweb99.c
>
> BTW, does anyone have SPECweb results for 2.0 that they're able to
> discuss?

Not that can be published according to the SPEC rules, or are worth
publishing for that matter. You have to run the thing for a really long
time to meet the rules, and of course the server machine becomes a basket
case if you give it enough workload. So I cut the run time parameters way
down to maintain my sanity while doing development, which invalidates the
results.

But I can mention that my very unofficial mini-SPECweb99 runs, with the
client and server both on my ThinkPad and 100% standard dynamic GETs*, show
that prefork is the fastest, worker is about 1% slower, and leader is about
another 1.5% slower. This is a noticeable improvement from when I started
on specweb - worker was maybe 10% slower at that time, and leader had a
compile error. If I were running the client and server on separate boxes,
the differences would probably be larger.

Greg

* the standard dynamic GETs wrap a few lines of dynamically generated html
around a static file. These make up 12.5% of the official SPECweb99
workload; 70% is pure static requests; the remainder consists of more CPU
intensive types of dynamic requests.
help with apache configuration logic...
Hello Apache-people,

I'm in the process of porting to apache 2 a module I developed for apache
1.3. The 'mod_macro' module adds macro definition capabilities to apache
configuration files. Macros are expanded on the fly and parsed.

With apache 1.3, I needed an initialization phase each time a new
configuration cycle is started, as there are two analyses of the
configuration file each time apache is launched. The hack I found was to
notice that the temporary pool had changed, and then to re-initialize my
internal data structures which hold the description of macros... quite
poor.

Now with apache 2, I dug out in the source code a 'pre_config' and
'post_config' hook that look just fine, so I was planning to use that
instead of the previous hack. However:

1/ the apache configuration is still read twice. well, why not, if it
pleases you.

2/ the pre_config hook is run *AFTER* the configuration file is read.
Indeed, you can see that in main.c, where ap_run_pre_config() is called
after ap_read_config().

Thus here are my questions:

1/ as 'PRE' is a latin prefix which means before, would it be possible for
the sanity of the developers to either:
   a/ call it *before* the configuration is read.
   b/ or rename it 'post_config' ;-) then the 'post_config' can be
      renamed 'post_post_config' ;-)

2/ if the pre_config is to be run anyway after the configuration file is
read, could you suggest another hook I could use? I can't see any... and
the developer documentation is rather scarce and not up to date.

3/ or explain what I missed in the source code to understand the logic
behind all that.

Thanks in advance for your help!

-- Fabien.
Re: [PATCH] 1.3: Cygwin specific changes to the build process
At 10:53 AM +0200 6/2/02, Stipe Tolj wrote:
> Jim Jagielski wrote:
> > At 11:12 AM +0200 5/31/02, Stipe Tolj wrote:
> > > diff -ur apache-1.3/src/helpers/install.sh apache-1.3-cygwin/src/helpers/install.sh
> > > --- apache-1.3/src/helpers/install.sh    Tue Jun 12 10:24:53 2001
> > > +++ apache-1.3-cygwin/src/helpers/install.sh    Tue May 28 11:15:10 2002
> > > @@ -89,12 +89,8 @@
> > >  # Check if we need to add an executable extension (such as .exe)
> > >  # on specific OS to src and dst
> > > -if [ -f $src.exe ]; then
> > > -    if [ -f $src ]; then
> > > -        : # Cygwin [ test ] is too stupid to do [ -f $src.exe ] && [ ! -f $src ]
> > > -    else
> > > -        ext=.exe
> > > -    fi
> > > +if [ -f $src.exe ] && [ ! -f $src. ]; then
> > > +    ext=.exe
> > >  fi
> > >  src=$src$ext
> > >  dst=$dst$ext
> >
> > Why the above change?? If [] is fixed, what about backwards
> > compatibility?
>
> the problem is Cygwin's behaviour for the -f shell condition. I'll check
> if I can solve it on another way.

It's just that the comment in the present tree specifically says that what
it's being changed to doesn't work, so I'm wondering why it *is* working
now.
--
===
Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/
A society that will trade a little liberty for a little order
will lose both and deserve neither - T.Jefferson
1.3.25 release status
There are 2 outstanding questions regarding the Cygwin patches that Stipe
submitted, which I would like resolved before the TR. It's also looking
like the 2 patches noted in STATUS will *not* be added in.

TR set for the morning of June 4.
--
===
Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/
A society that will trade a little liberty for a little order
will lose both and deserve neither - T.Jefferson
Re: [PATCH] Apache 1.3 and OpenBSD
Can this very simple and straightforward patch please be put in before
1.3.25 is TRed?

// Brad [EMAIL PROTECTED] [EMAIL PROTECTED]

---------- Forwarded message ----------
Date: Mon, 20 May 2002 18:03:40 -0400 (EDT)
From: Brad [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: [PATCH] Apache 1.3 and OpenBSD

Here is a patch for Apache 1.3 which, when used with OpenBSD 3.1 and up,
allows modules to work on our ELF-based architectures. BTW, I am not
subscribed to this list so please reply directly to me.

--- Configure.orig    Sat May 11 23:39:59 2002
+++ Configure    Mon May 20 17:19:41 2002
@@ -1130,6 +1130,9 @@ if [ x$using_shlib = x1 ] ; then
     *)
         LD_SHLIB=gcc
         LDFLAGS_SHLIB="-shared \$(CFLAGS_SHLIB)"
+        if [ -z `echo __ELF__ | $CC -E - | grep __ELF__` ]; then
+            LDFLAGS_SHLIB_EXPORT="-Wl,-E"
+        fi
         ;;
     esac
     LDFLAGS_MOD_SHLIB="$LDFLAGS_SHLIB"

// Brad [EMAIL PROTECTED] [EMAIL PROTECTED]
Re: Need a new feature: Listing of CGI-enabled directories.
Ronald F. Guilmette wrote:
> In message [EMAIL PROTECTED], Rasmus Lerdorf [EMAIL PROTECTED] wrote:
> > mod_info will tell you some of this. ie. Look for ScriptAlias lines
> > under mod_alias.c and AddHandler cgi-script lines under mod_mime.c.
>
> I was hoping to find a volunteer to actually hack on this for me. I am
> _not_ well versed in Apache internals myself.

ummm, I think you misunderstood. There's no internals knowledge needed to
do this step. mod_info displays parsed config file information, which
could give you some of what you want. For example,
http://apache.org/servinfo#mod_alias.c says that there are no ScriptAliases
in apache.org's config file, and http://apache.org/servinfo#mod_mime.c has
an AddHandler line which says that files named *.cgi can be cgi's. In this
case, .htaccess and symlinks are enabled, so that information is only a
starting point.

Greg
Re: [PATCH] Apache 1.3 and OpenBSD
> --- Configure.orig    Sat May 11 23:39:59 2002
> +++ Configure    Mon May 20 17:19:41 2002
> @@ -1130,6 +1130,9 @@ if [ x$using_shlib = x1 ] ; then
>      *)
>          LD_SHLIB=gcc
>          LDFLAGS_SHLIB="-shared \$(CFLAGS_SHLIB)"
> +        if [ -z `echo __ELF__ | $CC -E - | grep __ELF__` ]; then
> +            LDFLAGS_SHLIB_EXPORT="-Wl,-E"
> +        fi

The '-z' is not normally used in Configure but that is easily fixed... :)
--
===
Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/
A society that will trade a little liberty for a little order
will lose both and deserve neither - T.Jefferson
Re: apache under linux -- restarting problems
Aaron Bannert wrote:
> On Sun, Jun 02, 2002 at 02:52:32PM -0700, Ian Holsman wrote:
> > I've just run into this, and it is present in 2.0.36.. the name-based
> > sysvmem isn't appropriate as it will cause apache to refuse to start
> > when you upgrade a module (forcing a reboot). a simple way to 'fix'
> > this is for the server to write out in the error message what sharedmem
> > segment it is trying to create, so you can ipcrm it. otherwise you're
> > forced to remove them all, which is not a good thing
>
> You shouldn't have to reboot. This also is a problem with both semaphores
> and shared memory whenever apache is not shut down cleanly. I'm not sure
> what you mean by being forced to remove all, but it would be nice if we
> could come up with a better way to deal with this situation (it seems to
> be coming up a lot recently). Perhaps the ipcs/ipcrm docs aren't good
> enough?
>
> -aaron

the problem is that on a machine with nothing else important running on it
I have 5-6 shared memory segments owned by root... and I have no way of
identifying which one apache is complaining about.

was there a good reason why we switched from an anonymous name?
Re: apache under linux -- restarting problems
On Mon, Jun 03, 2002 at 08:41:53AM -0700, Ian Holsman wrote:
> the problem is that on a machine with nothing else important running on
> it I have 5-6 shared memory segments owned by root... and I have no way
> of identifying which one apache is complaining about.
>
> was there a good reason why we switched from an anonymous name?

Unless someone speaks up I think we should change this to prefer anonymous
mmap()-based (if available).

-aaron
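For illustration, here is a minimal standalone sketch (plain POSIX, not
httpd/APR code; it assumes MAP_ANONYMOUS is available, as on Linux and the
BSDs) of why anonymous mmap()-based shared memory sidesteps the stale
segment problem: the mapping is inherited across fork() and vanishes with
the last process that holds it, so a crash never leaves behind a named
SysV segment for ipcs/ipcrm archaeology.

```c
#define _DEFAULT_SOURCE
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Create an anonymous shared page, have a forked child write into it,
 * and return the value the parent reads back afterwards.  Nothing here
 * survives process exit, so there is no segment to clean up by hand. */
int anon_shm_roundtrip(void)
{
    int *slot = mmap(NULL, sizeof *slot, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (slot == MAP_FAILED)
        return -1;

    *slot = 0;
    pid_t pid = fork();
    if (pid == 0) {
        *slot = 42;             /* child writes into the shared page */
        _exit(0);
    }
    waitpid(pid, NULL, 0);      /* parent waits, then sees the write */

    int seen = *slot;
    munmap(slot, sizeof *slot);
    return seen;
}
```

The trade-off, of course, is that an anonymous mapping can only be shared
with descendants of the process that created it, which is fine for the
parent/child model the MPMs use.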
RE: help with apache configuration logic...
> 1/ as 'PRE' is a latin prefix which means before, would it be possible
> for the sanity of the developers to either:
>    a/ call it *before* the configuration is read.
>    b/ or rename it 'post_config' ;-) then the 'post_config' can be
>       renamed 'post_post_config' ;-)

No, neither is possible. The "pre" refers to when we process the config,
not when it is read.

> Argh. So I need a pre/post_read_file hook maybe ;-)

You don't need it in 2.0. The new config system was specifically designed
to allow the type of thing that mod_macro does.

> 2/ if the pre_config is to be run anyway after the configuration file is
> read, could you suggest another hook I could use? I can't see any... and
> the developer documentation is rather scarce and not up to date.
>
> 3/ or explain what I missed in the source code to understand the logic
> behind all that.

The easiest way to solve the problem you are looking at is to mark the
Macro directives as EXEC_ON_READ.

> I already needed that so as to be able to locate error messages, as the
> cmd->config_file field is not defined otherwise, and I have to deal with
> it. The problem is that I need an initialization *before* lines are
> submitted to my commands, and this does not solve this issue at all.

The register_hooks phase is run as soon as the module is loaded, so you
can use that phase for your initialization. That will get the macros
loaded while we read the config file. Then, in pre_config, walk the tree
looking for the macro names, and replace them with the definitions that
you read in earlier.

> It seems to me not to work, as you can have a macro definition within a
> macro, which might be defined after macro expansion.

That doesn't matter. I was actually wrong earlier, you don't want
EXEC_ON_READ at all. The whole thing can be done without it. Just define a
pre_config function. That function will walk the tree and find all macro
definitions. Then, the second pass (not optimal, but easiest to explain)
will expand all macros. Any macros inside a macro will be automatically
expanded if coded properly.

> Also, I have a problem understanding what you mean by 'walk the tree'.
> The macro processing simply performs a macro expansion at the textual
> level, which is parsed after expansion by modifying on the fly the
> config_file structure with my own stuff...

The Apache 2.0 config system has been changed; it no longer relies on the
text file. Instead, the server reads the file into a tree that resides in
memory. The server walks the tree to determine the configuration. By
manipulating the tree before the server walks it, your module can impact
the configuration. This is what the pre_config phase is for.

> I can't see anywhere a parse-tree for the apache configuration file,
> given its lexical structure... Is there such a thing?

Take a look at the worker.c file, worker_pre_config function, for an
example of how to modify the configuration tree.

> Thus I think that it would be great if I had a pre/post read hook,
> really?! How can I get one? Or should I investigate the 'tree' stuff and
> try to cast my stuff into that?

You don't need a pre/post read hook, and I am very much against adding
one. Please look at the configuration tree to solve your problem.

Ryan
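To make "walk the tree" concrete, here is a self-contained sketch of the
kind of recursive traversal a pre_config hook performs. The struct below
is a hypothetical miniature standing in for httpd's real ap_directive_t
(which also carries args, filename, line number, and so on); the point is
only that parsed directives form a linked tree of siblings and children
that a module can visit, and rewrite, before the server processes it.

```c
#include <string.h>

/* Hypothetical stand-in for httpd 2.0's ap_directive_t. */
struct dir_node {
    const char *directive;        /* e.g. "Macro", "Use", "<VirtualHost" */
    struct dir_node *first_child; /* directives nested in a container    */
    struct dir_node *next;        /* next sibling at this level          */
};

/* Visit every directive in the tree, counting those whose name matches.
 * A real mod_macro pass would record or expand them instead of counting. */
int count_directives(const struct dir_node *n, const char *name)
{
    int hits = 0;
    for (; n != NULL; n = n->next) {
        if (strcmp(n->directive, name) == 0)
            hits++;
        hits += count_directives(n->first_child, name);
    }
    return hits;
}

/* Build a tiny fixed tree -- one Use nested inside a VirtualHost
 * container, one Macro at top level -- and count matches for `name`,
 * so the recursion over both siblings and children is exercised. */
int demo_count(const char *name)
{
    struct dir_node use   = { "Use",          NULL, NULL };
    struct dir_node vhost = { "<VirtualHost", &use, NULL };
    struct dir_node macro = { "Macro",        NULL, &vhost };
    return count_directives(&macro, name);
}
```

As Ryan notes, worker_pre_config in worker.c does a walk of this general
shape over the real structures.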
[PATCH] Add content negotiation and expiration model to mod_cache
Most of this code was lifted from 1.3 proxy_cache.c. There are two
problems with this code that I am aware of, and the first must be fixed
before the patch is committed.

First, cache_read_entity_headers() is being called twice, once from
mod_cache.c and now from cache_storage.c. Perhaps removing the call from
mod_cache will be sufficient. I don't know the right answer yet.

Second, mod_disk_cache is broken because it does not store
response_time/request_time, which breaks the age calculation algorithm.
This is easy to fix.

Bill

Index: cache_storage.c
===
RCS file: /home/cvs/httpd-2.0/modules/experimental/cache_storage.c,v
retrieving revision 1.21
diff -u -r1.21 cache_storage.c
--- cache_storage.c    28 May 2002 18:04:43 -0000    1.21
+++ cache_storage.c    3 Jun 2002 16:38:35 -0000
@@ -174,6 +174,10 @@
     char *key;
     cache_request_rec *cache = (cache_request_rec *)
                   ap_get_module_config(r->request_config, &cache_module);
+    const char *cc_cresp, *cc_req, *pragma_cresp;
+    const char *agestr = NULL;
+    char *val;
+    apr_time_t age_c = 0;

     rv = cache_generate_key(r, r->pool, &key);
     if (rv != APR_SUCCESS) {
@@ -186,18 +190,171 @@
         type = ap_cache_tokstr(r->pool, next, &next);
         switch ((rv = cache_run_open_entity(&cache->handle, r, type, key))) {
         case OK: {
+            apr_time_t age, maxage_req, maxage_cresp, maxage, smaxage,
+                       maxstale, minfresh;
+            char *vary;
+
             info = &(cache->handle->cache_obj->info);
-            /* XXX:
-             * Handle being returned a collection of entities.
+            if (cache_read_entity_headers(cache->handle, r) != APR_SUCCESS) {
+                /* TODO: Handle this error */
+                return DECLINED;
+            }
+
+            /*
+             * Check Content-Negotiation - Vary
+             *
+             * At this point we need to make sure that the object we found
+             * in the cache is the same object that would be delivered to
+             * the client, when the effects of content negotiation are
+             * taken into effect.
+             *
+             * In plain english, we want to make sure that a
+             * language-negotiated document in one language is not given
+             * to a client asking for a language negotiated document in a
+             * different language by mistake.
+             *
+             * RFC2616 13.6 and 14.44 describe the Vary mechanism.
+             */
+            vary = ap_pstrdup(r->pool, ap_table_get(r->headers_out, "Vary"));
+            while (vary && *vary) {
+                char *name = vary;
+                const char *h1, *h2;
+
+                /* isolate header name */
+                while (*vary && !ap_isspace(*vary) && (*vary != ','))
+                    ++vary;
+                while (*vary && (ap_isspace(*vary) || (*vary == ','))) {
+                    *vary = '\0';
+                    ++vary;
+                }
+
+                /*
+                 * is this header in the request and the header in the
+                 * cached request identical? If not, we give up and do a
+                 * straight get
+                 */
+                h1 = ap_table_get(r->headers_in, name);
+                h2 = ap_table_get(info->req_hdrs, name);
+                if (h1 == h2) {
+                    /* both headers NULL, so a match - do nothing */
+                }
+                else if (h1 && h2 && !strcmp(h1, h2)) {
+                    /* both headers exist and are equal - do nothing */
+                }
+                else {
+                    /* headers do not match, so Vary failed */
+                    ap_log_error(APLOG_MARK, APLOG_INFO, APR_SUCCESS,
+                                 r->server,
+                                 "cache_select_url(): Vary header mismatch - "
+                                 "Cached document cannot be used. \n");
+                    apr_table_clear(r->headers_out);
+                    r->status_line = NULL;
+                    cache->handle = NULL;
+                    return DECLINED;
+                }
+            }
+
+            cache->fresh = 0;
+            /*
+             * We now want to check if our cached data is still fresh. This
+             * depends on a few things, in this order:
+             *
+             * - RFC2616 14.9.4 End to end reload, Cache-Control: no-cache.
+             *   no-cache in either the request or the cached response
+             *   means that we must revalidate the request unconditionally,
+             *   overriding any expiration mechanism. It's equivalent to
+             *   max-age=0,must-revalidate.
+             *
+             * - RFC2616 14.32 Pragma: no-cache. This is treated the same
+             *   as Cache-Control: no-cache.
+             *
+             * - RFC2616 14.9.3 Cache-Control: max-stale, must-revalidate,
+             *   proxy-revalidate. if the max-stale request header exists,
+             *   modify the stale calculations below so that an object can
+             *   be at most max-stale seconds stale
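The Vary comparison in the patch reduces to a three-way test per header
named in the cached response's Vary field. Here is a self-contained sketch
of just that predicate (a hypothetical helper name; the real code pulls
the two values out of apr tables and logs/declines on mismatch):

```c
#include <string.h>

/* Compare the incoming request's value (h1) for some header named in
 * Vary against the value recorded with the cached entity (h2).  Per the
 * patch's logic: absent from both is a match, equal strings are a match,
 * anything else means the cached document cannot serve this request. */
int vary_header_match(const char *h1, const char *h2)
{
    if (h1 == h2)
        return 1;   /* both NULL (same pointer): a match */
    if (h1 != NULL && h2 != NULL && strcmp(h1, h2) == 0)
        return 1;   /* both present and byte-for-byte equal: a match */
    return 0;       /* mismatch: Vary failed, do a straight fetch */
}
```

This is the RFC 2616 sections 13.6/14.44 rule: a cached variant may only
be reused when every request header the origin listed in Vary matches the
one stored with the cached response.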
Re: apache under linux -- restarting problems
On Mon, Jun 03, 2002 at 09:01:29AM -0700, Aaron Bannert wrote:
> On Mon, Jun 03, 2002 at 08:41:53AM -0700, Ian Holsman wrote:
> > the problem is that on a machine with nothing else important running
> > on it I have 5-6 shared memory segments owned by root... and I have no
> > way of identifying which one apache is complaining about.
> >
> > was there a good reason why we switched from an anonymous name?
>
> Unless someone speaks up I think we should change this to prefer
> anonymous mmap()-based (if available).

Which AcceptMutex value would this be? -- justin
Re: httpd on win32?
apache -k start -n apache2 is broken

Bill

----- Original Message -----
From: "Cliff Woolley" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, June 03, 2002 2:19 PM
Subject: httpd on win32?

> What's the current status? The STATUS file still indicates that httpd
> fails to start up on Win32, but I find it hard to believe that it's been
> that way with no change for a week now...
>
> Thanks, Cliff
Re: httpd on win32?
On Mon, 3 Jun 2002, Bill Stoddard wrote:
> apache -k start -n apache2 is broken

Ah. Okay, thanks for the update. :)
[PATCH] Discussion of apache -k start -n apache2 problem...
This patch allows "apache -k start -n apache2" to start the server. This
is certainly not the correct fix, but it should illustrate the problem for
someone more familiar with the services code.

The problem (step-by-step):

1. Issue "apache -k restart -n apache2"

2. The SCM issues "apache -k runservice" (using values from the registry)

3. winnt_rewrite_args rewrites the args to "apache -d c:/apache2/" (ie,
   apache -d server_root) and saves the rewritten args in a global
   variable, mpm_new_argv. Everything is okay to this point (I think :-).
   I.e.

      mpm_new_argv[0] = c:/apache/bin/apache.exe
      mpm_new_argv[1] = -d
      mpm_new_argv[2] = server_root

   Okay up to this point...

4. winnt_rewrite_args calls mpm_service_to_start, which among other things
   calls the Win32 call StartService(), passing as one of its arguments
   the string "apache2" (my patch passes NULL instead)...

5. As a result of the StartService() call, Windows creates a new thread
   which calls service_nt_main_fn (defined in service.c). The code in
   service_nt_main_fn adds "apache2" to the mpm_new_argv argument list, so
   the new argv looks like this:

      apache -d c:/apache2/bin apache2

The trailing "apache2" (which is the service name) is causing the
(opt->ind < opt->argc) check to fail, dumping us into usage() (from
main.c):

    /* bad cmdline option?  then we die */
    if (rv != APR_EOF || opt->ind < opt->argc) {
        usage(process);
    }

I am thinking that the broken code is in service_nt_main_fn because it is
not properly updating mpm_new_argv. Another problem is that there is a
race condition between the Win32 thread that gets kicked off by the call
to StartService() and the thread that is in mpm_rewrite_args. Really
nasty. A good Chianti and some grated parmesan and we'd have a meal :-)

Index: service.c
===
RCS file: /home/cvs/httpd-2.0/server/mpm/winnt/service.c,v
retrieving revision 1.51
diff -u -r1.51 service.c
--- service.c    17 May 2002 11:11:39 -0000    1.51
+++ service.c    3 Jun 2002 21:04:38 -0000
@@ -1059,12 +1061,11 @@
     argc += 1;
     start_argv = malloc(argc * sizeof(const char **));
-    start_argv[0] = mpm_service_name;
     if (argc > 1)
         memcpy(start_argv + 1, argv, (argc - 1) * sizeof(const char **));

     rv = APR_EINIT;
-    if (StartService(schService, argc, start_argv)
+    if (StartService(schService, 0, NULL)
         && signal_service_transition(schService, 0, /* test only */
                                      SERVICE_START_PENDING,
                                      SERVICE_RUNNING))
RE: [Bug 9488] - HTTP/0.9 requests spoken on https port returns HTTP/1.0 response
> Okay, so basically what's happening is that we depend upon OpenSSL to
> tell us when the data it got from the client resembles an HTTP request
> rather than an SSL handshake. The test looks like this:
>
>     if ((n = SSL_accept(filter->pssl)) <= 0) {
>         ...
>         if (ERR_GET_REASON(ERR_peek_error()) == SSL_R_HTTP_REQUEST) {
>             return HTTP_BAD_REQUEST;
>         }
>         ...
>     }
>
> There's no distinction in there of whether it detected an 0.9 or a 1.x
> request, just that it wasn't SSL and it kinda looked like HTTP. The
> above condition triggers a hardcoded magic request
>
>     GET /mod_ssl:error:HTTP-request HTTP/1.0
>
> to be sent back up the input filter chain, which is obviously broken if
> the original request was in fact 0.9. So somehow we either need to get
> OpenSSL to give us back more information (like perhaps a copy of the
> data it errored out on) or we need to stash a copy of that data before
> OpenSSL processes it. Either way could be potentially messy... I'm not
> sure of the implementation details yet.

Cliff,

This bug is actually nastier than it looks on first glance. Think through
what happens if you have "RewriteRule .* http://foo.com" in your config
file when you send a non-SSL request to an SSL socket. What actually
happens is that you will get the http://foo.com request sent. The reason
is that mod_ssl is faking a request to the core server, and the core
server is re-writing that faked request. Whatever you do to solve this,
you need to ensure that if mod_ssl detects this error case, it doesn't
make it look like a real request to the core server.

Ryan
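What the discussion calls for is, in effect, a classifier over the first
plaintext bytes seen on the SSL port: real handshake (or garbage), an
HTTP/1.x request, or a bare HTTP/0.9 request line. Here is a standalone
sketch of such a check (a hypothetical function with an abbreviated method
list; the hard part Cliff mentions, actually getting at the bytes OpenSSL
consumed, is not addressed here):

```c
#include <string.h>

/* Classify a plaintext request line received on an SSL port.
 * Returns 0 for "not HTTP" (likely a genuine handshake or garbage),
 * 9 for an HTTP/0.9-style request (no version token on the line), and
 * 1 for HTTP/1.x.  The method list is illustrative, not exhaustive. */
int classify_plaintext(const char *buf)
{
    static const char *const methods[] = { "GET ", "HEAD ", "POST ", NULL };
    int i, is_http = 0;

    for (i = 0; methods[i] != NULL; i++)
        if (strncmp(buf, methods[i], strlen(methods[i])) == 0)
            is_http = 1;
    if (!is_http)
        return 0;
    /* HTTP/0.9 is just "GET /path" with no version on the request line */
    return (strstr(buf, " HTTP/") != NULL) ? 1 : 9;
}
```

With a distinction like this available, the error path could at least
answer an 0.9 client in 0.9 style instead of faking an HTTP/1.0 request,
and, per Ryan's point below in the thread, the error should be signalled
out of band rather than as a rewritable request URI.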
RE: [Bug 9488] - HTTP/0.9 requests spoken on https port returnsHTTP/1.0 response
On Mon, 3 Jun 2002, Ryan Bloom wrote:
> through what happens if you have "RewriteRule .* http://foo.com" in your
> config file when you send a non-SSL request to an SSL socket. What ..
>
> Whatever you do to solve this, you need to ensure that if mod_ssl
> detects this error case, it doesn't make it look like a real request to
> the core server.

Yeah, I think we've actually had a PR where that happened to someone. We
need a better way to send the notification of error down than this
/mod_ssl:error:HTTP-request thingy. Thanks for mentioning this.

--Cliff
RE: [Bug 9488] - HTTP/0.9 requests spoken on https port returns HTTP/1.0 response
From: Cliff Woolley [mailto:[EMAIL PROTECTED]]
> On Mon, 3 Jun 2002, Ryan Bloom wrote:
> > through what happens if you have "RewriteRule .* http://foo.com" in
> > your config file when you send a non-SSL request to an SSL socket.
> > What ..
> >
> > Whatever you do to solve this, you need to ensure that if mod_ssl
> > detects this error case, it doesn't make it look like a real request
> > to the core server.
>
> Yeah, I think we've actually had a PR where that happened to someone. We
> need a better way to send the notification of error down than this
> /mod_ssl:error:HTTP-request thingy. Thanks for mentioning this.

I was actually just about to look at this problem, if you are busy.

Ryan
RE: [Bug 9488] - HTTP/0.9 requests spoken on https port returnsHTTP/1.0 response
On Mon, 3 Jun 2002, Ryan Bloom wrote: I was actually just about to look at this problem if you are busy. Go for it... I'm working on something else. Thanks.
Re: [Bug 9488] - HTTP/0.9 requests spoken on https port returnsHTTP/1.0 response
Cliff Woolley wrote:
> On Mon, 3 Jun 2002, Ryan Bloom wrote:
> > I was actually just about to look at this problem if you are busy.
>
> Go for it... I'm working on something else.

Perhaps it's just me, but I'm amused this is considered a bug.

Cheers,

Ben.
--
http://www.apache-ssl.org/ben.html       http://www.thebunker.net/
"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff
RE: [Bug 9488] - HTTP/0.9 requests spoken on https port returns HTTP/1.0 response
From: Ben Laurie [mailto:[EMAIL PROTECTED]]
> Cliff Woolley wrote:
> > On Mon, 3 Jun 2002, Ryan Bloom wrote:
> > > I was actually just about to look at this problem if you are busy.
> >
> > Go for it... I'm working on something else.
>
> Perhaps its just me, but I'm amused this is considered a bug.

It's a security hole IMO. The problem is that if you rewrite the URL .*,
then the error URL that mod_ssl fakes will be rewritten. This means that
you can serve information over HTTP that was supposed to be restricted to
HTTPS.

Ryan
Re: [PATCH] ap_discard_request_body() can't be called more than once
On Sun, Jun 02, 2002 at 04:40:41PM -0700, Justin Erenkrantz wrote:
> This patch combined with the last few patches I've posted today allows
> chunked trailer support again and now passes all httpd-test cases. What
> we try to do is to ensure that ap_discard_request_body() is not called
> before the handler accepts the request and begins generating the output.
> It is still possible for a bad module to call discard more than once or
> improperly. This effectively reverts Ryan's patch to http_protocol.c, so
> I'd appreciate it gets some review (preferably from Ryan himself!).
> -- justin

This is all getting *way* too complicated.

I recall seeing an email where somebody suggested putting a flag in the
request_rec to determine whether HTTP_IN had seen an EOS or not. Bleh. At
a minimum, that would go into the context for HTTP_IN.

I think that the right answer is that when the request_rec is about to go
away, any unread body content should be read by the framework. It would be
really nice to have filter logic to say "call me when I'm done" so that
HTTP_IN could go ahead and read the rest of the request right then.

If that were done, then you wouldn't have to worry about a bunch of rules
all over the place about when to call it, to avoid double-calls, etc. In
fact, all of the calls could just go away... [ note that double-calling
ap_discard_request_body() is quite fine. ]

My vote would be to put something into ap_finalize_request_protocol().
(the problem, of course, is recovering the HTTP_IN context; short of that,
putting the flag into the request_rec or maybe the 'core_request_config'
structure (hmm; the latter would be better).)

Note that mod_dav calls it a ton because it generates complete error
responses. It needs to suck up any body, then generate the error, then
return DONE. By returning DONE, we prevent an attempt by ap_die() to
generate messages, but it also means that ap_die() won't discard the body.

Cheers,
-g

--
Greg Stein, http://www.lyra.org/
Re: Subrequests reading bodies?
On Sun, Jun 02, 2002 at 02:46:40PM -0700, Justin Erenkrantz wrote:
> Is it permissible for a subrequest (r->main != NULL) to read input data
> from the client? My current thought is that only the original request
> can do that. Am I right or am I wrong? -- justin

You might end up doing an internal redirect or somesuch to a subrequest to have it process the thing. In which case, it is perfectly acceptable for that guy to read the body.

The basic story is that if you run a subrequest, then it can (and probably *should*) consume the body. If you merely prepare a subrequest (so that you can look at the status of that prep), then it should absolutely *not* read the body.

Cheers,
-g

--
Greg Stein, http://www.lyra.org/
Re: [PATCH] Switch DAV logic for MKCOL body
On Sun, Jun 02, 2002 at 04:28:38PM -0700, Justin Erenkrantz wrote:
> Based on my interpretation of the RFC, I think this might be a better
> way to handle the body case for MKCOL. I sort of think this is what they
> were thinking, rather than relying on the request entity headers.
> Thoughts? -- justin

Hmm. I can see where you're coming from, but am worried that some clients might set a Content-Type even though they don't send content (e.g. a zero-length body of some type).

How about adding this to STATUS, and collecting in there the clients that have been verified? (e.g. WebFolders, cadaver, Neon clients, etc.) Once it appears that most clients are /not/ sending Content-Type on a MKCOL, then yes: let's get it integrated. I've always hated that code :-)

Cheers,
-g

--
Greg Stein, http://www.lyra.org/
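[Editor's note: a self-contained sketch of the two MKCOL body-detection strategies under discussion. The `fake_request` struct and both function names are invented for illustration; this is not mod_dav's actual code, just the distinction between trusting entity headers and checking what actually arrives.]

```c
#include <stddef.h>

/* Hypothetical request summary for illustration; not httpd's request_rec. */
typedef struct {
    const char *content_type;   /* NULL if the header is absent */
    long        content_length; /* -1 if the header is absent */
    size_t      body_bytes;     /* what the client actually sent */
} fake_request;

/* Header-based test (the code being replaced): flags a request as having
 * a body whenever entity headers are present, even when the client sent a
 * Content-Type with a zero-length body. */
int mkcol_body_by_headers(const fake_request *r)
{
    return r->content_type != NULL || r->content_length > 0;
}

/* Read-based test (the direction of the patch): only report a body when
 * body bytes actually arrive on the wire. */
int mkcol_body_by_reading(const fake_request *r)
{
    return r->body_bytes > 0;
}
```

A client that sends `Content-Type: text/xml` with an empty body is rejected by the first check but accepted by the second, which is exactly the client behavior Greg wants surveyed in STATUS before integrating.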
Re: [PATCH] Switch DAV PUT to use brigades
On Sun, Jun 02, 2002 at 04:15:11PM -0700, Justin Erenkrantz wrote:
> This patch switches mod_dav to use brigades for input when handling PUT.

Cool!

> My one caveat with this is what to do when the filters return an error
> (spec. AP_FILTER_ERROR, which means that they already took care of it).
> In this case, the handler should NOT generate an error of its own and
> just return the AP_FILTER_ERROR value back. mod_dav has its own error
> handling, so it's not as straightforward as the other modules. Greg?
> Anyone else? -- justin

Hard to answer. What is the desired behavior? Let's attack it from that angle. Should it just completely exit the handler? With OK, the rv, or with DONE? I'm sure that we can arrange the appropriate behavior.

> ...
> +    int seen_eos;
> +
> +    brigade = apr_brigade_create(r->pool, r->connection->bucket_alloc);
> +    seen_eos = 0;

Prolly easier to just set seen_eos in the decl.

> ...
> +        APR_BRIGADE_FOREACH(bucket, brigade) {
> +            const char *data;
> +            apr_size_t len;
> +
> +            if (APR_BUCKET_IS_EOS(bucket)) {
> +                seen_eos = 1;
> +                break;
> +            }
> +
> +            /* Ahem, what to do? */
> +            if (APR_BUCKET_IS_METADATA(bucket)) {
> +                continue;
> +            }

No need to test for this. You'll just get zero-length buckets. If an important metadata bucket *does* get generated in the future, then we'd want to be explicitly testing for and handling it (like what is being done for the EOS bucket). Until then, we don't have to worry about them.

> +            rv = apr_bucket_read(bucket, &data, &len, APR_BLOCK_READ);
> +            if (rv != APR_SUCCESS) {
> +                err = dav_new_error(r->pool, HTTP_BAD_REQUEST, 0,
> +                                    "An error occurred while reading the "
> +                                    "request body.");
> +                /* This is an error which preempts us from reading to
> +                 * EOS. */
> +                seen_eos = 1;

The seen_eos thing is hacky. The loop condition should watch for non-NULL 'err' values.

> +                break;
> +            }
> +
> +            if (err == NULL) {

I'd recommend writing only when len != 0. No reason to call the backend provider with no data.

> +                /* write whatever we read, until we see an error */
> +                err = (*resource->hooks->write_stream)(stream,
> +                                                       data, len);

This is missing the seen_eos (altho, per above, it shouldn't set the flag) and a break if an error occurs. This needs to be fixed up.

> +            }
>          }
> +
> +        apr_brigade_cleanup(brigade);
>      }
> +    while (!seen_eos);

Test for an error to exit.

Cheers,
-g

--
Greg Stein, http://www.lyra.org/
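[Editor's note: a self-contained simulation of the loop shape Greg's review comments add up to: `seen_eos` initialized in the declaration, writes only when `len != 0`, and the loop condition watching for an error instead of abusing the EOS flag. The bucket and sink types are invented stand-ins, not APR's real brigade API.]

```c
#include <stddef.h>

typedef enum { BKT_DATA, BKT_EOS } bucket_type;

typedef struct {
    bucket_type type;
    const char *data;
    size_t      len;
} bucket;

/* Simulated write_stream hook: counts bytes and fails past a quota, to
 * model a backend provider error. */
typedef struct {
    size_t written;
    size_t fail_after;
} sink;

static int write_stream(sink *s, const char *data, size_t len)
{
    (void)data;
    if (s->written + len > s->fail_after)
        return -1;                 /* stand-in for a dav_error */
    s->written += len;
    return 0;
}

/* Greg's suggested shape: the loop exits either on EOS or as soon as an
 * error is recorded; zero-length buckets are never handed to the backend. */
int put_loop(const bucket *bb, sink *s)
{
    int err = 0;
    int seen_eos = 0;            /* set in the decl, per the review */
    const bucket *b;

    for (b = bb; !seen_eos && err == 0; b++) {
        if (b->type == BKT_EOS) {
            seen_eos = 1;
            break;
        }
        if (b->len != 0)
            err = write_stream(s, b->data, b->len);
    }
    return err;
}
```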
Re: Subrequests reading bodies?
At 07:16 PM 6/3/2002, you wrote:
> On Sun, Jun 02, 2002 at 02:46:40PM -0700, Justin Erenkrantz wrote:
> > Is it permissible for a subrequest (r->main != NULL) to read input
> > data from the client? My current thought is that only the original
> > request can do that. Am I right or am I wrong? -- justin
>
> You might end up doing an internal redirect or somesuch to a subrequest
> to have it process the thing. In which case, it is perfectly acceptable
> for that guy to read the body.
>
> The basic story is that if you run a subrequest, then it can (and
> probably *should*) consume the body. If you merely prepare a subrequest
> (so that you can look at the status of that prep), then it should
> absolutely *not* read the body.

Subrequests prepped just to test a URI mapping and such pass NULL for the next-filter arg to ap_sub_req_lookup_foo(). For REAL requests, the next-filter arg is never NULL. Does that help the logic?
Re: [PATCH] Switch DAV PUT to use brigades
On Mon, Jun 03, 2002 at 04:02:03PM -0700, Greg Stein wrote:
> > My one caveat with this is what to do when the filters return an error
> > (spec. AP_FILTER_ERROR, which means that they already took care of
> > it). In this case, the handler should NOT generate an error of its own
> > and just return the AP_FILTER_ERROR value back. mod_dav has its own
> > error handling, so it's not as straightforward as the other modules.
> > Greg? Anyone else? -- justin
>
> Hard to answer. What is the desired behavior? Let's attack it from that
> angle. Should it just completely exit the handler? With OK, the rv, or
> with DONE? I'm sure that we can arrange the appropriate behavior.

My idea is that the module needs to back out any intermediate steps it performed - such as releasing a lock or other such things. But, almost certainly, if we get AP_FILTER_ERROR returned, we need to return that value. If we get an APR error, then we probably have something to worry about. Again, this is something that is wildly inconsistent throughout the code. Filters can return APR status codes, but those don't translate to HTTP error codes.

> > ...
> > +    int seen_eos;
> > +
> > +    brigade = apr_brigade_create(r->pool, r->connection->bucket_alloc);
> > +    seen_eos = 0;
>
> Prolly easier to just set seen_eos in the decl.

Fair enough.

> > ...
> > +        APR_BRIGADE_FOREACH(bucket, brigade) {
> > +            const char *data;
> > +            apr_size_t len;
> > +
> > +            if (APR_BUCKET_IS_EOS(bucket)) {
> > +                seen_eos = 1;
> > +                break;
> > +            }
> > +
> > +            /* Ahem, what to do? */
> > +            if (APR_BUCKET_IS_METADATA(bucket)) {
> > +                continue;
> > +            }
>
> No need to test for this. You'll just get zero-length buckets. If an
> important metadata bucket *does* get generated in the future, then we'd
> want to be explicitly testing for and handling it (like what is being
> done for the EOS bucket). Until then, we don't have to worry about them.

Um, I think the approach has been that you can't read from metadata buckets. That they may give you data back or they may not, but they certainly shouldn't be handled if you don't know what to do with them. This comes out of what little I've seen of the conversation with Cliff and Ryan. I may have it wrong, but I think someone said at some point that metadata may indeed respond to bucket_read with data, but that data shouldn't be construed as part of the input stream. I'm thoroughly confused at this point about what metadata buckets are.

> > +            rv = apr_bucket_read(bucket, &data, &len, APR_BLOCK_READ);
> > +            if (rv != APR_SUCCESS) {
> > +                err = dav_new_error(r->pool, HTTP_BAD_REQUEST, 0,
> > +                                    "An error occurred while reading "
> > +                                    "the request body.");
> > +                /* This is an error which preempts us from reading to
> > +                 * EOS. */
> > +                seen_eos = 1;
>
> The seen_eos thing is hacky. The loop condition should watch for
> non-NULL 'err' values.

Even if an error is seen, we still need to consume the body. So, I don't think we can exit if err is non-NULL. We can only exit once we've seen the EOS.

> > +                break;
> > +            }
> > +
> > +            if (err == NULL) {
>
> I'd recommend writing only when len != 0. No reason to call the backend
> provider with no data.

Fair enough.

> > +                /* write whatever we read, until we see an error */
> > +                err = (*resource->hooks->write_stream)(stream,
> > +                                                       data, len);
>
> This is missing the seen_eos (altho, per above, it shouldn't set the
> flag) and a break if an error occurs. This needs to be fixed up.

Again, it's part of the idea that even if an error is seen, it should still read until EOS. Take a quick look at the code that is in CVS right now, as it does this. Is this the wrong approach? -- justin
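[Editor's note: for contrast with Greg's proposed loop, a self-contained simulation of the shape Justin is defending: keep consuming buckets until EOS even after a backend error, and only suppress further writes. The bucket and sink types are invented stand-ins for illustration, not the actual mod_dav/APR code in CVS.]

```c
#include <stddef.h>

typedef enum { BKT_DATA, BKT_EOS } bucket_type;

typedef struct {
    bucket_type type;
    const char *data;
    size_t      len;
} bucket;

/* Simulated write_stream hook that fails once a byte quota is exceeded. */
typedef struct {
    size_t written;
    size_t fail_after;
} sink;

static int write_stream(sink *s, const char *data, size_t len)
{
    (void)data;
    if (s->written + len > s->fail_after)
        return -1;                 /* stand-in for a dav_error */
    s->written += len;
    return 0;
}

/* Justin's shape: the input is always drained to EOS so the connection is
 * left in a usable state; after the first error, data is read but no
 * longer handed to the backend provider. */
int put_loop_drain(const bucket *bb, sink *s, size_t *consumed)
{
    int err = 0;
    const bucket *b;

    *consumed = 0;
    for (b = bb; b->type != BKT_EOS; b++) {
        *consumed += b->len;               /* always drain the input */
        if (err == 0 && b->len != 0)
            err = write_stream(s, b->data, b->len);
    }
    return err;
}
```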
Re: Subrequests reading bodies?
On Mon, Jun 03, 2002 at 05:16:13PM -0700, Greg Stein wrote:
> You might end up doing an internal redirect or somesuch to a subrequest
> to have it process the thing. In which case, it is perfectly acceptable
> for that guy to read the body.

Well, for an internal redirect, the request is promoted to not be a subrequest - the r->main value propagates to new->main (see internal_internal_redirect). So, I believe that's not where we're concerned - this takes care of mod_dir, mod_negotiation, mod_redirect, etc.

> The basic story is that if you run a subrequest, then it can (and
> probably *should*) consume the body. If you merely prepare a subrequest
> (so that you can look at the status of that prep), then it should
> absolutely *not* read the body.

The exception seems to be mod_include, as a module like mod_autoindex doesn't get a chance to run its subreq handlers:

  <!--#exec cgi="/cgi-bin/foo.cgi" -->

So, should the cgi_handler have access to the request body? What if I do the following:

  <!--#exec cgi="/cgi-bin/foo.cgi" -->
  <!--#exec cgi="/cgi-bin/foo.cgi" -->

What happens here? Do they get the same body? Does only the first one get the body? Do none of them get input? -- justin