Re: Short or long variable names?
From: William A. Rowe, Jr.: One (extreme) hassle is keeping the httpd code legible to httpd'ers and the .NET code legible to .NET'ers. I had chosen the convention of using 'traditional' variable names for httpd data, and 'wordy' variable names for the internals of Apache.Web.

In Apache.Web all data is visible. In Apache, even the short rr and rv really do mean something - but what? It wasn't a nice practice to start with, and now one can see why. In mod_aspdotnet and all the assemblies one should use reader-readable variables. It is much easier for the old httpd'ers to read mod_aspdotnet with descriptive variables than for .NET users to work out what the terse names mean. mod_aspdotnet and the assemblies are only written once, but read over and over again.

There are quite a few cases where the variable name (e.g. the static 'conf' structure in the mod_aspdotnet.cpp source) is horrid and needs to be cleaned up. I'm almost thinking native_xx for variable names coming from Apache and APR - would that improve legibility? I'm not sure one needs native_, but definitely not rr, rv, abc, and so on. Compare

Parameter name = rv  Type = System.Int32  Position = 2  Optional = False

to

Parameter name = loglevel  Type = System.Int32  Position = 1  Optional = False

By the way, is there a .NET enum for loglevel? Jeff
Re: Build ways
Feel free to offer a patch to build under NAnt, MSBuild or any method that proves viable! The problem is that delayimp.lib is not distributed with the .NET Visual C++ compiler - this makes it impossible to build our C++ code outside of the full Visual Studio. I'm actually much more concerned with the changes coming in .NET Visual C++ 2005. They refactored a lot - and we will be playing catch-up there. The code will be measurably easier to read, but in some ways harder to grok. Bill

At 01:06 PM 11/22/2004, Jeff White wrote: Have any of you workers (those with live and test servers/machines) looked at starting to use build systems for mod_aspdotnet? Soon Microsoft is moving to MSBuild, an XML-file build system for developers; MSBuild is out now, but only for .NET 2.0 and later. NAnt (an open source .NET XML build tool):

http://nant.sourceforge.net/
http://nant.sourceforge.net/nightly/latest/help/index.html
http://nantcontrib.sourceforge.net/help/index.html
http://nantcontrib.sourceforge.net/

NAnt is out now (for .NET 1.0/1.1/beta2 and Mono) and allows XML builds of C/C++ (VC 6, 7, 8) and .NET assemblies, MSI installs, and much more. Shouldn't mod_aspdotnet start using NAnt and be ready this time (to use MSBuild or NAnt - and, since both are XML based, perhaps both) instead of playing catch-up later? Using NAnt the build can search folders, call exes, use Windows Scripting, call .NET-written input routines, and look like a GUI build system or stay command line. Perhaps these newer build ways can help with the build of Apache.Web and its usage of the other libs: find the libs, copy/move them and use them here, and then send them back to where they belong :) Jeff
Re: Short or long variable names?
From: William A. Rowe, Jr.: let me get ahold of my life first

I see you doing so much - what life? More later. Jeff
Compiling flood-0.4
When we attempted to run make, we ran into the following error:

build/rules.mk:57: /build/config_vars.mk: No such file or directory

What option, if any, controls the location of config_vars.mk? Thanks.
Re: People still using v1.3 - finding out why
Graham Leggett wrote: Hi all, I've been keen to do some digging for reasons why someone might need to install httpd v1.3 instead of v2.0 or later. Support for mod_backhand seems to be a significant reason (and getting backhand ported to v2.0 would be a win). Apart from backhand, are there, in the experience of the people on this list, any other significant apps out there that are keeping people from deploying httpd v2.x? Regards, Graham --

Hi, in my organization we are heavy users of Apache 1.3 and have no intent of migrating to 2 yet. The main reason why we are not migrating to 2 is related to bug 17877, which I filed for Apache 1.3 last year. We are using Apache as a reverse proxy using mod_rewrite and mod_proxy, and we need to proxy WebDAV requests. Those requests are often sent using chunked as their transfer-encoding, and our reverse proxies need to forward those. The vanilla mod_proxy rejects those requests (in both 1.3 and 2.0), but as we cannot control the DAV clients being used, this kind of behaviour is not an option we can tolerate. I patched mod_proxy in 1.3 to pass those requests 'AS IS' to the origin servers (which we KNOW for sure to be 1.1 compliant); otherwise they might just crash due to the size of the data being transferred (several hundreds of MB or even several GB). My patch would not port easily to version 2, as the structure of mod_proxy has changed significantly between 1.3 and 2. The availability of filtering in Apache 2 seemed at first a nice feature to use, but it turned out keepalive connections' requests are far from easy to handle, at least using mod_perl and the perl filter hooks. I do not know if this has been fixed yet or not, as I did not have time to look again at mod_perl, but this was sure a problem for us to start using this nice Apache 2 feature. Mathias.
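For context on why forwarding these requests is awkward: with HTTP/1.1 chunked transfer coding the body arrives as a series of chunks, each prefixed by a hex size line and terminated by a zero-size chunk, so a proxy never knows the total length up front - it must either stream the chunks through as-is or buffer everything (which is what blows up on multi-GB transfers). As a hedged illustration only (plain C, not Apache's actual parser; the function name is mine), a chunk-size line can be parsed like this:

```c
#include <stdlib.h>

/* Parse the chunk-size line of an HTTP/1.1 chunked body, e.g. "1a3f\r\n"
 * or "1a3f;ext=1\r\n".  Returns the chunk length, or -1 if the line is
 * malformed.  A length of 0 marks the final chunk. */
static long parse_chunk_size(const char *line)
{
    char *end;
    long len = strtol(line, &end, 16);

    if (end == line || len < 0) {
        return -1;              /* no hex digits at all, or a bogus sign */
    }
    /* only a chunk extension (";...") or CRLF may follow the size */
    if (*end != '\r' && *end != ';' && *end != '\0') {
        return -1;
    }
    return len;
}
```

A proxy passing the body 'AS IS' simply relays each size line and chunk unchanged until it sees the zero-size terminator, so no buffering is ever needed.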
Re: People still using v1.3 - finding out why
On Mon, 22 Nov 2004, Mathias Herberts wrote: The main reason why we are not migrating to 2 is related to bug 17877 I filed for Apache 1.3 last year. Erm, you file a bug report for Apache 1 and treat it as a reason not to upgrade to Apache 2? Should I Cc: this to Scott Adams? If you'd filed a bug for Apache 2, maybe it would have been fixed by now. Like, for example, when I had a mini-blitz on mod_proxy bugs ( http://marc.theaimsgroup.com/?l=apache-cvs&m=108862194618599&w=2 ). Or anyone else who has worked on mod_proxy in the time since your report. -- Nick Kew
Re: People still using v1.3 - finding out why
Nick Kew wrote: On Mon, 22 Nov 2004, Mathias Herberts wrote: The main reason why we are not migrating to 2 is related to bug 17877 I filed for Apache 1.3 last year. Erm, you file a bug report for Apache 1 and treat it as a reason not to upgrade to Apache 2? Should I Cc: this to Scott Adams? Well, put Scott in Cc if you want :-). I was using Apache 1 at the time I filed the bug and was not in the process of migrating. I fixed the bug and had our setup working without problems. The bug as I filed it is still open, as the HTTP compliance of forwarding chunked requests to origin servers still does not seem to be clear. If you'd filed a bug for Apache 2, maybe it would have been fixed by now. I started a thread back in June: http://www.gossamer-threads.com/lists/apache/dev/264479 but I think the HTTP compliance problem is still of some concern and is in the way of a solution being found. Mathias.
Re: People still using v1.3 - finding out why
On Mon, 22 Nov 2004, Mathias Herberts wrote: Nick Kew wrote: On Mon, 22 Nov 2004, Mathias Herberts wrote: The main reason why we are not migrating to 2 is related to bug 17877 I filed for Apache 1.3 last year. Erm, you file a bug report for Apache 1 and treat it as a reason not to upgrade to apache 2? Should I Cc: this to Scott Adams? Well, put Scott in Cc if you want :-). I was using Apache 1 at the time I filed the bug and was not in the process of migrating. I fixed the bug and had our setup work without problem. The bug as I filed it is still open as the HTTP compliance of forwarding Chunked requests to origin servers still does not seem to be clear. Well, mod_proxy in Apache 1.x doesn't claim to be HTTP/1.1, so there's no reason it should be expected to support chunked encoding. And since Apache 1.x is a maintenance-only product not in active development, that's not too likely to change - ever. If you'd filed a bug for Apache 2, maybe it would have been fixed by now. I started a thread back in june: http://www.gossamer-threads.com/lists/apache/dev/264479 Aha, I have no recollection of that. The thing about list posts is they're easy to overlook if the subject is not one of great concern at the time. And on June 3rd I was doing some urgent work as well as rehearsing an opera production that was on stage the following week. Bugzilla is different - it's a database. I will of course search for relevant bugs any time I'm contemplating any substantial update to a module. -- Nick Kew
[PATCH] another mod_deflate vs 304 response case
There's another mod_deflate vs 304 response problem which is being triggered by ViewCVS on svn.apache.org: when a CGI script gives a "Status: 304" response, the brigade contains a CGI bucket then the EOS, so it fails the "if this brigade begins with EOS, do nothing" test added for the proxied-304 case. I wonder if the simplest fix is to just explicitly test for a 304/204 response instead. Were there any more EOS-only brigade cases this won't catch?

Index: modules/filters/mod_deflate.c
===================================================================
--- modules/filters/mod_deflate.c	(revision 106098)
+++ modules/filters/mod_deflate.c	(working copy)
@@ -354,7 +354,7 @@
      * proxy may pass through an empty response for a 304 too.
      * So we just need to fix up the headers as if we had a body.
      */
-    if (APR_BUCKET_IS_EOS(APR_BRIGADE_FIRST(bb))) {
+    if (r->status == HTTP_NOT_MODIFIED || r->status == HTTP_NO_CONTENT) {
         if (!encoding || !strcasecmp(encoding, "identity")) {
             apr_table_set(r->headers_out, "Content-Encoding", "gzip");
         }
Re: mod_cgid, unix socket, ScriptSock directive
On Sat, 20 Nov 2004 12:11:34 -0500, Jeff Trawick [EMAIL PROTECTED] wrote: The ScriptSock directive must be used when there are two instances of the server with the same ServerRoot. If it is omitted, symptoms may include:

. wrong credentials for CGIs
. CGIs stop working for one server when the other server is terminated

It should be easy to avoid this configuration requirement by appending the parent pid to the name of the unix socket which is used *when the user didn't specify ScriptSock*, though there is a slight migration concern in case an administrator relies on the name of the unix socket for some other reason (e.g., to use its existence as knowledge that mod_cgid is ready for business). It should be easy to catch such a misconfiguration by adding the parent pid to the CGI request sent over the Unix socket, and fail the request (and log an appropriate message) if the parent pid is wrong. The code to check for the misconfiguration is small and is expected to be fool-proof (independent of what the user does); also, there is no way the change can result in stale unix sockets left around, unlike sticking the pid in the filename. See patch in attachment.

Index: modules/generators/mod_cgid.c
===================================================================
--- modules/generators/mod_cgid.c	(revision 106178)
+++ modules/generators/mod_cgid.c	(working copy)
@@ -89,6 +89,7 @@
 static server_rec *root_server = NULL;
 static apr_pool_t *root_pool = NULL;
 static const char *sockname;
+static pid_t parent_pid;
 
 /* Read and discard the data in the brigade produced by a CGI script */
 static void discard_script_output(apr_bucket_brigade *bb);
@@ -153,6 +154,9 @@
      * to find the script pid when it is time for that
      * process to be cleaned up
      */
+    pid_t ppid;            /* sanity check for config problems leading to
+                            * wrong cgid socket use
+                            */
     int core_module_index;
     int have_suexec;
     int suexec_module_index;
@@ -439,6 +443,7 @@
     apr_status_t stat;
 
     req.req_type = req_type;
+    req.ppid = parent_pid;
     req.conn_id = r->connection->id;
     req.core_module_index = core_module.module_index;
     if (suexec_mod) {
@@ -667,6 +672,14 @@
             continue;
         }
 
+        if (cgid_req.ppid != parent_pid) {
+            ap_log_error(APLOG_MARK, APLOG_CRIT, 0, main_server,
+                         "CGI request received from wrong server instance; "
+                         "see ScriptSock directive");
+            close(sd2);
+            continue;
+        }
+
         if (cgid_req.req_type == GETPID_REQ) {
             pid_t pid;
 
@@ -839,6 +852,7 @@
     for (m = ap_preloaded_modules; *m != NULL; m++)
         total_modules++;
 
+    parent_pid = getpid();
     sockname = ap_server_root_relative(p, sockname);
     ret = cgid_start(p, main_server, procnew);
     if (ret != OK ) {
Re: Event MPM - CLOSE_WAITs
Paul Querna wrote: I have Paul's version of the Event MPM patch up and running. The only glitch I saw bringing it up was a warning for two unused variables in http_core.c (patchlet below). Then I tried stressing it with SPECweb99 and saw some errors after several minutes: [error] (24)Too many open files: apr_socket_accept: (client socket) Need to bump your ulimits. I ran into this too. After raising it to 4096 open FDs per process, I didn't have any issues. (Linux default is 1024?) There's more than ulimits going on here. I had over 900 sockets stuck in CLOSE_WAIT for at least 10 minutes after I shut down the spec client. They cleared up right away when I shut down httpd. Greg
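For reference, the accept failure above is the process hitting its RLIMIT_NOFILE file-descriptor limit (the usual Linux default soft limit is 1024). The CLOSE_WAIT pile-up Greg describes is a different beast: CLOSE_WAIT means the client already closed its end but httpd never called close(), i.e. descriptors are being leaked, so raising the limit only delays the error. A process can raise its own soft limit up to the hard limit with standard POSIX calls; a minimal sketch (the helper name is mine, not httpd code):

```c
#include <sys/resource.h>

/* Raise the soft RLIMIT_NOFILE limit toward 'want', capped at the hard
 * limit.  Returns the soft limit now in effect, or -1 on error. */
static long raise_fd_limit(unsigned long want)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        return -1;
    }
    if (want > rl.rlim_max) {
        want = rl.rlim_max;  /* unprivileged processes can't exceed this */
    }
    if (want > rl.rlim_cur) {
        rl.rlim_cur = want;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            return -1;
        }
    }
    return (long)rl.rlim_cur;
}
```

The shell equivalent is `ulimit -n`, which is what "bump your ulimits" refers to.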
Re: 2.2 roadmap with respect to APR was Re: [NOTICE] CVS to SVN migration complete
William A. Rowe, Jr. wrote: At 11:03 PM 11/19/2004, Justin Erenkrantz wrote: --On Friday, November 19, 2004 8:01 PM -0600 William A. Rowe, Jr. [EMAIL PROTECTED] wrote: I'll offer a compelling argument. Allen offered patches, which Roy vetoed, to fix object sizes on 32/64/64 ILP bit platforms, and told Allen to go back and fix APR. I don't believe that Allen would be able to complete his changes in a reasonable timeframe. So, my opinion is that we let Allen branch apr off now and let him go at it at a measured pace, but we shouldn't intend to hold httpd 2.2 for that. -- I guess the ball is in Allen's court, then, but I'd also be happy to quickly whack at it. The concept is Simple(tm) and would be far less painful than actually fighting through all the (cast)s of his later patches. Bill I'd personally prefer to see this done in 2.2 - but reserving the right to change my mind ;-) Bill
Re: 2.2 roadmap with respect to APR was Re: [NOTICE] CVS to SVN migration complete
William A. Rowe, Jr. wrote: At 08:23 AM 11/20/2004, Jim Jagielski wrote: On Nov 20, 2004, at 12:03 AM, Justin Erenkrantz wrote: So, my opinion is that we let Allen branch apr off now and let him go at it at a measured pace, but we shouldn't intend to hold httpd 2.2 for that. -- justin +1. Of course, I am assuming that his 64bit fixes will likely break binary compatibility. It does - that's the rub. And, for 2.2, this was always the plan. And that's precisely the reason we should attack the 64 bit problem for 2.2. This will give the 2.2 series a much longer life than if we push off the 64 bit work to 2.4. Bill
Re: End of Life Policy
Paul Querna wrote: So, we are nearing a new stable branch. For the first time in a long time we will have a no-longer-developed-but-stable branch in wide use (2.0.x). I would like to have a semi-official policy on how long we will provide security backports for 2.0 releases. I suggest a value between 6 and 12 months. Many distributions will provide their own security updates anyway, so this would be a service to only a portion of our users. As always, this is open source, and I would not stop anyone from continuing support for the 2.0.x branch. My goal is to help set our end users' expectations for how long they have to upgrade to 2.2. Thoughts? -Paul Querna Why drive artificial constraints on any of our release processes? If a sizeable number of people (ie, a community) are interested in maintaining 2.0 for the next 20 years, then why prevent it 'by decree'? Bill
Re: 2.2 roadmap with respect to APR was Re: [NOTICE] CVS to SVN migration complete
At 10:08 AM 11/22/2004, Bill Stoddard wrote: William A. Rowe, Jr. wrote: At 08:23 AM 11/20/2004, Jim Jagielski wrote: On Nov 20, 2004, at 12:03 AM, Justin Erenkrantz wrote: So, my opinion is that we let Allen branch apr off now and let him go at it at a measured pace, but we shouldn't intend to hold httpd 2.2 for that. -- justin +1. Of course, I am assuming that his 64bit fixes will likely break binary compatibility. It does - that's the rub. And, for 2.2, this was always the plan. And that's precisely the reason we should attack the 64 bit problem for 2.2. This will give the 2.2 series a much longer life than if we push off the 64 bit work to 2.4. +1 - well said. By the way, Allen was out for the week of AC but returns this week; perhaps he can jump back into this conversation. Yes - I understand that this means 1.x will never be used by httpd. Version numbers are cheap. The APR project should become used to this; if they are active, and httpd moves at its normal pace, it would be easy to go through APR 2.x, 3.x, and land somewhere in version 4.x by the time httpd 2.4 or 3.0 walks out the door. Bill
Re: People still using v1.3 - finding out why
Nick Kew wrote: Well, mod_proxy in Apache 1.x doesn't claim to be HTTP/1.1, so there's no reason it should be expected to support chunked encoding. And since Apache 1.x is a maintenance-only product not in active development, that's not too likely to change - ever. The mod_proxy in v1.3 does claim to be an HTTP/1.1 proxy, but it doesn't claim to be a fully compliant one. There were some protocol things that were too hard to do in v1.3, and so were only fixed in v2.0. Active development is currently being done on v2.1 (soon to be v2.2) with active backports to v2.0. Currently v1.3 is maintenance only, which basically means security fixes are all you will get at the moment. Regards, Graham --
Re: 2.2 roadmap with respect to APR was Re: [NOTICE] CVS to SVN
Bill Stoddard wrote: William A. Rowe, Jr. wrote: At 08:23 AM 11/20/2004, Jim Jagielski wrote: On Nov 20, 2004, at 12:03 AM, Justin Erenkrantz wrote: So, my opinion is that we let Allen branch apr off now and let him go at it at a measured pace, but we shouldn't intend to hold httpd 2.2 for that. -- justin +1. Of course, I am assuming that his 64bit fixes will likely break binary compatibility. It does - that's the rub. And, for 2.2, this was always the plan. And that's precisely the reason we should attack the 64 bit problem for 2.2. This will give the 2.2 series a much longer life than if we push off the 64 bit work to 2.4. I agree... Otherwise, we won't see many people move to 2.2, since 3rd party modules won't be available for it, since module developers will know that within a short amount of time they'll need to redo their modules for the 64bit fixes. -- === Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/ There are 10 types of people: those who read binary and everyone else.
Re: End of Life Policy
On Monday 22 November 2004 09:11, Bill Stoddard wrote: Why drive artificial constraints on any of our release processes? If a sizeable number of people (ie, a community) are interested in maintaining 2.0 for the next 20 years, then why prevent it 'by decree'? I'd certainly have to agree with this logic. By the standard proposed (6-12 months following a new stable release), 1.3 would have been dead, buried, and have a house built on top of it by now. Still, we have recently seen on this very list that there IS a community using 1.3 for various reasons. While I can understand, at a certain point, scaling back development resources and focusing requests towards a particular branch, I also believe that no arbitrary EOL cycle should be imposed on a community-supported product. There are no real commercial support obligations incumbent on Apache to continue to provide access and support for patch roll-in for old versions, are there? My own personal $.02 is that EOL should be little more than pulling the status updates from being sent out and not actively requesting backports from 2.x to 1.3.x. Anything beyond that should be driven by community... or lack thereof. Eventually, when no one is supporting the codebase and security issues pile up, with warnings on the website and release files that "this is an old version provided for legacy users only; we do not recommend production systems run this version, instead use [insert new branch here]", users will upgrade and 1.3.x (and later 2.0.x) will die a natural death. If there are corporate costs involved in legacy version support then I can perhaps understand an EOL, but with a mostly community-developed and -supported product, I see little reason to introduce an arbitrary limitation on the life of a branch. -- Wayne S. Frazee Any sufficiently developed bug is indistinguishable from a feature.
Re: [PATCH] another mod_deflate vs 304 response case
On Mon, 22 Nov 2004, Joe Orton wrote: There's another mod_deflate vs 304 response problem which is being triggered by ViewCVS on svn.apache.org: when a CGI script gives a "Status: 304" response the brigade contains a CGI bucket then the EOS, so it fails the "if this brigade begins with EOS, do nothing" test added for the proxied-304 case. I wonder if the simplest fix is to just explicitly test for a 304/204 response instead. Were there any more EOS-only brigade cases this won't catch? Okay, but why the next three lines? Why would "Content-Encoding: gzip" *ever* be set on a 304?
Re: [PATCH] another mod_deflate vs 304 response case
Cliff Woolley wrote: Okay, but why the next three lines? Why would Content-Encoding: gzip *ever* be set on a 304? In theory the 304 might be in response to a freshness check on the particular variant that was compressed. (Assuming I am not mixing up my content and transfer encodings). Regards, Graham --
Re: [PATCH] another mod_deflate vs 304 response case
On Mon, Nov 22, 2004 at 11:26:18AM -0500, Cliff Woolley wrote: On Mon, 22 Nov 2004, Joe Orton wrote: There's another mod_deflate vs 304 response problem which is being triggered by ViewCVS on svn.apache.org: when a CGI script gives a "Status: 304" response the brigade contains a CGI bucket then the EOS, so it fails the "if this brigade begins with EOS, do nothing" test added for the proxied-304 case. I wonder if the simplest fix is to just explicitly test for a 304/204 response instead. Were there any more EOS-only brigade cases this won't catch? Okay, but why the next three lines? Why would "Content-Encoding: gzip" *ever* be set on a 304? I was wondering that too. The answer to the latter is that it *won't* be sent for a 304, because the http_header filter knows better than that. joe
Re: 2.2 roadmap with respect to APR was Re: [NOTICE] CVS to SVN migration complete
At 11:08 AM 11/22/2004, Cliff Woolley wrote: On Mon, 22 Nov 2004, William A. Rowe, Jr. wrote: Yes - I understand that this means 1.x will never be used by httpd. Version numbers are cheap. The APR project should become used to this; if they are active, and httpd moves at its normal pace, it would be easy to go through APR 2.x, 3.x, and land somewhere in version 4.x by the time httpd 2.4 or 3.0 walks out the door. Do you understand how ridiculous that sounds? How so? Let's imagine the release -after- 2.2 takes another 12-18 months. Perhaps the event mpm gets plugged in, and it takes three months, alone, just to find all the gotchas of thread-jumping. In the meantime, apr is adopted by other projects. These coders offer up some solid functionality for their own applications, and the apr team agrees. Yes, I realize most of the time new functionality can be a minor bump of apr. Yes, I realize apr has not been all that active in the past 12 months. All I'm arguing is that apr shouldn't be addicted to some 1:1 correspondence of httpd and apr bumps. Bill
Re: [PATCH] another mod_deflate vs 304 response case
On Mon, 22 Nov 2004, Joe Orton wrote: Okay, but why the next three lines? Why would Content-Encoding: gzip *ever* be set on a 304? I was wondering that too. The answer to the latter is that it *won't* be sent for a 304 because the http_header filter knows better than that. Well, okay. But should we even set it in that case? Is it correct to send it in the 204 case?
Re: 2.2 roadmap with respect to APR was Re: [NOTICE] CVS to SVN migration complete
On Mon, 22 Nov 2004, William A. Rowe, Jr. wrote: Yes - I understand that this means 1.x will never be used by httpd. Version numbers are cheap. The APR project should become used to this; if they are active, and httpd moves at its normal pace, it would be easy to go through APR 2.x, 3.x, and land somewhere in version 4.x by the time httpd 2.4 or 3.0 walks out the door. Do you understand how ridiculous that sounds?
Re: mod_cgid, unix socket, ScriptSock directive
* Jeff Trawick [EMAIL PROTECTED] wrote: code to check for the misconfiguration is small and is expected to be fool-proof (independent of what the user does); also, no way the change can result in stale unix sockets left around, unlike sticking the pid in the filename see patch in attachment +1. nd -- Winnetous Erbe: http://pub.perlig.de/books.html#apache2
Whitespace strip filter for httpd v2.1
Hi all, I have attached a small filter module that strips leading whitespace from text files. This would typically be used to remove the indenting whitespace found inside HTML files, resulting in a significant reduction in network traffic for some sites. I didn't bother to include trailing whitespace removal, as this involves buffering (leading whitespace removal requires no buffering). Comments? Regards, Graham --

/* Copyright 2001-2004 The Apache Software Foundation
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#include "httpd.h"
#include "http_config.h"
#include "apr_buckets.h"
#include "apr_general.h"
#include "apr_lib.h"
#include "util_filter.h"
#include "http_request.h"

#include <ctype.h>

static const char s_szStripFilterName[] = "StripFilter";
module AP_MODULE_DECLARE_DATA strip_filter_module;

typedef struct {
    int bEnabled;
} StripFilterConfig;

/**
 * Context for the filter
 */
typedef struct strip_ctx_t {
    int mode;
} strip_ctx;

/**
 * Create server config for strip filter
 */
static void *StripFilterCreateServerConfig(apr_pool_t *p, server_rec *s)
{
    StripFilterConfig *pConfig = apr_pcalloc(p, sizeof *pConfig);
    pConfig->bEnabled = 0;
    return pConfig;
}

/**
 * Insert the strip filter into the output filter stack
 */
static void StripFilterInsertFilter(request_rec *r)
{
    StripFilterConfig *pConfig = ap_get_module_config(r->server->module_config,
                                                      &strip_filter_module);
    if (!pConfig->bEnabled) {
        return;
    }
    ap_add_output_filter(s_szStripFilterName, NULL, r, r->connection);
}

/**
 * Strip whitespace from this bucket
 *
 * If the variable buf is provided, the resulting stripped output
 * will be written to buf. The resulting state at the end of the
 * bucket is saved to ctx.
 *
 * If the variable buf is NULL, then just the length of the new
 * bucket is returned. The resulting state at the end of the
 * bucket is not saved in this case.
 *
 * We track the following states:
 * o Ignore leading whitespace
 * o Pass through content
 */
#define STRIP_IGNORE_LEADING 1
#define STRIP_PASS_THROUGH 2
static apr_size_t StripFilterStrip(strip_ctx *ctx, const char *data,
                                   char *buf, apr_size_t len)
{
    apr_size_t from = 0;
    apr_size_t to = 0;
    int mode = ctx->mode;

    for (from = 0; from < len; ++from) {

        /* end of line yet? */
        if (13 == data[from] || 10 == data[from]) {
            mode = STRIP_IGNORE_LEADING;
            if (buf) {
                buf[to] = data[from];
            }
            to++;
            continue;
        }

        /* middle of line - should we ignore the character? */
        switch (mode) {
        case STRIP_IGNORE_LEADING: {
            if (!apr_isspace(data[from])) {
                if (buf) {
                    buf[to] = data[from];
                }
                to++;
                mode = STRIP_PASS_THROUGH;
            }
            break;
        }
        case STRIP_PASS_THROUGH: {
            if (buf) {
                buf[to] = data[from];
            }
            to++;
            break;
        }
        }
    }

    /* update the ending mode, but only if we have a buffer */
    if (buf) {
        ctx->mode = mode;
    }

    return to;
}

/**
 * Filter whitespace from the output
 */
static apr_status_t StripFilterOutFilter(ap_filter_t *f,
                                         apr_bucket_brigade *pbbIn)
{
    request_rec *r = f->r;
    conn_rec *c = r->connection;
    strip_ctx *ctx = f->ctx;
    apr_bucket *pbktIn;
    apr_bucket_brigade *pbbOut;

    if (!ctx) {
        f->ctx = ctx = apr_pcalloc(f->r->pool, sizeof(*ctx));
        ctx->mode = STRIP_IGNORE_LEADING;
    }

    pbbOut = apr_brigade_create(r->pool, c->bucket_alloc);
    for (pbktIn = APR_BRIGADE_FIRST(pbbIn);
         pbktIn != APR_BRIGADE_SENTINEL(pbbIn);
         pbktIn = APR_BUCKET_NEXT(pbktIn)) {
        const char *data;
        apr_size_t len;
        char *buf;
        apr_size_t n;
        apr_bucket *pbktOut;

        if (APR_BUCKET_IS_EOS(pbktIn)) {
            apr_bucket *pbktEOS = apr_bucket_eos_create(c->bucket_alloc);
            APR_BRIGADE_INSERT_TAIL(pbbOut, pbktEOS);
            continue;
        }

        /* read a bucket */
        apr_bucket_read(pbktIn, &data, &len, APR_BLOCK_READ);

        /* work out the resultant length */
        n = StripFilterStrip(ctx, data, NULL, len);
        buf = apr_bucket_alloc(n, c->bucket_alloc);

        /* write out the bucket */
        StripFilterStrip(ctx, data, buf, len);
        pbktOut = apr_bucket_heap_create(buf, n, apr_bucket_free,
                                         c->bucket_alloc);
        APR_BRIGADE_INSERT_TAIL(pbbOut, pbktOut);
    }

    apr_brigade_cleanup(pbbIn);
    return ap_pass_brigade(f->next, pbbOut);
}
Re: 2.2 roadmap with respect to APR was Re: [NOTICE] CVS to SVN
--On Monday, November 22, 2004 11:27 AM -0500 Jim Jagielski [EMAIL PROTECTED] wrote: I agree... Otherwise, we won't see many people move to 2.2 since 3rd party modules won't be available for it, since module developers will know that within a short amount of time, they'll need to redo their modules for the 64bit fixes. I expect that as it stands right now most 2.0 modules will compile for 2.2 with very minor (if any) changes. If we 'fix' 64-bit issues now, then that means that their modules are going to undergo massive changes. That *will* affect the 2.2 uptake rate because our third parties will take a lot of time to get their modules 64-bit clean (if they do at all). I still think this needs to be punted to 2.4. It's just *way* too big. We'll also have to fix up all of httpd to be 64-bit clean. And, that's going to push 2.2 out even further. This is something we should take our time on - not try to rush out the door and then have to go back and clean up because we rushed 2.2 (and APR 2.0) out the door. No thanks. -- justin
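For background on why the 64-bit fixes ripple so widely: the portability trap is the data model. ILP32 makes int, long and pointers all 4 bytes; LP64 (most 64-bit Unix) widens long and pointers to 8 while int stays 4; Win64 is LLP64, where even long stays 4 bytes. Any module that stows a pointer or file offset in a plain int or long works on one model and truncates on another, which is why the cleanup touches so many declarations. A small sketch of the portable spellings in plain C99 (APR wraps these as apr_int32_t, apr_off_t, and so on; record_t and ptr_as_int are just illustrative names of mine):

```c
#include <stdint.h>
#include <stddef.h>

/* Field widths that hold steady across ILP32, LP64 and LLP64: */
typedef struct {
    int32_t  id;        /* always 4 bytes */
    int64_t  offset;    /* always 8 bytes - suitable for file offsets      */
    size_t   len;       /* object-sized: 4 on 32-bit, 8 on 64-bit builds   */
    intptr_t user_data; /* integer wide enough to round-trip a pointer     */
} record_t;

/* Stuffing a pointer into 'long' truncates on LLP64 (Win64), where long
 * is only 4 bytes; intptr_t is the portable spelling. */
static intptr_t ptr_as_int(void *p)
{
    return (intptr_t)p;
}
```

Third-party modules written against these fixed-width types recompile cleanly on either word size, which is the compatibility cost being weighed above.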
Re: Whitespace strip filter for httpd v2.1
Graham Leggett wrote: Hi all, I have attached a small filter module that strips leading whitespace from text files. This would typically be used to remove the indenting whitespace found inside HTML files, resulting in a significant reduction in network traffic for some sites. I didn't bother to include trailing whitespace removal, as this involved buffering (leading whitespace removal requires no buffering). Comments? +1 in concept, but code not reviewed. mod_deflate would effectively do the same thing, right? What user base are we targeting with this module (looking for an answer a bit more specific than 'web clients that don't support gzip decoding')? Bill
Re: 2.2 roadmap with respect to APR was Re: [NOTICE] CVS to SVN
At 12:17 PM 11/22/2004, Justin Erenkrantz wrote: I expect that as it stands right now most 2.0 modules will compile for 2.2 with very minor (if any) changes. If we 'fix' 64-bit issues now, then that means that their modules are going to undergo massive changes. This I will attest to; porting mod_aspdotnet to 2.2, I encountered:

- lost APR_BRIGADE_FOREACH
- apr_pool_get_parent renamed to apr_pool_parent_get

and of course, linking to libapr-1 / libaprutil-1. Other than that, it was very clean. That *will* affect the 2.2 uptake rate because our third parties will take a lot of time to get their modules 64-bit clean (if they do at all). WHO CARES?!? That's on them. They can release bug fixes after bug fixes, or extend their list of tested/supported platforms as they get around to it. It's their problem. As it stands, WE will be in THEIR WAY with incorrect code in apr and httpd. At that point - they cannot address the problems until we release the next version minor (2.4 or 3.0). How unfair is that? I still think this needs to be punted to 2.4. It's just *way* too big. Way too big for you to handle? Ok. Not asking you to develop, test or even review. We'll also have to fix up all of httpd to be 64-bit clean. Not hard. And, that's going to push 2.2 out even further. It's pointless arguing this aspect. Are we done with 2.2? If you want to vote on 2.2 - then vote on 2.2 - don't get in the way of other developers' progress with hand waving. That, I think, is the biggest lesson I took out of the httpd luncheons. This is something we should take our time on - not try to rush out the door and then have to go back and clean up because we rushed 2.2 (and APR 2.0) out the door. No thanks. -- justin I could say the same about... ...nevermind. The lesson we learned, in a nutshell: When httpd 2.1-dev is mostly satisfactory, and we have an alpha that the community decides is ready to take to 2.2 - we fork.
That fork gets stabilization improvements until it's beta, and finally GA quality. That GA release is titled 2.2.0. In the meantime, head becomes 2.3-dev. Once again, nobody is standing in the way of code fixes. They can be percolated down to the 2.2 branch (before or after 2.2.0 is blessed), and even all the way to the 2.0 branch. That requires more review, as it should, so that 'stable' branches don't destabilize.
Re: svn commit: r106181 - httpd/httpd/branches/2.0.x/docs/manual/mod
On Mon, 22 Nov 2004 [EMAIL PROTECTED] wrote: Author: nd Date: Mon Nov 22 05:43:39 2004 New Revision: 106181 Modified: httpd/httpd/branches/2.0.x/docs/manual/mod/core.html.de httpd/httpd/branches/2.0.x/docs/manual/mod/core.html.en httpd/httpd/branches/2.0.x/docs/manual/mod/core.html.ja.euc-jp httpd/httpd/branches/2.0.x/docs/manual/mod/core.xml.de httpd/httpd/branches/2.0.x/docs/manual/mod/core.xml.ja httpd/httpd/branches/2.0.x/docs/manual/mod/core.xml.meta httpd/httpd/branches/2.0.x/docs/manual/mod/mod_ssl.html.en Log: update transformation Thanks. I'm having trouble getting the transformation stuff working, as I generally do in a new checkout. -- :wq
YAML: httpd distro package managers
Due to the huge variety and creativity displayed by those fine folks who package the Apache Web Server up for distribution with the gzillion operating systems which come with it, users tend to have problems with the default configuration, which seem very odd to those of us doing mailing list and IRC support of the product. Some of these default configurations are just plain wrong, while others are simply confusing. Various of us on IRC have discussed the benefits of having a mailing list where the package managers can ask questions, or be berated for their decisions. And, with any luck, we can get the various distros to compare notes and arrive at some compromises. To that end, I'd like to request Yet Another Mailing List, called something like [EMAIL PROTECTED] or [EMAIL PROTECTED], or some such. The list would be public, although I'd be willing to moderate if needed. Thoughts? Thanks. -- FREE AS IN PUPPIES
Re: 2.2 roadmap with respect to APR was Re: [NOTICE] CVS to SVN
--On Monday, November 22, 2004 1:08 PM -0600 William A. Rowe, Jr. [EMAIL PROTECTED] wrote: That *will* affect the 2.2 uptake rate because our third parties will take a lot of time to get their modules 64-bit clean (if they do at all). WHO CARES?!? That's on them. They can release bug fixes after bug fixes, or extend their list of tested/supported platforms as they get around to it. It's their problem. No, but as we learned from 2.0, third-party modules have a direct impact on the uptake rate. So, making it harder for third-parties to port would make it much harder to see 2.2 in deployment. As it stands, WE will be in THEIR WAY with incorrect code in apr and httpd. At that point - they cannot address the problems until we release the next version minor (2.4 or 3.0). How unfair is that? 2.0 has the same problem and I've yet to see any real complaints. I don't want to see 2.2 be the 'end-all-be-all' - I want 2.2 to be a usable and incremental improvement over 2.0. I still think this needs to be punted to 2.4. It's just *way* too big. Way too big for you to handle? Ok. Not asking you to develop, test or even review. Way too big for third-parties to handle easily. We'll also have to fix up all of httpd to be 64-bit clean. Not hard. So say you without any code to back that statement up. We don't even know the extent of the changes at this point. It's pointless arguing this aspect. Are we done with 2.2? If you want to vote on 2.2 - then vote on 2.2 - don't get in the way of other developers' progress with hand waving. That, I think, is the biggest lesson I took out of the httpd luncheons. No offense, but I see you holding up the development by saying that 2.2 must wait for changes that aren't even written nor likely to be quickly accomplished and tested. -- justin
Re: Whitespace strip filter for httpd v2.1
Bill Stoddard wrote: +1 in concept but code not reviewed. mod_deflate would effectively do the same thing, right? What user base are we targeting with this module (looking for an answer a bit more specific than 'web clients that don't support gzip decoding'). Bandwidth reduction for servers that cannot afford the horsepower for a full deflate? Whitespace stripping is negligible in terms of horsepower, deflate (I understand) is not. Regards, Graham
Re: [PATCH] another mod_deflate vs 304 response case
At 10:26 AM 11/22/2004, Cliff Woolley wrote: On Mon, 22 Nov 2004, Joe Orton wrote: There's another mod_deflate vs 304 response problem which is being triggered by ViewCVS on svn.apache.org: when a CGI script gives a "Status: 304" response the brigade contains a CGI bucket then the EOS, so fails the "if this brigade begins with EOS do nothing" test added for the proxied-304 case. Okay, but why the next three lines? Why would Content-Encoding: gzip *ever* be set on a 304? Let me expand on this question... ... we've all seen broken browsers. Why attempt to gzip non-2xx responses at all? I'd prefer (at least on servers with modest error message responses) to ensure the user can *read* any error response, in spite of any broken browser behavior w.r.t. deflate. It seems like a flag (even default) dropping gzip from non-2xx class responses could be a very useful thing. At least, if the browser results are a mess, it's due to a good response. Thoughts? Bill
Re: Whitespace strip filter for httpd v2.1
Graham Leggett wrote: Hi all, I have attached a small filter module that strips leading whitespace from text files. This would typically be used to remove the indenting whitespace found inside HTML files, resulting in a significant reduction in network traffic for some sites. I didn't bother to include trailing whitespace removal, as this involved buffering (leading whitespace removal requires no buffering). Comments? this may be outside the scope of the example code, but filters like this probably ought to consider the ETag header. my personal opinion is that because the content will not be exactly the same bitwise as, say, a file on disk, any ETag header should be made weak to be compliant. I think roy didn't agree, but I never really understood his response last time I brought this up. anyway, if you start considering the ETag header, then you need to consider what happens when default-handler returns 304 and short-circuits your filter entirely. in this case the ETag returned would be the strong ETag created by ap_set_etag, even though the cached response was using a weak ETag. so, you would probably want to create a filter_init hook to weaken any ETag. but that weakening might be in vain if your filter doesn't find any whitespace to change, so... anyway, I'm bringing this up only as something to consider - it's a real life problem for me that I have tried several times to figure out within the current filter architecture but have come up short. so, for a simple filter like this to be fully RFC compliant would be a big help to others that want to apply filters in more complex settings. fwiw --Geoff
Re: YAML: httpd distro package managers
On Mon, Nov 22, 2004 at 11:29:01AM -0500, Rich Bowen wrote: To that end, I'd like to request Yet Another Mailing List, called something like [EMAIL PROTECTED] or [EMAIL PROTECTED], or some such. The list would be public, although I'd be willing to moderate if needed. Thoughts? +1 - imo there's more than enough mutilated httpd packages out there to warrant such a list. Perhaps packagers@ would be better? vh Mads Toftum -- `Darn it, who spiked my coffee with water?!' - lwall
Re: YAML: httpd distro package managers
Mads Toftum wrote: +1 - imo there's more than enough mutilated httpd packages out there to warrant such a list. Perhaps packagers@ would be better? Would a packagers' list be restricted to httpd only, or would it include other projects like APR? I think it should. Regards, Graham
Re: YAML: httpd distro package managers
On Mon, Nov 22, 2004 at 10:23:41PM +0200, Graham Leggett wrote: Would a packagers' list be restricted to httpd only, or would it include other projects like APR? I think it should. APR being a prerequisite for httpd certainly makes me think the same - but I don't think we should spread much further than that (ie. no need to cover the joys of jarfiles ;) just my $.02 vh Mads Toftum -- `Darn it, who spiked my coffee with water?!' - lwall
Re: YAML: httpd distro package managers
Mads Toftum wrote: On Mon, Nov 22, 2004 at 10:23:41PM +0200, Graham Leggett wrote: Would a packagers' list be restricted to httpd only, or would it include other projects like APR? I think it should. APR being a prerequisite for httpd certainly makes me think the same - but I don't think we should spread much further than that (ie. no need to cover the joys of jarfiles ;) just my $.02 +1 to that. packaging of httpd and directly related items. (ie how 3rd party modules are organized would fall nicely into this category). -Paul
Re: svn commit: r105919 - in httpd/httpd/trunk: include modules/http server/mpm server/mpm/experimental/event
[EMAIL PROTECTED] wrote: Log: The Event MPM. woo-hoo! Thanks for driving this, Paul, and thanks to Justin and Sander for all the svn work. Status: Should work as a drop-in replacement for all non-SSL servers. SSL Requests that use HTTP 1.1 Pipelining do not currently work. cannot confirm or deny the SSL problem at the moment, but I'm looking at the CLOSE-WAITs first. Originally based on the patch by Greg Ames. ...which was originally based on a patch by Bill Stoddard. Greg
Re: 2.1.1 tarballs posted...
Justin Erenkrantz wrote: http://httpd.apache.org/dev/dist/ Grab the 2.1.1 tarballs while they're fresh. Please start testing these releases - they should have the intent of becoming the beginning of the 2.2.x series modulo all of the cleanup work we'll have to do after we branch. For now, 2.1.1 includes APR/APR-util 1.0.1 - we can adjust this later, if need be. 2.1.1 is currently at alpha level - if we get enough +1s (i.e. 3 or more), it can then be promoted to beta. -- justin +1 for beta Matthieu
Re: Whitespace strip filter for httpd v2.1
On Mon, 22 Nov 2004, Graham Leggett wrote: I have attached a small filter module that strips leading whitespace from text files. This would typically be used to remove the indenting whitespace found inside HTML files, resulting in a significant reduction in network traffic for some sites. I didn't bother to include trailing whitespace removal, as this involved buffering (leading whitespace removal requires no buffering). Well, we've had people submit modules in the past that stripped the extraneous whitespace from HTML files, and we rejected them. Check the archives for messages from a guy named fabio rohrich, who wrote a module called mod_blank to see why that might have been. I don't completely remember the details. --Cliff
Re: [PATCH] another mod_deflate vs 304 response case
At 10:26 AM 11/22/2004, Cliff Woolley wrote: On Mon, 22 Nov 2004, Joe Orton wrote: There's another mod_deflate vs 304 response problem which is being triggered by ViewCVS on svn.apache.org: when a CGI script gives a "Status: 304" response the brigade contains a CGI bucket then the EOS, so fails the "if this brigade begins with EOS do nothing" test added for the proxied-304 case. Okay, but why the next three lines? Why would Content-Encoding: gzip *ever* be set on a 304? Let me expand on this question... ... we've all seen broken browsers. Why attempt to gzip non-2xx responses at all? I'd prefer (at least on servers with modest error message responses) to ensure the user can *read* any error response, in spite of any broken browser behavior w.r.t. deflate. It seems like a flag (even default) dropping gzip from non-2xx class responses could be a very useful thing. At least, if the browser results are a mess, it's due to a good response. Thoughts? Bill Because some custom error response pages on some sites are HUGE... and they WANT to compress them. For a while there ( especially in Europe and most notably in Germany ) it seemed like there was a contest going on to see who could come up with the most bloated and complicated error response pages for their Web farm/ commercial Server. Tons of JavaScript and flashing lights and bouncing balls and advertising links, you name it. I have seen some base error templates exceed 200,000 bytes of HTML and JavaScript just to say 'We're sorry... that page can't be found'. Bottom line: If someone WANTS to be doing this sort of thing and they WANT to compress the responses they certainly should be able to and it should all work. Anything that prevents it from working is still just a bug that's being ignored. Suggestion: Make sure someone can compress any response they want via config but then make sure to NOT recommend doing certain things and let them swim at their own risk. No lifeguard on duty. Yours... Kevin Kiley
Re: [PATCH] another mod_deflate vs 304 response case
--On Monday, November 22, 2004 5:30 PM -0500 [EMAIL PROTECTED] wrote: Suggestion: Make sure someone can compress any response they want via config but then make sure to NOT recommend doing certain things and let them swim at their own risk. No lifeguard on duty. +1. -- justin
Re: svn commit: r106229 - /httpd/site/branches/css-test
* [EMAIL PROTECTED] wrote: Author: pquerna Date: Mon Nov 22 14:55:35 2004 New Revision: 106229 Added: httpd/site/branches/css-test/ - copied from r106228, httpd/site/trunk/ Log: create a copy of the trunk to test out a CSS based website. FWIW: I'd really like something based on what the docs are built on (semantically and syntactically valid xhtml strict, valid CSS and perhaps the build system [which is in stage of being rebuilt currently ;-)]). And I'm ready to help. nd -- That which I don't know plays, in terms of sheer numbers, *no* role. -- Helmut Schellong in dclc
Re: svn commit: r106229 - /httpd/site/branches/css-test
André Malo wrote: * [EMAIL PROTECTED] wrote: Author: pquerna Date: Mon Nov 22 14:55:35 2004 New Revision: 106229 Added: httpd/site/branches/css-test/ - copied from r106228, httpd/site/trunk/ Log: create a copy of the trunk to test out a CSS based website. FWIW: I'd really like something based on what the docs are built on (semantically and syntactically valid xhtml strict, valid CSS and perhaps the build system [which is in stage of being rebuilt currently ;-)]). ++1, I want that too! And I'm ready to help. Sweet. I decided to start small and try out using CSS instead of the hard coded colors. -Paul
moving docs build tools to httpd
Currently the docs build tools are misplaced under infrastructure. I'd like to have them moved from /infrastructure/site-tools/trunk/httpd-docs-build/ to /httpd/docs-build/trunk (or the like) What do you think? nd -- A comprehensive work (also for those switching from Apache 1.3) -- from a review http://pub.perlig.de/books.html#apache2
Re: moving docs build tools to httpd
--On Tuesday, November 23, 2004 12:25 AM +0100 André Malo [EMAIL PROTECTED] wrote: Currently the docs build tools are misplaced under infrastructure. I'd like to have them moved from /infrastructure/site-tools/trunk/httpd-docs-build/ to /httpd/docs-build/trunk (or the like) What do you think? +1. If you don't have the karma to do so, I do. =) -- justin
Re: moving docs build tools to httpd
* Justin Erenkrantz [EMAIL PROTECTED] wrote: Currently the docs build tools are misplaced under infrastructure. I'd like to have them moved from /infrastructure/site-tools/trunk/httpd-docs-build/ to /httpd/docs-build/trunk (or the like) What do you think? +1. If you don't have the karma to do so, I do. =) -- justin I'm not sure about the karma, probably not. If there are no objections, please do it :-) nd -- package Hacker::Perl::Another::Just;print [EMAIL PROTECTED] split/::/ =__PACKAGE__]}~; # André Malo # http://pub.perlig.de #
Possible leak in core_output_filter()
Hi All, I'm investigating a difficult to diagnose problem and I came across the following code in core_output_filter(): 4152 nvec = 0; 4153 nbytes = 0; 4154 temp = APR_BRIGADE_FIRST(temp_brig); 4155 APR_BUCKET_REMOVE(temp); 4156 APR_BRIGADE_INSERT_HEAD(b, temp); 4157 apr_bucket_read(temp, &str, &n, APR_BLOCK_READ); 4158 vec[nvec].iov_base = (char*) str; 4159 vec[nvec].iov_len = n; 4160 nvec++; 4161 4162 /* Just in case the temporary brigade has 4163 * multiple buckets, recover the rest of 4164 * them and put them in the brigade that 4165 * we're sending. 4166 */ 4167 for (next = APR_BRIGADE_FIRST(temp_brig); 4168 next != APR_BRIGADE_SENTINEL(temp_brig); 4169 next = APR_BRIGADE_FIRST(temp_brig)) { 4170 APR_BUCKET_REMOVE(next); 4171 APR_BUCKET_INSERT_AFTER(temp, next); 4172 temp = next; 4173 apr_bucket_read(next, &str, &n, 4174 APR_BLOCK_READ); 4175 vec[nvec].iov_base = (char*) str; 4176 vec[nvec].iov_len = n; 4177 nvec++; 4178 } 4179 4180 apr_brigade_destroy(temp_brig); where 4034 struct iovec vec[MAX_IOVEC_TO_WRITE]; and 3995 #define MAX_IOVEC_TO_WRITE 16 but there's no explicit check here to ensure we don't exceed the array's boundary. :/ Our problem occurs in apr_brigade_destroy(temp_brig). Now, it may be that temp_brig is such that this can never happen, but I don't see how based on this code: 4116 /* Create a temporary brigade as a means 4117 * of concatenating a bunch of buckets together 4118 */ 4119 if (last_merged_bucket) { 4120 /* If we've concatenated together small 4121 * buckets already in a previous pass, 4122 * the initial buckets in this brigade 4123 * are heap buckets that may have extra 4124 * space left in them (because they 4125 * were created by apr_brigade_write()). 4126 * We can take advantage of this by 4127 * building the new temp brigade out of 4128 * these buckets, so that the content 4129 * in them doesn't have to be copied again.
4130 */ 4131 apr_bucket_brigade *bb; 4132 bb = apr_brigade_split(b, 4133 APR_BUCKET_NEXT(last_merged_bucket)); 4134 temp_brig = b; 4135 b = bb; 4136 } 4137 else { 4138 temp_brig = apr_brigade_create(f->c->pool, 4139 f->c->bucket_alloc); 4140 } 4141 4142 temp = APR_BRIGADE_FIRST(b); 4143 while (temp != e) { 4144 apr_bucket *d; 4145 rv = apr_bucket_read(temp, &str, &n, APR_BLOCK_READ); 4146 apr_brigade_write(temp_brig, NULL, NULL, str, n); 4147 d = temp; 4148 temp = APR_BUCKET_NEXT(temp); 4149 apr_bucket_delete(d); 4150 } Now, all this code is inside a big if: 4101 if (n) { 4102 if (!fd) { 4103 if (nvec == MAX_IOVEC_TO_WRITE) { 4104 /* woah! too many. buffer them up, for use later. */ 4105 apr_bucket *temp, *next; 4106 apr_bucket_brigade *temp_brig; which implies nvec could be carrying information over from previous iterations through the code... but all of this is inside a huge while loop where 4026 while (b && !APR_BRIGADE_EMPTY(b)) { 4027 apr_size_t nbytes = 0; 4028 apr_bucket *last_e = NULL; /*
Re: Whitespace strip filter for httpd v2.1
Cliff Woolley wrote: On Mon, 22 Nov 2004, Graham Leggett wrote: I have attached a small filter module that strips leading whitespace from text files. This would typically be used to remove the indenting whitespace found inside HTML files, resulting in a significant reduction in network traffic for some sites. I didn't bother to include trailing whitespace removal, as this involved buffering (leading whitespace removal requires no buffering). Well, we've had people submit modules in the past that stripped the extraneous whitespace from HTML files, and we rejected them. Check the archives for messages from a guy named fabio rohrich, who wrote a module called mod_blank to see why that might have been. I don't completely remember the details. In a nutshell, it is very hard to strip whitespace out of HTML, and have the resulting file still be as valid as the old one. Browsers are very forgiving on some things and not on others. For example, you should not strip code inside of JavaScript, or 'pre' tags. --Cliff
Re: moving docs build tools to httpd
Currently the docs build tools are misplaced under infrastructure. I'd like to have them moved from /infrastructure/site-tools/trunk/httpd-docs-build/ to /httpd/docs-build/trunk (or the like) What do you think? +1 from me. ---Hiroaki Kawai
Re: svn commit: r106229 - /httpd/site/branches/css-test
Paul Querna [EMAIL PROTECTED] writes: Author: pquerna Date: Mon Nov 22 14:55:35 2004 New Revision: 106229 Added: httpd/site/branches/css-test/ - copied from r106228, httpd/site/trunk/ Log: create a copy of the trunk to test out a CSS based website. FWIW: I'd really like something based on what the docs are built on (semantically and syntactically valid xhtml strict, valid CSS and perhaps the build system [which is in stage of being rebuilt currently ;-)]). ++1, I want that too! And I'm ready to help. Sweet. I decided to start small and try out using CSS instead of the hard coded colors. ++1. I'm ready to help, too. I was playing with web site documentation to start providing translation of those pages and wondered why we haven't moved to CSS yet. -- Yoshiki Hayashi
Re: moving docs build tools to httpd
André Malo [EMAIL PROTECTED] writes: Currently the docs build tools are misplaced under infrastructure. I'd like to have them moved from /infrastructure/site-tools/trunk/httpd-docs-build/ to /httpd/docs-build/trunk (or the like) What do you think? +1. If you don't have the karma to do so, I do. =) -- justin I'm not sure about the karma, probably not. If there are no objections, please do it :-) +1 After conversion is done, sending out instruction of how to update existing checkout would be nice. I believe you can just svn switch to the new location but I'm not sure. -- Yoshiki Hayashi
Re: Bug #31228
Geoffrey Young wrote: Garrett Rooney wrote: Justin Erenkrantz wrote: --On Friday, September 17, 2004 1:07 PM -0400 Garrett Rooney [EMAIL PROTECTED] wrote: Could someone please take a look at bug 31228 in bugzilla? It's just adding a new response code (226) which is defined in rfc3229. I'm working on a module that implements a type of rfc3229 delta encoding, and it'd be nice if people didn't have to apply a patch to Apache in order to use it. FWIW, I looked at it the other night and I'm mildly iffy on adding it. Any particular reason? It seems like providing support for the response code specified in the RFC would only help encourage people to actually implement support for it in Apache, which seems like a good thing to me... provided that garrett's patch is technically the correct way to add a new status code to apache, and that RFC3229 went through the proper motions to capture exclusive use of 226 within http, I don't see any reason why 226 shouldn't be added in 2.1. This sort of got dropped on the ground once Justin told me how to make my module work without explicit support for the status code in the core, and it's totally my fault for not pressing it, but I'd still like to get this in or to hear what the reasoning for not including it is. So, uhh, ping? Any comments other than "I'm iffy"? And is there any reason not to add it? -garrett
Re: Possible leak in core_output_filter()
On Mon, 22 Nov 2004, Ronald Park wrote: I'm investigating a difficult to diagnose problem and I came across the following code in core_output_filter(): Yuck. I HATE the core_output_filter. I probably know that code as well as anybody, and yet every time I look at it, it makes my brain hurt so bad I want to scream. It's just a big pile of jumbled up intertwined heuristics. I can see I'm going to have to just break down and rewrite the damned thing as a series of (gasp) SMALLER FUNCTIONS, rather than one big monolithic beast. But first, this bug. It would be very helpful if you could do one or both of the following: * compile with bucket debugging enabled (see slide 17 of http://www.cs.virginia.edu/~jcw5q/talks/apache/apache2moddebugging.ppt) * give me a list of the buckets in each brigade that gets passed in to the core output filter in response to the request that reproduces the problem. (set a breakpoint on the line: 4026 while (b && !APR_BRIGADE_EMPTY(b)) { and then run "dump_brigade b" in gdb each time the breakpoint gets hit) --Cliff
Re: [PATCH] another mod_deflate vs 304 response case
Quoting William A. Rowe, Jr. [EMAIL PROTECTED]: Okay, but why the next three lines? Why would Content-Encoding: gzip *ever* be set on a 304? Because Content-* header fields in a 304 response describe what the response entity would contain if it were a 200 response. Therefore, the header field must be the same as it would be for a 200. The body must be dropped by the HTTP filter. Roy