Re: Time for new apr-* releases soon? Corrections inc for .vcproj conversion
On Friday 05 October 2007, Ruediger Pluem wrote: Once APR is out, I'll plan on a httpd release too. There are several backport proposals in the STATUS file missing only one vote. So I guess it is voting time :-). Maybe someone could also look at http://issues.apache.org/bugzilla/show_bug.cgi?id=42829 which is somewhat annoying and seems to have a working patch. TIA. Cheers, Stefan
prefork: hung processes on graceful reload
On Monday 08 October 2007, Jim Jagielski wrote: On Oct 5, 2007, at 2:07 PM, Stefan Fritsch wrote: Maybe someone could also look at http://issues.apache.org/bugzilla/show_bug.cgi?id=42829 A quick review seems to indicate that the suggested patch could result in a worker accepting a request and then being killed off, something that we've tried to prevent... The patch was broken in other ways, too. I have attached a different patch that is definitely an improvement over 2.2.6 (though a small race condition remains). Can you look at it again and maybe include it in 2.2.7? Thanks. Cheers, Stefan
Apache memory usage
Hi, there is still the problem that during a request, many bucket brigades are created which are only cleaned up after the request is finished, see http://issues.apache.org/bugzilla/show_bug.cgi?id=23567 . There was some discussion about retaining ownership of a brigade when ap_pass_brigade() is called, and then reusing the brigades [1]. But this does not seem to be implemented in 2.2 yet. However, I found two locations where the creation of a new brigade can be avoided: - In buffer_output()/ap_old_write_filter(), it is possible to keep the brigade around and reuse it after the next flush. - In ap_http_chunk_filter(), a new brigade is created for every flush bucket. But this is not really necessary if the flush bucket is the last bucket in the brigade. I don't know yet whether the saving is significant in practice. But it saves two brigades on every combination of ap_rwrite()/ap_rflush(), and mod_php uses ap_r*() quite a lot. A patch is attached. It's against 2.2.6 but applies to trunk. Cheers, Stefan [1] http://mail-archives.apache.org/mod_mbox/httpd-dev/200504.mbox/9b5d906c83b6650b9512eb9e7ac96be1%40ricilake.net

diff -urNad trunk~/modules/http/chunk_filter.c trunk/modules/http/chunk_filter.c
--- trunk~/modules/http/chunk_filter.c	2006-07-12 05:38:44.0 +0200
+++ trunk/modules/http/chunk_filter.c	2007-12-02 18:52:20.955996723 +0100
@@ -85,7 +85,9 @@
         }
         if (APR_BUCKET_IS_FLUSH(e)) {
             flush = e;
-            more = apr_brigade_split(b, APR_BUCKET_NEXT(e));
+            if (e != APR_BRIGADE_LAST(b)) {
+                more = apr_brigade_split(b, APR_BUCKET_NEXT(e));
+            }
             break;
         }
         else if (e->length == (apr_size_t)-1) {
diff -urNad trunk~/server/protocol.c trunk/server/protocol.c
--- trunk~/server/protocol.c	2006-07-12 05:38:44.0 +0200
+++ trunk/server/protocol.c	2007-12-02 18:51:19.200477476 +0100
@@ -1385,6 +1385,7 @@
 typedef struct {
     apr_bucket_brigade *bb;
+    apr_bucket_brigade *tmpbb;
 } old_write_filter_ctx;

 AP_CORE_DECLARE_NONSTD(apr_status_t) ap_old_write_filter(
@@ -1399,8 +1400,7 @@
          * can simply insert our buffered data at the front and
          * pass the whole bundle down the chain.
          */
-        APR_BRIGADE_CONCAT(ctx->bb, bb);
-        bb = ctx->bb;
+        APR_BRIGADE_PREPEND(bb, ctx->bb);
         ctx->bb = NULL;
     }
@@ -1449,7 +1449,12 @@
     ctx = r->output_filters->ctx;
     if (ctx->bb == NULL) {
-        ctx->bb = apr_brigade_create(r->pool, c->bucket_alloc);
+        if (ctx->tmpbb == NULL) {
+            ctx->tmpbb = ctx->bb = apr_brigade_create(r->pool, c->bucket_alloc);
+        }
+        else {
+            ctx->bb = ctx->tmpbb;
+        }
     }
     return ap_fwrite(f->next, ctx->bb, str, len);
Re: Apache memory usage
Hi, On Monday 03 December 2007, Stefan Fritsch wrote: But I found two locations where the creation of a new brigade could be avoided: - In buffer_output()/ap_old_write_filter(), it is possible to keep the brigade around and reuse it after the next flush. - In ap_http_chunk_filter(), a new brigade is created for every flush bucket. But it is not really necessary if the flush bucket is the last bucket in the brigade. I don't know yet whether the saving is significant in practice. But it saves two brigades on every combination of ap_rwrite()/ap_rflush() and mod_php uses ap_r*() quite a lot. A patch is attached. It's against 2.2.6 but applies to trunk. I now got a report that this patch helped in one case where the system used to run out of memory pretty quickly [1]. Could someone please review the patch? Maybe it's not too late for 2.2.7 yet? Cheers, Stefan [1] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=421557
Re: Apache memory usage
On Sunday 09 December 2007, Ruediger Pluem wrote: But I think your patch to server/protocol.c can be done much simpler. Can you try the following and let us know if this helps as well:

Index: server/protocol.c
===================================================================
--- server/protocol.c	(Revision 602730)
+++ server/protocol.c	(Arbeitskopie)
@@ -1397,9 +1397,7 @@
          * can simply insert our buffered data at the front and
          * pass the whole bundle down the chain.
          */
-        APR_BRIGADE_CONCAT(ctx->bb, bb);
-        bb = ctx->bb;
-        ctx->bb = NULL;
+        APR_BRIGADE_PREPEND(bb, ctx->bb);
     }

     return ap_pass_brigade(f->next, bb);

Yes, that works as well. I wanted to avoid calling APR_BRIGADE_PREPEND when the temporary brigade is empty. But I don't really know whether APR_BRIGADE_PREPEND is so expensive that this makes sense. An alternative would be to check this using APR_BRIGADE_EMPTY. Thanks for the review. Cheers, Stefan
CVE-2007-6203
*) http_protocol: Escape request method in 413 error reporting. Determined to be not generally exploitable, but a flaw in any case. PR 44014 [Victor Stinner victor.stinner inl.fr] This is CVE-2007-6203. Maybe you should add the reference to the CHANGES file? Cheers, Stefan
Re: CVE-2007-6203
On Monday 17 December 2007, William A. Rowe, Jr. wrote: This is CVE-2007-6203. Maybe you should add the reference to the CHANGES file? I don't think that's a good idea since we don't want to mislead users into thinking a security issue exists here. It potentially does, just not of httpd's creation. I liked the text for the autoindex issue: *) mod_autoindex: Add in Type and Charset options to IndexOptions directive. This allows the admin to explicitly set the content-type and charset of the generated page and is therefore a viable workaround for buggy browsers affected by CVE-2007-4465 (cve.mitre.org). [Jim Jagielski] I'd use the phrase "hypothetically buggy clients" in this case, since there is not a single proof of this incident. I agree. It might be exploitable with buggy browser plugins using HTTP request splitting. See e.g. http://www.adobe.com/support/security/advisories/apsa06-01.html It is definitely a bug in flash and not httpd, of course. But the CVE id could be added for reference. Stefan
PR42829: graceful restart with multiple listeners using prefork MPM can result in hung processes
Hi, this bug can be quite annoying because of the resources used by the hung processes. It happens e.g. under Linux when epoll is used. The patch from http://issues.apache.org/bugzilla/show_bug.cgi?id=42829#c14 has been in Debian unstable/Ubuntu hardy for several weeks and there have not been any complaints. It would be nice if you could look at it and commit it to svn. Thanks, Stefan
Re: PR42829: graceful restart with multiple listeners using prefork MPM can result in hung processes
Joe Orton wrote: I mentioned in the bug that the signal handler could cause undefined behaviour, but I'm not sure now whether that is true. On Linux I can reproduce some cases where this will happen, which are all due to well-defined behaviour: 1) with some (default on Linux) accept mutex types, apr_proc_mutex_lock() will loop on EINTR. Hence, children blocked waiting for the mutex do hang until the mutex is released. Fixing this would need some APR work, new interfaces, blah This is not a problem. On graceful-stop or reload the processes will get the lock one by one and die (or hang somewhere else). I have never seen a left-over process hanging in this function. 2) prefork's apr_pollset_poll() loop-on-EINTR loop was not checking die_now; the child holding the mutex will not die immediately if poll fails with EINTR, and will hence appear to hang until a new connection is received. Fixed by http://svn.apache.org/viewvc?rev=613260&view=rev IMHO this is the same as 3), as apr_pollset_poll() will be called again but with all fds already closed. I can also reproduce a third case, but I'm not sure about the cause: 3) apr_pollset_poll() is blocking despite the fact that the listening fds are supposedly already closed before entering the syscall. This is the main problem in my experience. I vaguely recall some issue with epoll being mentioned before in the context of graceful stop, but I can't find a reference. Colm? A very tempting explanation for (3) would be the fact that prefork only polls for POLLIN events, not POLLHUP or POLLERR, or indeed that it does not check that the returned event really is a POLLIN event; POSIX says on poll: ... poll() shall set the POLLHUP, POLLERR, and POLLNVAL flag in revents if the condition is true, even if the application did not set the corresponding bit in events. I also had problems under Solaris 9 where processes blocked in lr->accept_func() if the fd had been closed in the meantime. 
Unfortunately, I cannot reproduce it now even with an unpatched 2.2.6 and I don't remember which configuration I used. But this could be related to the returned event not being POLLIN. And there's even a comment in the prefork poll code to the effect that maybe checking the returned event type would be a good idea. But from a brief play around here, fixing the poll code to DTRT doesn't help. I think more investigation is needed to understand exactly what is going on here. (Also, just to note; I can reproduce (3) even with my patch to dup2 against the listener fds.) On Linux with epoll, the hanging processes just block in apr_pollset_poll(), so checking the return value won't do any good. Maybe the problem is that (AIUI) poll() returns POLLNVAL if an fd is not open, while epoll does not have something similar. In epoll.c, a comment says APR_POLLNVAL is not handled by epoll. Or should epoll return EPOLLHUP in this case? Stefan
RE: XSS vulnerability in mod_negotiation - status in 2.2.8?
Hi, On Wed, 6 Feb 2008, Boyle Owen wrote: It is clear to me now that this is a storm in a teacup. I note also that the vulnerability never made it to the CVE database so I think we can decide on no further action. That's not true. CVE-2008-0455 and CVE-2008-0456 have been assigned to this issue. Maybe the apache security team should contact mitre so that these entries are marked as disputed. Cheers, Stefan
Re: 2.2.9 status
Hi, for 2.2.9, it would be nice to fix the epoll issue PR 42829, IMHO. The patch in the bug report works, even if it may not be the perfect solution. Cheers, Stefan
PR42829 (was: 2.2.9 status)
On Thursday 29 May 2008, Jim Jagielski wrote: for 2.2.9, it would be nice to fix the epoll issue PR 42829, IMHO. The patch in the bug report works, even if it may not be the perfect solution. From what I can see, there is no real patch available or fully tested enough to warrant anything for 2.2.9 right now. https://issues.apache.org/bugzilla/attachment.cgi?id=21137 has been in Debian testing and unstable for about 6 months without problems. It is not an elegant solution but it works. Considering that it is not clear what an elegant solution would look like, including this patch might make sense.
Re: PR42829 (was: 2.2.9 status)
On Thursday 29 May 2008, Jim Jagielski wrote: https://issues.apache.org/bugzilla/attachment.cgi?id=21137 has been in Debian testing and unstable for about 6 months without problems. It is not an elegant solution but it works. Considering that it is not clear what an elegant solution would look like, including this patch might make sense. Even if so, we cannot simply put it in 2.2.9. It needs to be in trunk first, then tested, then proposed for backport to 2.2.x and then voted on there before being backported. Timing-wise, it is VERY unlikely this will happen in time for 2.2.9. However, I just added some other prefork fixes to STATUS in hopes of getting them into 2.2.9... I will bug you again after 2.2.9, then.
Re: PR42829
On Friday 30 May 2008, Paul Querna wrote: https://issues.apache.org/bugzilla/attachment.cgi?id=21137 has been in Debian testing and unstable for about 6 months without problems. It is not an elegant solution but it works. Considering that it is not clear what an elegant solution would look like, including this patch might make sense. Please don't put these kinds of patches into the debian apache packages, especially ones that don't exist in trunk. (Things that are committed to trunk, and are just awaiting backport, I'm less concerned about, but this patch is a behavior change at the core of the MPMs.) Bugs as grave as this one are not acceptable in Debian packages for extended periods of time. The bug report has been open for over a year; I attached my patch on 2007-11-16. It has been marked as critical since 2008-01-16. If you don't want such patches in the Debian package, you need to fix such bugs faster (and comment on patches in bugzilla faster). Of course I understand that this is difficult because there are never enough people to fix bugs (we have the same problem). I admit that I should have followed up on the discussion in February, but I was quite busy and then forgot about it. Cheers, Stefan
Re: PR42829
On Friday 30 May 2008, Nick Kew wrote: I don't think I share your implied view about how grave this is. I guess this is the main (or only?) problem with this patch/bug. I got quite a few people complaining about it and therefore I wanted to fix it. I respect your opinion, but when maintaining your own patches, please consider also the problems discussed in my article at http://www.regdeveloper.co.uk/2006/11/04/apache_packages_support_vacuum/ (which goes to the heart of why Debian may get a pretty hostile reception amongst some Apache folks). Yes, this is definitely a problem, but not easy to fix. I hope I will find some time soon to try to improve the situation. In any case the problem is less about patches and more about the configuration and the additional scripts we ship with apache. For example, the configuration is split into many small files because this makes upgrades easier, due to the way dpkg handles config files. To take it to the extreme, a fork being called 'Apache' isn't acceptable either. Please work with us here, even though it's a very low barrier for you to put patches in your package, much lower than to get it applied upstream (here). Fixing bugs is not forking. We don't include many patches that are not either bug fixes or related to build or file system layout issues. For example, we don't add features or change the behaviour (unless the component comes in a separate package that is clearly marked as non-standard, like the mpm-itk). And the bug fixes are usually from branches/2.2.x or from the Apache bugzilla. To be fair, I think Stefan _is_ working with us: he's put his patch in bugzilla, and (now, though not originally) he's raised it on the list. I raised the issue in January (http://marc.info/?l=apache-httpd-dev&m=119945416529706&w=2) and there was some discussion with Joe Orton, but no conclusion about what the proper fix would be. But since I had a fix that worked for me, I didn't see any reason to revert the patch. 
My mail in January already mentioned that the patch is in Debian, but I guess now after the openssl debacle people are more sensitive. If you think it would help, I could go through our patches and post a list of the non-Debian specific ones here. Cheers, Stefan
Re: [PATCH] SIGBUS when compiled with gcc 4.3
Hi, On Wednesday 23 July 2008, Joe Orton wrote: when compiled with gcc 4.3 on Sparc under Linux, Apache 2.2.9 sometimes crashes with SIGBUS in the ssl shmcb code. Adding __attribute__((__noinline__)) (which is already present in ssl_scache_shmcb.c for the memset call) to the memcpy calls seems to fix the problem. Blech, thanks for tracking this down. There is an outstanding PR that this happens with some Solaris compiler too, for which nobody has worked out a way to prevent inlining. So I'd be inclined to bite the bullet and backport the rewritten shmcb from the trunk, which should avoid these alignment issues forever more. Can you try the attached? The new shmcb code works for me without SIGBUS, too. But I only have a test setup with no real users, and I haven't tested very much. If you choose to backport the code, will you do it for 2.0, too? See https://issues.apache.org/bugzilla/show_bug.cgi?id=43943 If not, you should at least add the __attribute__((__noinline__)) to 2.0, IMHO. Cheers, Stefan
PR 42829: apache prefork hanging in apr_pollset_poll() on graceful restarts or shutdowns
Hi, there is the problem that with prefork mpm, child processes can hang in apr_pollset_poll() on graceful restarts or shutdowns (https://issues.apache.org/bugzilla/show_bug.cgi?id=42829). This happens under Linux with epoll, and there is now also a report that the same problem exists with Solaris 10 ports. Joe Orton posted a summary of the problem here: http://marc.info/?l=apache-httpd-dev&m=120221601627610&w=2 Are there any new ideas on this? In the absence of a better solution, wouldn't it be reasonable to apply the patch that simply avoids calling apr_pollset_poll() when the listening sockets have already been closed? See https://issues.apache.org/bugzilla/attachment.cgi?id=21137 Cheers, Stefan
Re: Need suggestions for adding tproxy support to mod_proxy
Hi, On Wed, 17 Dec 2008, Pranav Desai wrote: I am trying to add tproxy4 (http://www.balabit.com/support/community/products/tproxy/) support to mod_proxy to achieve transparency. It basically involves a kernel patch which allows binding of a socket to a foreign address, among other things. At the app layer we only need to call setsockopt(). Linux 2.6.28 includes tproxy support again (see Documentation/networking/tproxy.txt in the kernel source). You may want to check that your httpd patch works with that kernel, too. BTW, I think this would be a nice feature to have in httpd 2.4. Cheers, Stefan
Re: Graceful restart not so graceful?
Hi, thanks for following up on this and sorry for the late response. On Wednesday 07 January 2009, Jeff Trawick wrote: Initial testing of your idea for a timeout was promising. I couldn't reproduce any hangs under Linux with the patch you committed to trunk. In my patch I tried to avoid having an idle apache wake up periodically for no good reason. But if you don't think that is a problem, please backport your patch to 2.2.x. Cheers, Stefan
more apr_pollset_* error checking
Hi, the epoll limit in new linux kernels can cause problems because of insufficient error checking in httpd. The most obvious problem was fixed in https://issues.apache.org/bugzilla/show_bug.cgi?id=46467 in MPM prefork, but mod_cgi, mod_proxy_connect, and the other MPMs should also check for errors in apr_pollset_create/add. Patch is at https://issues.apache.org/bugzilla/attachment.cgi?id=23105 For the documentation: This means that the limit in /proc/sys/fs/epoll/max_user_instances needs to be twice the number of MaxClients to handle the worst case of every process doing a proxy CONNECT or cgi request. Cheers, Stefan
Automatically fall back to read/write when sendfile fails?
Hi, is there any particular reason why httpd does not automatically fall back to read/write if sendfile failed [1]? Or is the only problem that nobody has written the code yet? I have googled a bit but have not found any discussion about this. Cheers, Stefan [1] The linux sendfile man page has: Applications may wish to fall back to read(2)/write(2) in the case where sendfile() fails with EINVAL or ENOSYS.
Re: [RFC] A new hook: invoke_handler and web-application security
On Thursday 09 April 2009, Graham Dumpleton wrote: Only you would know that. But then, I could be pointing you at the wrong MPM. There is from memory another by another name developed outside of ASF which intends to do the same think. The way it is implemented is probably going to be different and may be the one I am actually thinking of. I can't remember the name of it right now. Maybe you mean MPM itk, which can change to different users for different vhosts? http://mpm-itk.sesse.net/
mod_perl test failure with CVE-2009-1195 fix in 2.2.12
Hi, when backporting the CVE-2009-1195 fix in r773881+r779472 from branches/2.2.x to 2.2.9, I noticed that it causes a test failure when compiling mod_perl 2.0.4. Since I am neither familiar with mod_perl nor with the mod_include internals, maybe someone else can check if this is a necessary breakage or if the fix can be adjusted to be more backward compatible. The test output:

t/api/add_config# connecting to http://localhost:8560/TestAPI__add_config/
1..9
# Running under perl version 5.01 for linux
# Current time local: Mon Jun 1 15:56:35 2009
# Current time GMT: Mon Jun 1 13:56:35 2009
# Using Test.pm version 1.25
# Using Apache/Test.pm version 1.31
...
# expected: 8
# received: 40
not ok 7
...
FAILED test 7
Failed 1/9 tests, 88.89% okay

The interesting test file in mod_perl's source is ./t/response/TestAPI/add_config.pm. It looks like the test sets Options ExecCGI and expects $r->allow_options to be 8 (Apache2::Const::OPT_EXECCGI), but the actual value is 40 (Apache2::Const::OPT_EXECCGI|Apache2::Const::OPT_INCNOEXEC). Cheers, Stefan
Re: mod_perl test failure with CVE-2009-1195 fix in 2.2.12
On Monday 01 June 2009, Jeff Trawick wrote: This patch works for me; please try it with the Perl suite. That fixed it. Thanks Stefan
PR 38330: Make RemoveType override TypesConfig mime.types?
On Saturday 20 December 2008, Stefan Fritsch wrote: for people who use a system-wide mime.types as TypesConfig, it would be nice if there was a way to remove some type associations in the apache config. For example, nowadays .es seems to be ecmascript (according to RFC 4329), but it is also often used for Spanish language encoding. Currently the only solutions are to edit the global mime.types, to use a separate mime.types for apache, or to rename all .es files to something else. Making RemoveType also override the entry from TypesConfig would be a far better solution. A patch that does that is at https://issues.apache.org/bugzilla/attachment.cgi?id=22817 (PR 38330). Cheers, Stefan Ping?
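For illustration, the configuration the patch is meant to enable (the directives are real; the file path and the language choice are just examples):

```apacheconf
TypesConfig /etc/mime.types

# With the patch, RemoveType also overrides the ecmascript mapping
# that TypesConfig installed for .es, so the extension can be used
# as a language suffix instead:
RemoveType .es
AddLanguage es .es
```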
Re: mod_noloris: mitigating against slowloris-style attack
Nick Kew wrote: Is this worth hacking up, or more trouble than it saves? It seems it already exists (I haven't tested it, though): ftp://ftp.monshouwer.eu/pub/linux/mod_antiloris/mod_antiloris-0.3.tar.bz2
mod_deflate DoS
Hi, we have received a bug report [1] that a DoS is possible with mod_deflate since it does not stop compressing large files even after the network connection has been closed. This allows an attacker to consume large amounts of CPU if there is a largish (10 MB) file available that has mod_deflate enabled. An exploit is as easy as:

for a in $(seq 1 100) ; do curl -H "Accept-Encoding: gzip" -is http://url.to/large/file.txt | head -1 ; done

François Guerraz, the bug submitter, also suggested a patch:

--- mod_deflate.c	2008-01-04 15:23:50.0 +0100
+++ mod_deflate.c.new	2009-06-26 16:50:36.0 +0200
@@ -691,6 +691,10 @@
             continue;
         }

+        if (r->connection->aborted) {
+            return APR_ECONNABORTED;
+        }
+
         /* read */
         apr_bucket_read(e, &data, &len, APR_BLOCK_READ);

This greatly reduces the problem. But even with it, quite a bit of data is compressed (maybe determined by APR_MMAP_LIMIT == 4MB?). Cheers, Stefan [1] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=534712
Segfault with fix for CVE-2009-1891
Hi, I have backported r791454 to 2.2.3 in Debian 4.0 and have received a report [1] about segfaults with mod_deflate and mod_php (5.2.0). As far as I understand it, the reason is that mod_php uses ap_rwrite which creates transient buckets. When the connection is closed by the client, these buckets sometimes stay in the bucket brigade when ap_pass_brigade returns an error for the compressed data of an earlier bucket. If deflate_out_filter gets called again with the same brigade, the memory of the transient buckets is no longer valid, causing a segfault. This patch seems to fix the issue:

--- apache2-2.2.3~/modules/filters/mod_deflate.c
+++ apache2-2.2.3/modules/filters/mod_deflate.c
@@ -512,6 +512,7 @@
                 APR_BRIGADE_INSERT_TAIL(ctx->bb, bkt);
                 rv = ap_pass_brigade(f->next, ctx->bb);
                 if (rv != APR_SUCCESS) {
+                    apr_brigade_cleanup(bb);
                     return rv;
                 }
                 continue;
@@ -543,6 +544,7 @@
             /* Send what we have right now to the next filter. */
             rv = ap_pass_brigade(f->next, ctx->bb);
             if (rv != APR_SUCCESS) {
+                apr_brigade_cleanup(bb);
                 return rv;
             }
         }

I could not reproduce the segfault with Debian 5.0, containing Apache 2.2.9 and php 5.2.8. The same for Debian unstable with 2.2.12 and php 5.2.10. Therefore, I also tried to take the whole mod_deflate.c from 2.2.9 into 2.2.3, but it did not fix the segfaults [2]. Is there some change from 2.2.3 to 2.2.9 (which cannot be in mod_deflate.c) that fixes the issue? Or is it just coincidence that 2.2.9 and 2.2.12 do not segfault, and should the above patch be included in the next 2.2.x? Cheers, Stefan [1] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=537665 [2] To reproduce, use this script:

<?php
for ($i = 0; $i < 10; $i++) {
    echo "blah\n";
    echo $i;
}
echo "blubb\n";

and request it with curl -is -H 'Accept-Encoding: gzip' http://localhost/x.php | head -10 With 2.2.3, I get a segfault for about 30% of the requests.
Re: Segfault with fix for CVE-2009-1891
Ruediger Pluem wrote: far as I understand it, the reason is that mod_php uses ap_rwrite which creates transient buckets. When the connection is closed by the client, these buckets sometimes stay in the bucket brigade when ap_pass_brigade returns an error for the compressed data of an earlier bucket. If deflate_out_filter gets called again with the same brigade, the memory of the transient buckets is no longer valid, causing a segfault. IMHO the mod_deflate filter shouldn't be called with the same brigade again, or at least the brigade should have been set aside in this case. So that should never happen. I was not really able to figure out how or why this happens. What I actually see is that the brigade in mod_deflate contains one heap bucket and one transient bucket where the latter points to invalid memory. Therefore I assumed the bucket was around from the last call to buffer_output. But php does strange things (which I don't understand) involving longjmp to handle errors, maybe that's got something to do with it. I don't know. Maybe a setaside is now done somewhere where it didn't happen before. I guess the best approach is to add the above patch to your backport and dig further. Nevertheless a backtrace of the segfault would help in any case to understand better what is going on and why it might not happen on more recent versions. A backtrace is available in the bug report, but it does not reveal much: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=537665#30
Re: Segfault with fix for CVE-2009-1891
William A. Rowe, Jr. wrote: One helpful detail, Stefan, would be if this is worker-specific or can be reproduced with prefork. That helps narrow down the number of places to consider your question. This happened with prefork, Debian supports mod_php only with prefork. As I recall, we have several filter bug fixes in deflate over the past years, so it's not a really surprising result. But I have also tried the 2.2.9 mod_deflate.c with 2.2.3, and it didn't fix the issue. Therefore the fix would have to be in some other source file.
Re: Segfault with fix for CVE-2009-1891
Right, it is not really helpful, but as you seem to be able to reproduce the issue can you please create a backtrace on your own, preferably with an unstripped and -g compiled php (which doesn't seem to be the case in the current backtrace) . Backtrace is attached. Looking at it again, the brigade that contains the transient bucket does not seem to belong to the oldwrite filter. It's the brigade that is used by mod_php to pass the eos bucket. But since mod_php only creates eos buckets and uses ap_fwrite for everything else, I don't know where the transient bucket comes from.
Changing the default algorithm in htpasswd
Hi, given that crypt() hashes can nowadays be brute-forced on commodity hardware (especially since the password length is limited to 8 characters), wouldn't it make sense for htpasswd to use something stronger by default? Cheers, Stefan
Re: Segfault with fix for CVE-2009-1891
Right, it is not really helpful, but as you seem to be able to reproduce the issue can you please create a backtrace on your own, preferably with an unstripped and -g compiled php (which doesn't seem to be the case in the current backtrace) . Backtrace is attached. I forgot to mention that mod_deflate.c:537 is actually line 524, because I have applied Rüdiger Plüm's patch for no deflate for HEAD requests.
mod_reqtimeout: mitigating against slowloris-style attack (different approach)
Hi, since there was some doubt that the mod_antiloris and mod_noloris modules use the correct approach against slowloris-type attacks, I hacked up something different. mod_reqtimeout allows setting timeouts for the request-reading and body-reading phases. It is implemented as an input connection filter that sets the socket timeout so that the total request time does not exceed the timeout value. I have done only limited testing but it seems to work (with prefork). The source is here: http://www.sfritsch.de/mod_reqtimeout/mod_reqtimeout.c Any comments are welcome. Some questions/thoughts: - Is this a reasonable approach or did I overlook something important? If the former, would you consider including something like it with httpd? - Would it work with mpm_event? Would it make sense to only insert it for ssl with mpm_event? If yes, how do I do that? - How do I prevent the filter from being inserted for other protocols (echo, ftp)? - Obviously the body read timeout is only useful for sites that do not allow file uploads. But an extension to a minimum body transfer rate would probably be possible. Also, it would be possible to make the body read timeout configurable per directory, which may be useful if file uploads are only allowed for authorized users. - This does not defend against attacks like: HEAD request, wait, HEAD request, wait, ... But the keepalive timeout can be tuned for that. - If you test it under Linux or *BSD, don't get confused by the accept filter. - Apache should respond with HTTP_REQUEST_TIME_OUT and not HTTP_BAD_REQUEST when there is a timeout reading the request. Cheers, Stefan
Re: mod_reqtimeout: mitigating against slowloris-style attack (different approach)
On Tuesday 01 September 2009, Nick Kew wrote: How does it relate to the Timeout directive? The Timeout directive sets the maximum time between two packets. mod_reqtimeout will set the socket timeout to the minimum of {Timeout, time left for the current request}. You can set RequestTimeout to much lower values than Timeout and it still works as expected. One comment: you're returning APR_EGENERAL if there's no config. I'd strongly suggest you always do-nothing if not configured. Or if not-configured is a can't-happen event, catch it with an ap_assert. Not-configured should not happen and would be a bug; I will change it to ap_assert. - Obviously the body read timeout is only useful for sites that do not allow file uploads. But an extension to a minimum body transfer rate would probably be possible. Also, it would be possible to make the body read timeout configurable by directory, which may be useful if file uploads are only allowed by authorized users. The body part probably overlaps with what existing modules like mod_evasive and the bandwidth-management modules do. Have you looked at them? I haven't looked very closely at them. But from the descriptions it looked like they defend against too much traffic/requests. I try to defend against connections with very little traffic. And unless they fiddle with the socket timeout too, they are limited by the value of the Timeout directive. Once apache is in a blocking read, the modules can't do anything until the timeout expires. When Timeout is set to a large value like 300s (e.g. because of mod_cgi), mod_reqtimeout can still limit the maximum time for reading the body to a much lower value (like 15s for sites that have only forms and no file uploads). I also start counting the body timeout only when there is the first read for the body. That way the time needed by apache to process the request is not included. 
- Apache should respond with HTTP_REQUEST_TIME_OUT and not HTTP_BAD_REQUEST when there is a timeout reading the request. In the slowloris case, it needs to time out before there's any such thing as an HTTP request, so it won't be sending an HTTP response. But I guess you're talking about the body timeout? No, about the request. When apache has received at least one line of the request, it currently responds with HTTP_BAD_REQUEST when there is a timeout before the complete request was read. In this case HTTP_REQUEST_TIME_OUT is more appropriate. It means the client did not produce a request within the time that the server was prepared to wait.
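The timeout selection described earlier in this thread (the socket timeout becomes the smaller of the server-wide Timeout and the time left for the current request) can be sketched in plain C. This is an illustrative model, not mod_reqtimeout's actual code; the function name and second-resolution arguments are my assumptions.

```c
#include <assert.h>

/* Illustrative model of the rule above: the per-read socket timeout is
 * the minimum of the server-wide Timeout and the time remaining until
 * the whole request must be complete. All values are in seconds.
 * (Hypothetical helper, not mod_reqtimeout's real code.) */
static int effective_timeout(int server_timeout, int request_deadline, int now)
{
    int time_left = request_deadline - now;
    if (time_left < 0)
        time_left = 0;               /* deadline already passed */
    return (server_timeout < time_left) ? server_timeout : time_left;
}
```

With Timeout 300 and 10 seconds left for the request, the effective socket timeout becomes 10; with Timeout 5 it stays 5, so a small Timeout still wins.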
Re: mod_reqtimeout: mitigating against slowloris-style attack (different approach)
On Tuesday 01 September 2009, Torsten Foertsch wrote: Just a few thoughts: - You use GLOBAL_ONLY in ap_check_cmd_context. That means the directive must not appear in vhost context. AFAIK, conn->base_server reflects the vhost in a pre-connection hook if it is IP-based. So, why don't you allow RequestTimeout to be valid in IP-based vhost context? Basically because I didn't get around to writing the merge function yet. But it's on my todo list. That way the protocol problem is solved, isn't it? That's true. It's probably not necessary to detect non-http virtual hosts automatically. Just let the admin configure it. - Wouldn't RequestTimeout better be named RequestHeaderTimeout or ReadRequestHeaderTimeout? RequestTimeout is a bit misleading (IMHO). My first thought was: that thing limits the whole transaction. Maybe. I will think about this. - You mentioned a minimum body transfer rate instead of a simple timeout. If the default values for LimitRequestFields, LimitRequestFieldSize and LimitRequestLine are added up, I get a max. request header size of 101*8190 bytes. This may take some time to transmit while still being valid. So perhaps a transfer rate limit for the header is a good option as well. Perhaps having both a timeout and a rate limit would be good. A more realistic estimate for the maximum for a valid request is 10*8190 + 91*100, i.e. there are normally only very few long lines. But if the transfer rate function is added for the body, it's easy to add it for the headers, too.
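The header-size arithmetic above can be made concrete. With the httpd defaults (LimitRequestLine 8190, LimitRequestFieldSize 8190, LimitRequestFields 100), the worst case is the request line plus 100 maximally long header fields, i.e. the 101*8190 bytes mentioned; the "realistic" estimate assumes roughly 10 long lines and 91 short ones. The helper names below are mine, for illustration only:

```c
/* Worked numbers from the discussion above, using the httpd defaults:
 * LimitRequestLine 8190, LimitRequestFieldSize 8190, LimitRequestFields 100. */
enum {
    LIMIT_REQUEST_LINE       = 8190,
    LIMIT_REQUEST_FIELD_SIZE = 8190,
    LIMIT_REQUEST_FIELDS     = 100
};

/* Absolute upper bound: request line plus 100 maximal header fields. */
static long max_header_bytes(void)
{
    return (long)LIMIT_REQUEST_LINE
         + (long)LIMIT_REQUEST_FIELDS * LIMIT_REQUEST_FIELD_SIZE;
}

/* The mail's realistic estimate: ~10 long lines, 91 lines of ~100 bytes. */
static long realistic_header_bytes(void)
{
    return 10L * 8190 + 91L * 100;
}
```

The bound is 827,190 bytes (about 808 KiB) while the realistic case is only 91,000 bytes, which is why a pure size limit does little against a deliberately slow sender.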
Re: mod_reqtimeout: mitigating against slowloris-style attack (different approach)
On Tuesday 01 September 2009, Ruediger Pluem wrote: On 09/01/2009 04:26 PM, Torsten Foertsch wrote: On Tue 01 Sep 2009, Stefan Fritsch wrote: http://www.sfritsch.de/mod_reqtimeout/mod_reqtimeout.c Any comments are welcome. Just a few thoughts: - You use GLOBAL_ONLY in ap_check_cmd_context. That means the directive must not appear in vhost context. AFAIK, conn->base_server reflects the vhost in a pre-connection hook if it is IP-based. So, why don't you allow RequestTimeout to be valid in IP-based vhost context? That way the protocol problem is solved, isn't it? - Wouldn't RequestTimeout better be named RequestHeaderTimeout or ReadRequestHeaderTimeout? RequestTimeout is a bit misleading (IMHO). My first thought was: that thing limits the whole transaction. Nice module. +1 to the comments above. Thanks to everyone commenting so far. I have changed these two things and uploaded the new version to the same place. Stefan
Re: mod_reqtimeout: mitigating against slowloris-style attack (different approach)
On Tuesday 01 September 2009, Ruediger Pluem wrote: - Apache should respond with HTTP_REQUEST_TIME_OUT and not HTTP_BAD_REQUEST when there is a timeout reading the request. In the slowloris case, it needs to time out before there's any such thing as an HTTP request, so it won't be sending an HTTP response. But I guess you're talking about the body timeout? No, about the request. When apache has received at least one line of the request, it currently responds with HTTP_BAD_REQUEST when there is a timeout before the complete request was read. In this case HTTP_REQUEST_TIME_OUT is more appropriate. It means the client did not produce a request within the time that the server was prepared to wait. Is this just regarding better logging on the server side? Otherwise I wouldn't care too much what we sent to an attacker. Well, if there is a legitimate client who is too slow, it's better to send him a meaningful error message. But it's not that important, of course.
Better logging for ssl configuration errors
Hi, it seems there are a number of configurations using name-based virtual hosts with SSL that broke with 2.2.12. A frequent problem seems to be missing SSLCertificate(Key)File directives for some of the virtual hosts. The logged error message is not too helpful (at least if all virtual hosts share the same error log): [error] Server should be SSL-aware but has no certificate configured [Hint: SSLCertificateFile] The error message should at least give the name or position of the problematic virtual host definition. Like this:
--- modules/ssl/ssl_engine_pphrase.c.dist 2006-07-23 13:11:58.0 +0200
+++ modules/ssl/ssl_engine_pphrase.c 2009-09-06 21:51:26.0 +0200
@@ -188,7 +188,8 @@
     if (sc->server->pks->cert_files[0] == NULL) {
         ap_log_error(APLOG_MARK, APLOG_ERR, 0, pServ,
                      "Server should be SSL-aware but has no certificate "
-                     "configured [Hint: SSLCertificateFile]");
+                     "configured [Hint: SSLCertificateFile] (%s:%d)",
+                     pServ->defn_name, pServ->defn_line_number);
         ssl_die();
     }
     algoCert = SSL_ALGO_UNKNOWN;
If anybody is interested, the original report is at: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=541607 Cheers, Stefan
Re: mod_reqtimeout: mitigating against slowloris-style attack (different approach)
On Tuesday 01 September 2009, Ruediger Pluem wrote: I guess reqtimeout_after_body also needs to be updated to the assert / do-nothing-if-not-configured logic like reqtimeout_after_headers. Thanks, I missed that. I fixed it and also added support for minimum upload rates. This:
RequestHeaderTimeout initialTimeout [maxTimeout]
RequestHeaderMinRate minRate
will now set the timeout to initialTimeout. Whenever data is received, the timeout is increased according to minRate, but not to a value larger than maxTimeout. If RequestHeaderMinRate is not present, maxTimeout will be ignored. The same goes for the Body* directives. The new version is again at http://www.sfritsch.de/mod_reqtimeout/ @Nick: I have now also had a brief look at mod_evasive, mod_cband and mod_qos. mod_evasive does only request-level checking and cannot defend against slowloris-style attacks. mod_cband only provides upper bandwidth and connection limits. mod_qos has minimum upload rates, per-IP connection limits, and much more. However, it is much more heavyweight than mod_reqtimeout. The source code is about 20 times larger, it uses many mutexes, etc. Therefore I think mod_reqtimeout still has its use cases. And it's far easier to review, too ;-) Cheers, Stefan
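The "increase the timeout according to minRate" rule described above can be sketched as follows: each received byte earns extra time at the configured rate, capped at maxTimeout. This is an illustrative model under my own reading of the description; the real mod_reqtimeout code may compute it differently.

```c
/* Sketch of the rate-based timeout extension described above
 * (illustrative, not mod_reqtimeout's actual implementation):
 * whenever bytes_received bytes arrive, the deadline is extended by
 * bytes_received / min_rate seconds, but never beyond max_timeout.
 * min_rate is in bytes per second; timeouts are in seconds. */
static int extend_timeout(int current_timeout, int bytes_received,
                          int min_rate, int max_timeout)
{
    int extra = bytes_received / min_rate;   /* seconds "earned" */
    int t = current_timeout + extra;
    return (t > max_timeout) ? max_timeout : t;
}
```

For example, with RequestHeaderMinRate 100 and a 30-second maxTimeout, receiving 500 bytes extends a 10-second timeout to 15 seconds, while a client already near the cap gains nothing beyond 30.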
CVE-2009-3094, CVE-2009-3095: mod_proxy_ftp issues
Hi, in case you haven't noticed yet, some new mod_proxy_ftp issues have been reported: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-3094 The ap_proxy_ftp_handler function in modules/proxy/proxy_ftp.c in the mod_proxy_ftp module in the Apache HTTP Server 2.0.63 and 2.2.13 allows remote FTP servers to cause a denial of service (NULL pointer dereference and child process crash) via a malformed reply to an EPSV command. http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-3095 The mod_proxy_ftp module in the Apache HTTP Server allows remote attackers to bypass intended access restrictions and send arbitrary commands to an FTP server via vectors related to the embedding of these commands in the Authorization HTTP header, as demonstrated by a certain module in VulnDisco Pack Professional 8.11. The (untested) patch below should fix CVE-2009-3094. For CVE-2009-3095 there is only a little information. But looking at the code, it seems the username and password sent by the browser are sent to the FTP server without sanitization (i.e. they can contain LF characters). Cheers, Stefan
--- a/modules/proxy/mod_proxy_ftp.c
+++ b/modules/proxy/mod_proxy_ftp.c
@@ -1351,10 +1351,6 @@ static int proxy_ftp_handler(request_rec *r, proxy_worker *worker,
                 connect = 1;
             }
         }
-        else {
-            /* and try the regular way */
-            apr_socket_close(data_sock);
-        }
     }
 }
@@ -1441,10 +1437,6 @@ static int proxy_ftp_handler(request_rec *r, proxy_worker *worker,
                 connect = 1;
             }
         }
-        else {
-            /* and try the regular way */
-            apr_socket_close(data_sock);
-        }
     }
 }
 /*bypass:*/
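The CVE-2009-3095 observation above is a classic CRLF-injection problem: a username or password containing a line feed lets the client smuggle extra commands onto the FTP control connection. A minimal validity check (illustrative only, not the actual httpd fix) would reject any credential containing CR or LF before it is written to the control connection:

```c
#include <string.h>

/* Hypothetical sanity check for FTP control-connection arguments:
 * reject anything containing CR or LF, since those characters would
 * terminate the current command and start a new, attacker-chosen one. */
static int ftp_arg_is_safe(const char *s)
{
    return strpbrk(s, "\r\n") == NULL;
}
```

A caller would run this on the decoded username and password from the Authorization header and fail the request (rather than forward the credential) when it returns 0.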
Re: CVE-2009-3094, CVE-2009-3095: mod_proxy_ftp issues
On Fri, 11 Sep 2009, Joe Orton wrote:
+    char *p = ap_strchr(reply, '('), *ep, *term;
+    long port;
+
+    /* Reply syntax per RFC 2428: "229 blah blah (|||port|)" where '|'
+     * can be any character in ASCII from 33-126, obscurely.  Verify
+     * the syntax. */
+    if (p == NULL || p[1] != p[2] || p[1] != p[3]
+        || (ep = strchr(p + 4, ')')) == NULL
+        || ep == p + 4 || ep[-1] != p[1]) {
+        return 0;
+    }
Shouldn't you also check for p[1] != 0 before p[1] != p[2], to catch the case where the reply ends after the opening bracket? Apart from that, both this patch and the one you have already committed look fine. I haven't actually tested them, though. Stefan
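For reference, here is the quoted check re-implemented in plain C with the extra end-of-string guard suggested in this mail, so a reply that ends right after the opening bracket is rejected. This is an illustrative standalone version (strchr stands in for ap_strchr, and only the syntax check is shown, not the port parsing):

```c
#include <string.h>

/* Validate an EPSV reply of the form "229 ... (<d><d><d>port<d>)"
 * where <d> is one arbitrary delimiter character (RFC 2428).
 * Includes the p[1] == '\0' guard proposed in this thread. */
static int epsv_reply_ok(const char *reply)
{
    const char *p = strchr(reply, '(');
    const char *ep;

    if (p == NULL || p[1] == '\0' || p[1] != p[2] || p[1] != p[3]
        || (ep = strchr(p + 4, ')')) == NULL
        || ep == p + 4 || ep[-1] != p[1]) {
        return 0;
    }
    return 1;
}
```

Without the `p[1] == '\0'` test, a truncated reply ending in `(` would make the check read `p[2]` and `p[3]` past the terminating NUL.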
Memory usage, core output filter, and apr_brigade_destroy
Hi, http://httpd.apache.org/docs/trunk/developer/output-filters.html recommends reusing bucket brigades and not using apr_brigade_destroy. However, both in 2.2 and in trunk, the core output filter sometimes calls apr_brigade_destroy on brigades that it has received from earlier output filters in the chain. Is this not bound to cause problems, since the brigade's pool cleanup is then removed but the brigade is still reused later on? Also, the core output filter often creates new brigades instead of reusing an existing brigade. This should also be changed to reduce memory usage, shouldn't it? For trunk, the attached patch at least keeps the temporary brigade for flush buckets around. Do the versioning rules allow adding elements to core_output_filter_ctx for 2.2.x, too? It's defined in httpd.h. Cheers, Stefan
--- a/include/httpd.h
+++ b/include/httpd.h
@@ -1239,6 +1239,7 @@ struct server_rec {
 typedef struct core_output_filter_ctx {
     apr_bucket_brigade *buffered_bb;
+    apr_bucket_brigade *tmp_flush_bb;
     apr_size_t bytes_in;
     apr_size_t bytes_written;
 } core_output_filter_ctx_t;
diff --git a/server/core_filters.c b/server/core_filters.c
index d9b6eb0..093205a 100644
--- a/server/core_filters.c
+++ b/server/core_filters.c
@@ -468,7 +468,7 @@ apr_status_t ap_core_output_filter(ap_filter_t *f, apr_bucket_brigade *new_bb)
          bucket = next) {
         next = APR_BUCKET_NEXT(bucket);
         if (APR_BUCKET_IS_FLUSH(bucket)) {
-            apr_bucket_brigade *remainder = apr_brigade_split(bb, next);
+            ctx->tmp_flush_bb = apr_brigade_split_ex(bb, next, ctx->tmp_flush_bb);
             apr_status_t rv = send_brigade_blocking(net->client_socket, bb,
                                                     &(ctx->bytes_written), c);
             if (rv != APR_SUCCESS) {
@@ -476,7 +476,7 @@ apr_status_t ap_core_output_filter(ap_filter_t *f, apr_bucket_brigade *new_bb)
                 c->aborted = 1;
                 return rv;
             }
-            bb = remainder;
+            APR_BRIGADE_CONCAT(bb, ctx->tmp_flush_bb);
             next = APR_BRIGADE_FIRST(bb);
             bytes_in_brigade = 0;
             non_file_bytes_in_brigade = 0;
Re: CVE-2009-3094, CVE-2009-3095: mod_proxy_ftp issues
Shouldn't you also check for p[1] != 0 before p[1] != p[2], to catch the case where reply ends after the opening bracket? This should be p[1] == 0, of course.
Re: Memory usage, core output filter, and apr_brigade_destroy
Hi Rüdiger, thanks for the response. On Sunday 13 September 2009, Ruediger Pluem wrote: On 09/13/2009 01:11 PM, Stefan Fritsch wrote: http://httpd.apache.org/docs/trunk/developer/output-filters.html recommends to reuse bucket brigades and to not use apr_brigade_destroy. However, both in 2.2 and in trunk, the core output filter sometimes calls apr_brigade_destroy on brigades that it has received down the chain from earlier output filters. Is this not bound to cause problems since the brigade's pool cleanup is then removed but the brigade is still reused later on? It could be. But the question is if it is really reused later on. Since this is the recommended design for output filters, I am sure it can happen. But your patch is causing core dumps during the proxy tests when running the test suite :-(. It seems I should install the test suite, too. I will look at it when I have some free cycles again. Cheers, Stefan
Re: Memory usage, core output filter, and apr_brigade_destroy
On Sun, 13 Sep 2009, Ruediger Pluem wrote: But your patch is causing core dumps during the proxy tests when running the test suite :-(. I currently don't understand why. Hmmm... either ctx->tmp_flush_bb is NULL or, since it was added in the middle of the struct, you didn't do a make distclean 1st. That is not the problem. I did a slightly modified patch that added it to the end. I suppose it has something to do with not matching pools or bucket allocators between bb and ctx->tmp_flush_bb. It fails in the proxy case, and in the proxy case we have some mixtures going on there regarding pools and bucket allocators, caused by the pooled backend connections. Yes, the lifetime of the brigade was wrong. The attached patch works without segfaults. Cheers, Stefan
diff --git a/include/httpd.h b/include/httpd.h
index a7a7025..cd47b11 100644
--- a/include/httpd.h
+++ b/include/httpd.h
@@ -1241,6 +1241,7 @@ typedef struct core_output_filter_ctx {
     apr_bucket_brigade *buffered_bb;
     apr_size_t bytes_in;
     apr_size_t bytes_written;
+    apr_bucket_brigade *tmp_flush_bb;
 } core_output_filter_ctx_t;
 
 typedef struct core_filter_ctx {
diff --git a/server/core_filters.c b/server/core_filters.c
index d9b6eb0..2f60d04 100644
--- a/server/core_filters.c
+++ b/server/core_filters.c
@@ -468,7 +468,15 @@ apr_status_t ap_core_output_filter(ap_filter_t *f, apr_bucket_brigade *new_bb)
          bucket = next) {
         next = APR_BUCKET_NEXT(bucket);
         if (APR_BUCKET_IS_FLUSH(bucket)) {
-            apr_bucket_brigade *remainder = apr_brigade_split(bb, next);
+            if (!ctx->tmp_flush_bb) {
+                /*
+                 * Need to create tmp brigade with correct lifetime. Passing
+                 * NULL to apr_brigade_split_ex would result in a brigade
+                 * allocated from a different pool.
+                 */
+                ctx->tmp_flush_bb = apr_brigade_create(c->pool, c->bucket_alloc);
+            }
+            ctx->tmp_flush_bb = apr_brigade_split_ex(bb, next, ctx->tmp_flush_bb);
             apr_status_t rv = send_brigade_blocking(net->client_socket, bb,
                                                     &(ctx->bytes_written), c);
             if (rv != APR_SUCCESS) {
@@ -476,7 +484,7 @@ apr_status_t ap_core_output_filter(ap_filter_t *f, apr_bucket_brigade *new_bb)
                 c->aborted = 1;
                 return rv;
             }
-            bb = remainder;
+            APR_BRIGADE_CONCAT(bb, ctx->tmp_flush_bb);
             next = APR_BRIGADE_FIRST(bb);
             bytes_in_brigade = 0;
             non_file_bytes_in_brigade = 0;
Re: Memory usage, core output filter, and apr_brigade_destroy
On Sunday 13 September 2009, Stefan Fritsch wrote: On Sunday 13 September 2009, Ruediger Pluem wrote: On 09/13/2009 01:11 PM, Stefan Fritsch wrote: http://httpd.apache.org/docs/trunk/developer/output-filters.html recommends to reuse bucket brigades and to not use apr_brigade_destroy. However, both in 2.2 and in trunk, the core output filter sometimes calls apr_brigade_destroy on brigades that it has received down the chain from earlier output filters. Is this not bound to cause problems since the brigade's pool cleanup is then removed but the brigade is still reused later on? It could be. But the questions is if it is really reused later on. Since this is the recommended design for output filters, I am sure it can happen. I have attached two patches: In the first, I change (hopefully) all places to not apr_brigade_destroy brigades that have been passed down the filter chain. Especially the core_output_filter change needs some review. In the second, I have changed all occurrences of apr_brigade_split to apr_brigade_split_ex and I have made some more changes where bucket brigades can be reused. The test suite shows no regressions. Cheers, Stefan
diff --git a/modules/http/byterange_filter.c b/modules/http/byterange_filter.c
index a79b7f7..92048ab 100644
--- a/modules/http/byterange_filter.c
+++ b/modules/http/byterange_filter.c
@@ -307,7 +307,7 @@ AP_CORE_DECLARE_NONSTD(apr_status_t) ap_byterange_filter(ap_filter_t *f,
     APR_BRIGADE_INSERT_TAIL(bsend, e);
 
     /* we're done with the original content - all of our data is in bsend. */
-    apr_brigade_destroy(bb);
+    apr_brigade_cleanup(bb);
 
     /* send our multipart output */
     return ap_pass_brigade(f->next, bsend);
diff --git a/modules/http/http_filters.c b/modules/http/http_filters.c
index 3e96cb9..01ced24 100644
--- a/modules/http/http_filters.c
+++ b/modules/http/http_filters.c
@@ -1112,7 +1112,7 @@ AP_CORE_DECLARE_NONSTD(apr_status_t) ap_http_header_filter(ap_filter_t *f,
         ctx = f->ctx = apr_pcalloc(r->pool, sizeof(header_filter_ctx));
     }
     else if (ctx->headers_sent) {
-        apr_brigade_destroy(b);
+        apr_brigade_cleanup(b);
         return OK;
     }
 }
@@ -1283,7 +1283,7 @@ AP_CORE_DECLARE_NONSTD(apr_status_t) ap_http_header_filter(ap_filter_t *f,
     ap_pass_brigade(f->next, b2);
 
     if (r->header_only) {
-        apr_brigade_destroy(b);
+        apr_brigade_cleanup(b);
         ctx->headers_sent = 1;
         return OK;
     }
diff --git a/server/core_filters.c b/server/core_filters.c
index f073765..eb206c1 100644
--- a/server/core_filters.c
+++ b/server/core_filters.c
@@ -316,7 +316,7 @@ int ap_core_input_filter(ap_filter_t *f, apr_bucket_brigade *b,
 static void setaside_remaining_output(ap_filter_t *f,
                                       core_output_filter_ctx_t *ctx,
                                       apr_bucket_brigade *bb,
-                                      int make_a_copy, conn_rec *c);
+                                      conn_rec *c);
 
 static apr_status_t send_brigade_nonblocking(apr_socket_t *s,
                                              apr_bucket_brigade *bb,
@@ -392,19 +392,21 @@ apr_status_t ap_core_output_filter(ap_filter_t *f, apr_bucket_brigade *new_bb)
         }
     }
 
+    if (new_bb != NULL) {
+        bb = new_bb;
+    }
+
     if ((ctx->buffered_bb != NULL) && !APR_BRIGADE_EMPTY(ctx->buffered_bb)) {
-        bb = ctx->buffered_bb;
-        ctx->buffered_bb = NULL;
         if (new_bb != NULL) {
-            APR_BRIGADE_CONCAT(bb, new_bb);
+            APR_BRIGADE_PREPEND(bb, ctx->buffered_bb);
+        }
+        else {
+            bb = ctx->buffered_bb;
         }
         c->data_in_output_filters = 0;
     }
-    else if (new_bb != NULL) {
-        bb = new_bb;
-    }
-    else {
+    else if (new_bb == NULL) {
         return APR_SUCCESS;
     }
@@ -444,7 +446,7 @@ apr_status_t ap_core_output_filter(ap_filter_t *f, apr_bucket_brigade *new_bb)
             /* The client has aborted the connection */
             c->aborted = 1;
         }
-        setaside_remaining_output(f, ctx, bb, 0, c);
+        setaside_remaining_output(f, ctx, bb, c);
         return rv;
     }
@@ -508,14 +519,14 @@ apr_status_t ap_core_output_filter(ap_filter_t *f, apr_bucket_brigade *new_bb)
         }
     }
 
-    setaside_remaining_output(f, ctx, bb, 1, c);
+    setaside_remaining_output(f, ctx, bb, c);
     return APR_SUCCESS;
 }
 
 static void setaside_remaining_output(ap_filter_t *f,
                                       core_output_filter_ctx_t *ctx,
                                       apr_bucket_brigade *bb,
-                                      int make_a_copy, conn_rec *c)
+                                      conn_rec *c)
 {
     if (bb == NULL) {
         return;
@@ -523,20 +534,14 @@ static void setaside_remaining_output(ap_filter_t *f,
     remove_empty_buckets(bb
Re: Logging or not logging 408's
On Monday 28 September 2009, Dan Poirier wrote: Is there some good reason not to log the 408's in this case? I am +1 for logging the 408's. I also think in case of a timeout, 408 should be logged instead of 400. The attached patch does that.
--- protocol.c.orig 2009-09-05 00:36:31.448689825 +0200
+++ protocol.c 2009-09-05 00:35:43.472690365 +0200
@@ -691,7 +691,12 @@
                      len, r, 0, bb);
 
     if (rv != APR_SUCCESS) {
-        r->status = HTTP_BAD_REQUEST;
+        if (rv == APR_TIMEUP) {
+            r->status = HTTP_REQUEST_TIME_OUT;
+        }
+        else {
+            r->status = HTTP_BAD_REQUEST;
+        }
 
         /* ap_rgetline returns APR_ENOSPC if it fills up the buffer before
          * finding the end-of-line.  This is only going to happen if it
@@ -877,7 +882,7 @@
     r->read_length  = 0;
     r->read_body    = REQUEST_NO_BODY;
 
-    r->status       = HTTP_REQUEST_TIME_OUT;  /* Until we get a request */
+    r->status       = HTTP_OK;  /* Until further notice */
     r->the_request  = NULL;
 
     /* Begin by presuming any module can make its own path_info assumptions,
@@ -916,7 +921,7 @@
     if (!r->assbackwards) {
         ap_get_mime_headers_core(r, tmp_bb);
-        if (r->status != HTTP_REQUEST_TIME_OUT) {
+        if (r->status != HTTP_OK) {
             ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r,
                           "request failed: error reading the headers");
             ap_send_error_response(r, 0);
@@ -957,8 +962,6 @@
     apr_brigade_destroy(tmp_bb);
 
-    r->status = HTTP_OK;  /* Until further notice. */
-
     /* update what we think the virtual host is based on the headers we've
      * now read. may update status.
      */
Re: Memory usage, core output filter, and apr_brigade_destroy
Thanks for your comments. On Wednesday 23 September 2009, Ruediger Pluem wrote:
--- modules/http/chunk_filter.c (Revision 818232)
+++ modules/http/chunk_filter.c (Arbeitskopie)
@@ -49,11 +49,11 @@
 #define ASCII_CRLF "\015\012"
 #define ASCII_ZERO "\060"
     conn_rec *c = f->r->connection;
-    apr_bucket_brigade *more;
+    apr_bucket_brigade *more, *tmp;
     apr_bucket *e;
     apr_status_t rv;
 
-    for (more = NULL; b; b = more, more = NULL) {
+    for (more = tmp = NULL; b; tmp = b, b = more, more = NULL) {
         apr_off_t bytes = 0;
         apr_bucket *eos = NULL;
         apr_bucket *flush = NULL;
@@ -85,7 +85,8 @@
         if (APR_BUCKET_IS_FLUSH(e)) {
             flush = e;
             if (e != APR_BRIGADE_LAST(b)) {
-                more = apr_brigade_split(b, APR_BUCKET_NEXT(e));
+                more = apr_brigade_split_ex(b, APR_BUCKET_NEXT(e), tmp);
+                tmp = NULL;
             }
             break;
         }
@@ -105,7 +106,8 @@
              * block so we pass down what we have so far.
              */
             bytes += len;
-            more = apr_brigade_split(b, APR_BUCKET_NEXT(e));
+            more = apr_brigade_split_ex(b, APR_BUCKET_NEXT(e), tmp);
+            tmp = NULL;
             break;
         }
         else {
What is the point here? tmp is always NULL when passed to apr_brigade_split_ex, so apr_brigade_split_ex == apr_brigade_split. You missed the tmp = b at the beginning of the loop. The tmp = NULL lines were actually not needed. I have changed this a bit to hopefully make it clearer and committed the patch together with the other changes you suggested. Cheers, Stefan
adding mod_reqtimeout to trunk?
Hi, I would like to add mod_reqtimeout [1,2] to trunk. Is this OK? Considering the positive comments it received, may I put it into modules/filter or should it go into modules/experimental first? Cheers, Stefan [1] http://www.sfritsch.de/mod_reqtimeout/mod_reqtimeout.c [2] http://mail-archives.apache.org/mod_mbox/httpd-dev/200908.mbox/200909010043.54040...@sfritsch.de
Re: svn commit: r821477 - in /httpd/httpd/trunk: CHANGES modules/http/byterange_filter.c modules/http/http_filters.c server/core_filters.c
On Sunday 04 October 2009, Paul Querna wrote: URL: http://svn.apache.org/viewvc?rev=821477&view=rev Log: Make sure to not destroy bucket brigades that have been created by earlier filters. Otherwise the pool cleanups would be removed causing potential memory leaks later on. I am not sure these changes make sense. The 'traditional' API view says that brigades passed down the output filter chain should be destroyed, not cleared -- please see the thread started with this message: http://mail-archives.apache.org/mod_mbox/httpd-dev/200504.mbox/%3C9ccfb16df2220fc31c836bb7cd640...@ricilake.net%3e This is at odds with the documentation at http://httpd.apache.org/docs/trunk/developer/output-filters.html , in particular with rules 6 and 9 at the end. If output filters reuse brigades, the core must not destroy them. I just noticed that include/util_filter.h still says "The caller relinquishes ownership of the brigade" for ap_pass_brigade(). Either output-filters.html or util_filter.h must be changed. In my opinion, it is much better to not destroy the brigades and reuse them instead of creating a new brigade on every filter invocation. Otherwise there is huge memory usage for streaming content. The thread you have pointed out also has a suggested patch for the docs in util_filter.h: http://mail-archives.apache.org/mod_mbox/httpd-dev/200504.mbox/20050426102834.ge16...@redhat.com It is not clear from the thread why it wasn't applied.
Re: svn commit: r821471 - in /httpd/httpd/trunk: CHANGES modules/filters/mod_deflate.c modules/filters/mod_sed.c modules/http/chunk_filter.c server/protocol.c
On Sunday 04 October 2009, Ruediger Pluem wrote: To be on the safe side we should do apr_brigade_cleanup(b) here. Thanks. Fixed in r821481
Re: svn commit: r821477 - in /httpd/httpd/trunk: CHANGES modules/http/byterange_filter.c modules/http/http_filters.c server/core_filters.c
On Sunday 04 October 2009, Ruediger Pluem wrote:
--- httpd/httpd/trunk/server/core_filters.c (original)
+++ httpd/httpd/trunk/server/core_filters.c Sun Oct 4 08:08:50 2009
@@ -392,19 +392,21 @@
         }
     }
 
+    if (new_bb != NULL) {
+        bb = new_bb;
+    }
+
This could move in the block above, couldn't it? Fixed in r821486
Re: adding mod_reqtimeout to trunk?
On Sunday 04 October 2009, Jim Jagielski wrote: Personally, I'd like to see this as part of the actual core code, where we have several Timeouts, e.g.: Timeout 30 5 10 2 which define timeout as now, timeout before 1st byte, timeout between bytes, timeout after, etc... We've always wanted better control over this kind of timeouts, and putting it in a module seems not a good idea for 2.4/3.0 (but of course, OK for 2.2.x). I think putting mod_reqtimeout into trunk until it is backported to 2.2 and then moving it into the core would make sense, wouldn't it? And I would prefer several config directives instead of having to remember which value in Timeout means what.
Re: adding mod_reqtimeout to trunk?
On Sunday 04 October 2009, Nick Kew wrote: FWIW, IMO it should go in modules/filters not experimental. +1. trunk is, by definition, experimental. But when we float off 2.3/4-branch, we should perhaps do some documentation of stability levels of different features and modules for users. I might open a wiki page to collect information on the subject. I agree and would rather reserve modules/experimental for modules that have known issues. I have commited mod_reqtimeout to trunk. I haven't finished the docs, yet.
Re: adding mod_reqtimeout to trunk?
On Monday 05 October 2009, Jim Jagielski wrote: Thx... I'm updating it with an eye to making it core, and therefore having ReqTimeout headerinit=5 headermax=10 As we also have RequestHeaders, maybe RequestTimeout would be better? Let me know if I can help w/ the docs. I have commited some docs. Any improvements are welcome. BTW, how will the timeouts interact with SSL renegotiation and with mod_request? I haven't tried that yet. Cheers, Stefan
Re: svn commit: r821477 - in /httpd/httpd/trunk: CHANGES modules/http/byterange_filter.c modules/http/http_filters.c server/core_filters.c
On Sunday 04 October 2009, Nick Kew wrote: Good summary. I have taken the absence of further replies as agreement and commited the patch to util_filter.h.
Re: svn commit: r822870 - in /httpd/httpd/trunk: CHANGES include/util_filter.h
On Wednesday 07 October 2009, Jim Jagielski wrote: Does this really require a CHANGES entry?? No. There is at least one other CHANGES entry about a changed comment, though.
Re: svn commit: r823337 - in /httpd/httpd/trunk: CHANGES include/ap_mmn.h include/http_core.h modules/loggers/mod_logio.c server/scoreboard.c
On Thursday 08 October 2009, s...@apache.org wrote:
--- httpd/httpd/trunk/include/ap_mmn.h (original)
+++ httpd/httpd/trunk/include/ap_mmn.h Thu Oct 8 21:42:13 2009
@@ -198,15 +198,17 @@
  * 20090401.3 (2.3.3-dev) Added DAV options provider to mod_dav.h
  * 20090925.0 (2.3.3-dev) Added server_rec::context and added *server_rec
  *                        param to ap_wait_or_timeout()
+ * 20090925.1 (2.3.3-dev) Add optional function ap_logio_get_last_bytes() to
+ *                        mod_logio
  */
 
 #define MODULE_MAGIC_COOKIE 0x41503234UL /* "AP24" */
 
 #ifndef MODULE_MAGIC_NUMBER_MAJOR
-#define MODULE_MAGIC_NUMBER_MAJOR 20090401
+#define MODULE_MAGIC_NUMBER_MAJOR 20090925
 #endif
-#define MODULE_MAGIC_NUMBER_MINOR 2 /* 0...n */
+#define MODULE_MAGIC_NUMBER_MINOR 1 /* 0...n */
I updated MODULE_MAGIC_NUMBER_MAJOR to match the comments. I hope this is correct?
Re: svn commit: r823337 - in /httpd/httpd/trunk: CHANGES include/ap_mmn.h include/http_core.h modules/loggers/mod_logio.c server/scoreboard.c
On Friday 09 October 2009, William A. Rowe, Jr. wrote:
 * 20090925.0 (2.3.3-dev) Added server_rec::context and added *server_rec
 *                        param to ap_wait_or_timeout()
+ * 20090925.1 (2.3.3-dev) Add optional function ap_logio_get_last_bytes() to
+ *                        mod_logio
 */
 #define MODULE_MAGIC_COOKIE 0x41503234UL /* "AP24" */
 #ifndef MODULE_MAGIC_NUMBER_MAJOR
-#define MODULE_MAGIC_NUMBER_MAJOR 20090401
+#define MODULE_MAGIC_NUMBER_MAJOR 20090925
 #endif
-#define MODULE_MAGIC_NUMBER_MINOR 2 /* 0...n */
+#define MODULE_MAGIC_NUMBER_MINOR 1 /* 0...n */
I updated MODULE_MAGIC_NUMBER_MAJOR to match the comments. I hope this is correct? Nope - adding a function doesn't cost us MMN MAJOR (and never delete MMN_MINOR, it just falls back to zero). Sorry, that's not what I meant (and kmail sucks at quoting preformatted mails). The comments indicated MMN 20090925.0 but the code was at 20090401.2. Therefore I assumed this was just forgotten and jumped to 20090925.1. MMN MAJOR jumps every time we encounter a change to an exported API which users might be calling, which would result in erroneous calls to the function from a module at an older build. It similarly changes when the fields of an existing public data structure change (but not when they are extended).
Re: client_ip vs remote_ip
On Wednesday 23 November 2011, Graham Leggett wrote: On 23 Nov 2011, at 8:22 PM, Nick Kew wrote: This has the additional advantage of *breaking* existing c->remote_ip references and forcing the module author to choose which they mean for their purposes (most would refer to the authenticated address). This makes more sense to me, +1. Yes, +1 An interesting take on it! But use of remote_ip and remote_addr goes further than that. Changing their semantics in CGI (and its imitators from PHP to mod_rewrite) would *silently* break apps, so a firm -1 to that. And divorcing conn->remote_ip from the CGI/etc gets fearsomely confusing! There is no option to silently break any app; the only way to get this functionality is for the administrator to actively enable a module like mod_remoteip (or a similar module depending on your needs). In the world of load balancers, this behaviour is already well understood and supported. Agreed.
Re: svn commit: r1205423 - in /httpd/httpd/trunk/modules: cache/mod_cache.c mappers/mod_negotiation.c
On Wed, 23 Nov 2011, j...@apache.org wrote: Author: jim Date: Wed Nov 23 15:01:42 2011 New Revision: 1205423 URL: http://svn.apache.org/viewvc?rev=1205423&view=rev Log: Use ap_pass_brigade_fchk() Modified: httpd/httpd/trunk/modules/cache/mod_cache.c httpd/httpd/trunk/modules/mappers/mod_negotiation.c
--- httpd/httpd/trunk/modules/cache/mod_cache.c (original)
+++ httpd/httpd/trunk/modules/cache/mod_cache.c Wed Nov 23 15:01:42 2011
@@ -292,19 +292,11 @@ static int cache_quick_handler(request_r
     out = apr_brigade_create(r->pool, r->connection->bucket_alloc);
     e = apr_bucket_eos_create(out->bucket_alloc);
     APR_BRIGADE_INSERT_TAIL(out, e);
-    rv = ap_pass_brigade(r->output_filters, out);
-    if (rv != APR_SUCCESS) {
-        if (rv != AP_FILTER_ERROR) {
-            /* no way to know what type of error occurred */
-            ap_log_rerror(APLOG_MARK, APLOG_DEBUG, rv, r,
-                          "cache_quick_handler(%s): ap_pass_brigade returned %i",
-                          cache->provider_name, rv);
-            return HTTP_INTERNAL_SERVER_ERROR;
-        }
-        return rv;
-    }
-    return OK;
+    return ap_pass_brigade_fchk(r, out,
+                                apr_psprintf(r->pool,
+                                             "cache_quick_handler(%s): ap_pass_brigade returned",
+                                             cache->provider_name));
 }
This means we call apr_psprintf even if everything went well. That's far from optimal. Maybe really make it into a macro; then the apr_psprintf would only be executed on error?
@@ -576,19 +568,10 @@ static int cache_handler(request_rec *r)
     out = apr_brigade_create(r->pool, r->connection->bucket_alloc);
     e = apr_bucket_eos_create(out->bucket_alloc);
     APR_BRIGADE_INSERT_TAIL(out, e);
-    rv = ap_pass_brigade(r->output_filters, out);
-    if (rv != APR_SUCCESS) {
-        if (rv != AP_FILTER_ERROR) {
-            /* no way to know what type of error occurred */
-            ap_log_rerror(APLOG_MARK, APLOG_DEBUG, rv, r,
-                          "cache_handler(%s): ap_pass_brigade returned %i",
-                          cache->provider_name, rv);
-            return HTTP_INTERNAL_SERVER_ERROR;
-        }
-        return rv;
-    }
-
-    return OK;
+    return ap_pass_brigade_fchk(r, out,
+                                apr_psprintf(r->pool,
+                                             "cache(%s): ap_pass_brigade returned",
+                                             cache->provider_name));
 }
Same as above
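The macro idea suggested above (format the error message only when an error actually occurred) can be sketched in plain C. Names are illustrative, not the real httpd helper (`ap_pass_brigade_fchk`), and the counter exists only to demonstrate that the expensive formatting is skipped on the success path:

```c
#include <stdio.h>

/* Counts how often the "expensive" formatting ran. */
static int format_calls;

/* Stand-in for apr_psprintf: builds the error message and bumps the
 * counter so the test below can observe whether it was called. */
static const char *expensive_fmt(const char *name)
{
    static char buf[64];
    ++format_calls;
    snprintf(buf, sizeof(buf), "handler(%s): pass failed", name);
    return buf;
}

/* Hypothetical macro version of the check: the formatting expression
 * is only evaluated on the error branch, unlike a plain function call
 * whose message argument is always computed first. */
#define PASS_FCHK(status, name) \
    ((status) == 0 ? 0 : (puts(expensive_fmt(name)), -1))
```

Because the formatting sits inside the macro's error branch, `PASS_FCHK(rv, "disk")` costs nothing extra when `rv` indicates success, which is exactly the objection raised against the `apr_psprintf`-at-call-site version.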
Icons for 2.4
Hi, docs/icons/apache_pb2* contain the version number (2.2), in the case of docs/icons/apache_pb2_ani.gif it's even an animation. Any volunteers for changing these to 2.4? Cheers, Stefan
Re: svn commit: r1205894 - in /httpd/httpd/trunk: include/util_filter.h modules/cache/mod_cache.c server/util_filter.c
On Thu, 24 Nov 2011, j...@apache.org wrote: Author: jim Date: Thu Nov 24 15:53:16 2011 New Revision: 1205894 URL: http://svn.apache.org/viewvc?rev=1205894&view=rev Log: Use varargs... Modified: httpd/httpd/trunk/include/util_filter.h httpd/httpd/trunk/modules/cache/mod_cache.c httpd/httpd/trunk/server/util_filter.c Modified: httpd/httpd/trunk/server/util_filter.c URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/server/util_filter.c?rev=1205894&r1=1205893&r2=1205894&view=diff == --- httpd/httpd/trunk/server/util_filter.c (original) +++ httpd/httpd/trunk/server/util_filter.c Thu Nov 24 15:53:16 2011 @@ -544,17 +544,25 @@ AP_DECLARE(apr_status_t) ap_pass_brigade */ AP_DECLARE(apr_status_t) ap_pass_brigade_fchk(request_rec *r, apr_bucket_brigade *bb, - const char *errmsg) + const char *fmt, + ...) { apr_status_t rv; -if (!errmsg) -errmsg = "ap_pass_brigade returned"; rv = ap_pass_brigade(r->output_filters, bb); if (rv != APR_SUCCESS) { if (rv != AP_FILTER_ERROR) { -ap_log_rerror(APLOG_MARK, APLOG_DEBUG, rv, r, - "%s %d", errmsg, rv); +if (!fmt) +ap_log_rerror(APLOG_MARK, APLOG_DEBUG, rv, r, + "ap_pass_brigade returned %d", rv); +else { +va_list ap; +const char *res; +va_start(ap, fmt); +res = apr_pvsprintf(r->pool, fmt, ap); +va_end(ap); +ap_log_rerror(APLOG_MARK, APLOG_DEBUG, rv, r, res, NULL); +} No, this is not right. If some caller passes arguments to ap_pass_brigade_fchk that may cause the result of apr_pvsprintf to contain a '%', you would get a format-string vulnerability. This could easily happen if some error message included the URL. You must use ap_log_rerror(APLOG_MARK, APLOG_DEBUG, rv, r, "%s", res); instead. return HTTP_INTERNAL_SERVER_ERROR; } return AP_FILTER_ERROR;
Re: Proposal: error codes
On Sunday 27 November 2011, Rich Bowen wrote: At Apachecon several of us were discussing how error messages could be made more helpful without making them paragraphs. Two suggestions were made - adding a URL to the message or adding a number/code to each error that would then be looked up for more information. Any thoughts on 1) the wisdom of this and 2) the method of assigning codes? I've long considered error messages to be documentation and would love to see the log files be one step more helpful. Yes, that would be a good idea and I agree with Daniel that we should use a distinct prefix or format. We currently have around 2700 calls to *_log_?error in trunk, so a 4-digit number should be ok. Together with for example AH as prefix for Apache HTTPD this would result in numbers like AH0815 which don't seem to cause many hits on google. As a first step, one could simply manually add the error code to the format strings of those messages that frequently appear in support requests. This would be non-disruptive for 2.4 and could even be done in some future 2.2.x version, if someone is willing to do the work. I would keep the scheme for assigning codes simple. Just have a file that contains the next usable one and always increment the number in that file when a code is assigned to an error message. We could then have some scripts to check that no error code is used twice and maybe create a list of error message and module for each error code. If somebody later comes up with something more refined (e.g. pass the error code as additional parameter to ap_log_error), it should be straightforward enough to switch. Cheers, Stefan
Can we be less forgiving about what we accept?
Hi, while browsing a bit through Michael Zalewski's new Tangled Web book, I was reminded again that we are very forgiving about what we accept as a request. Is this really a good idea in the time of lots of web security issues? Examples include: * in the request line, the protocol may be arbitrary garbage (including control characters) and the request will be interpreted as HTTP/1.0. r->protocol will be set to that garbage. * the request method may contain arbitrary garbage (including control characters), as long as it does not contain white space, and is sent as method to a backend server by mod_proxy * We still accept HTTP/0.9 for incoming requests. Is this necessary? Do we want to make this configurable? * Mod_proxy_http will accept arbitrary garbage sent by the backend server as HTTP/0.9 response. I think this one is also pretty bad from a usability point of view because it makes it more difficult to diagnose if a backend server sends some body that does not match the length information in the headers. Is accepting 0.9 responses really necessary? Do we want to make this configurable? Even disable by default? * headers may contain control characters * With 'ProxyRequests off', we accept absolute urls like http://hostname/path for local requests, but we don't check that the hostname contained in it actually matches the Host header if there is one. The hostname from the URI is then used for vhost matching and put into r->hostname. This is mandated by RFC2616 but I guess there are quite a few buggy webapps that always look into the Host header. A workaround may be to set the Host header to the hostname from the URI in this case. * We accept multiple headers with the same name even where it makes no sense, e.g. Host and Range * We accept unescaped control characters in the URI In all cases where I mention control characters, this does not include NUL-bytes but it includes carriage returns which are considered end of line on some systems.
Changing this is of course too late for 2.4. But options to disable HTTP 0.9 for incoming requests and for mod_proxy would be nice, at least for 2.4.1. And maybe we could have an 'unforgiving' option that catches the other things I mentioned and whatever else we find. Or do you think that such checks should be the sole responsibility of mod_security and the like? Cheers, Stefan
Re: Can we be less forgiving about what we accept?
On Monday 28 November 2011, Nick Kew wrote: On 28 Nov 2011, at 00:37, Stefan Fritsch wrote: Hi, while browsing a bit through Michael Zalewski's new Tangled Web book, I was reminded again that we are very forgiving about what we accept as a request. Is this really a good idea in the time of lots of web security issues? Sounds like you're thinking of something like mod_taint[1] plus a default ruleset to ship it with? I thought more of something that is contained in the core, aborts processing early for invalid requests, is not configurable (except maybe for a lax/strict switch) and does not reduce performance in any significant way. Not sure if a regex approach is right there. But I am not sure if doing such validation in the core is worth the effort, either.
Re: mod_xml2enc comments
On Sunday 13 November 2011, Nick Kew wrote: Indeed, checking those return values would be better. May have been lost when I separated out the i18n code from its origins in markup filtering. I have added some error checks and a few ap_asserts(). Do you want to review it before I merge it into 2.4? Do you have some setup where you can test the change?
Re: svn commit: r1207721 - in /httpd/httpd/branches/2.4.x: ./ build/rpm/httpd.spec.in
On Tuesday 29 November 2011, Igor Galić wrote: I hope that other vendors will pick up our packaging as the canonical way, and improve the way httpd is deployed out there. +1 sf - how are you planning to do this, btw ;) I think we are going to drop the separate MPM packages and build all MPMs shared. I don't think we are going to move packages into separate modules, though. The dependencies are either very small (lua) or installed on most systems anyway (openssl, libxml). Apart from that, I haven't thought much about it, yet. I guess the Debian packages won't change very much.
Re: Proposal: error codes
On Tuesday 29 November 2011, William A. Rowe Jr. wrote: On 11/27/2011 8:34 AM, Rich Bowen wrote: At Apachecon several of us were discussing how error messages could be made more helpful without making them paragraphs. Two suggestions were made - adding a URL to the message or adding a number/code to each error that would then be looked up for more information. Any thoughts on 1) the wisdom of this and 2) the method of assigning codes? Keep these numeric and it will assist us in later internationalizing error log messages. I was initially thinking in terms of using hashed strings into mapping indexes, but such numeric codes would be valuable. Well, the number/code will be a number with some prefix (e.g. AH) for search engine friendliness. The number could later potentially be used for looking up translations, but not without API changes. And I am not sure if that would be the smartest idea. Two log messages that contain exactly the same text but are emitted by different parts of httpd should have two distinct numbers. This may not be the best way to handle translations. For 2.4, I would pass the number as part of the format string. This does not require any code change (in the sense of programming logic) and is therefore still doable in 2.4. Ideally we could assign these numbers into the appropriate apr_errno range. I think this is kind of orthogonal to having numbers in each log message. There is often an APR error code that supplements the log message. The log message says what went wrong and the error code says how it went wrong (e.g. message is "Can't open file %s", error code is ENOENT). But we don't want the error strings for APR error codes to be format strings. The number of the log message however could be listed in the documentation saying that this message happens if a client requests a non-existing URL.
On the other hand, we could of course make httpd APIs return httpd-specific apr_errno values that could then be passed in the usual apr_status_t argument to ap_log_*error and be printed like normal APR error messages. But maybe your and Mikhail's mails are showing that we should have some name for these error message numbers that makes it possible to distinguish them from error codes such as those returned by system or library calls. Maybe message tag or error tag?
Re: Test failures and libwww-perl 6.0.3
On Tuesday 29 November 2011, Kaspar Brand wrote: On 23.11.2011 15:06, Joe Orton wrote: On Wed, Nov 23, 2011 at 08:37:31AM +0100, Kaspar Brand wrote: There are two approaches to fix 1): a) turn off verify_hostname where needed (t/ssl/pr12355.t and t/ssl/pr43738.t are doing this right now) or b) specify the CA cert (generated in t/conf/ca/...) to make verification work/succeed. (b) sounds better; I presume it doesn't break with older versions of LWP? LWP::UserAgent 6 will warn when supplying unrecognized options to its constructor, so I made the changes depend on $LWP::VERSION. I have committed the patches in r1207758 (for Apache::Test) and r1207759 (adjustments to the t/ssl tests). Nice, works here now. Thanks.
Re: mod_xml2enc: warning: variable 'rv' set but not used
On Tuesday 29 November 2011, Graham Leggett wrote: Hi all, I've noticed some warnings in mod_xml2enc: mod_xml2enc.c: In function 'fix_skipto': mod_xml2enc.c:123:18: warning: variable 'rv' set but not used [-Wunused-but-set-variable] mod_xml2enc.c: In function 'sniff_encoding': mod_xml2enc.c:167:18: warning: variable 'rv' set but not used [-Wunused-but-set-variable] Should we be handling the errors here? rv = apr_brigade_partition(ctx->bbsave, (p - ctx->buf), &bstart); These are fixed in trunk r1206850. I intend to backport to 2.4. But if you have some time, a second set of eyeballs can't hurt.
Re: Error codes
On Monday 28 November 2011, Rich Bowen wrote: On Nov 28, 2011, at 11:21 AM, Stefan Fritsch wrote: A question on procedure: Do you want to add all error codes at once and then fill in the descriptions or add the error codes as the documentation evolves? If the former, some scripting would probably save a lot of work, too. If it can be done all at once with a script, that would be great. The only concern from there would be the ease of applying that change to earlier versions. Presumably the trunk - 2.4 patch would work pretty well, but - 2.2 would take a lot of work. Not sure that it would be worth it, though. I'm in favor of making this a 2.4-only effort. I am not sure that every debug message needs a code, maybe one could at first only tag those of level info or higher? Or maybe even warning? Ah. Good point. Yes, we should probably skip stuff in Debug, unless it makes sense to add them on a case-by-case basis later on. (Not sure what would justify that, but perhaps some messages are more common than others? More important? More … something.) I would say warn and higher, but perhaps it merits further discussion. Adding them to info now might save hassle later on if we wanted to further document those, and costs us nothing now, except for 6 characters in the log message. What do folks think? Currently my script produces: http://people.apache.org/~sf/error-msg-numbers.diff http://people.apache.org/~sf/error-msg-numbers.list This is level info and up, but that is easily changed. The script still misses all occurrences where the format string is split into several parts (e.g. over several lines), where the loglevel is not constant but a variable, and possibly some others. Are other folks comfortable with going this way? Add it to 2.4?
Re: Error codes
On Wednesday 30 November 2011, Guenter Knauf wrote: Am 30.11.2011 01:51, schrieb William A. Rowe Jr.: On 11/29/2011 5:30 PM, Stefan Fritsch wrote: Currently my script produces: http://people.apache.org/~sf/error-msg-numbers.diff http://people.apache.org/~sf/error-msg-numbers.list This is level info and up, but that is easily changed. Everything from debug level up should be coded. Suggestion, should we allocate numeric ranges per-file, core vs add-on module, so that there is room for expansion? I am not sure that this is necessary. The error log lines already contain the module name or core. Actually I currently have two scripts, one that tries to identify where to add a number and adds AH, and another one that takes each AH and replaces it with the next free number. So programmers would only need to add AH and run the second script over the source file. The only disadvantage may be that in a list sorted by log number, messages by the same module would not appear next to each other. On the other hand, it should be easy to extend the script to also create a list sorted by module or source file name if that is useful. another suggestion - perhaps add a 3rd char: AHC - core AHM - module then we would have for both core and modules the whole range of error codes ... Sometimes we move code from the core to a module or vice versa. If the code is mostly unchanged, I would want to keep the error message number in this case. So best not encode that in the number, IMHO. If we ever run out of codes, we could either switch to hex or add another digit. Or would it be better to use 5 digits right away? Another thought: Would having the AH0815 numbers verbatim in the source actually hurt search engine users because they get hits on svn.apache.org, github, and whatever. Maybe a macro that hides the actual form would be better? #define APLOGNO(n) "AH" #n should do the trick.
The source would then contain ap_log_error(..., APLOGNO(0815) "foo went wrong", ...); which would not produce a hit for AH0815. Thoughts?
Re: Error codes
On Wednesday 30 November 2011, Mikhail T. wrote: On 29.11.2011 23:30, William A. Rowe Jr. wrote: But my point remains, that we allocate each module a block of some 50 codes, such that mod_aaa gets AHM-0049 and mod_aab gets 50-99, etc. How will 3rd-party modules be getting their blocks? One idea would be to have them use other prefixes than AH. But this may be a bad idea if we ever want the number to be used for other things.
Re: Proposal: error codes
On Wednesday 30 November 2011, Tim Bannister wrote: On 27 Nov 2011, at 17:14, Stefan Fritsch wrote: Yes, that would be a good idea and I agree with Daniel that we should use a distinct prefix or format. We currently have around 2700 calls to *_log_?error in trunk, so a 4-digit number should be ok. Together with for example AH as prefix for Apache HTTPD this would result in numbers like AH0815 which don't seem to cause many hits on google. I think most people still use file logging, but Apache httpd does also support syslog. And over the life of Apache httpd syslog has gained features too, such as message codes. http://tools.ietf.org/html/rfc5424 section 6.2.7 says: The MSGID SHOULD identify the type of message. For example, a firewall might use the MSGID TCPIN for incoming TCP traffic and the MSGID TCPOUT for outgoing TCP traffic. Messages with the same MSGID should reflect events of the same semantics. The MSGID itself is a string without further semantics. It is intended for filtering messages on a relay or collector. There is also a mechanism for structured metadata. I don't know whether either or both of these will be written into future Apache httpd… but I thought it was worth mentioning these early on in the discussion. That's definitely interesting to know, but one has to see if there is a standard API for these things. The Linux man pages of syslog() and friends do not mention MSGID.
Re: Error codes
On Wednesday 30 November 2011, Graham Leggett wrote: On 30 Nov 2011, at 9:21 PM, William A. Rowe Jr. wrote: I'm not suggesting changing the alpha prefix. Just block out ranges so that any listing of the codes is grouped by module that emits them. From my experience, any attempt at grouping some kind of numbering like this normally results, a few years later, in an attempt to undo the grouping to solve problems caused by the ranges being too small, or running out of numbers of a sensible size. I agree with Graham here. mod_ssl has 300 messages, so the range per module would likely need to be something like 1000, and with 100 modules, this would mean 6 digits in the number. And tracking one counter per module would make my script way more complex. Ideally it should be as simple as possible, run a script and a number will be chosen for you is a lot more convenient, and if a script could warn of duplicated numbers for fixing (think the result of cut-and-paste by someone unfamiliar with the script) that would be ideal too. The current state is here: http://people.apache.org/~sf/log-msg-numbers.diff http://people.apache.org/~sf/log-msg-numbers.list http://people.apache.org/~sf/log-msg-numbers.scripts.diff Changes to previous are - 5 digits instead of 4 - with APLOGNO() syntax (we really want that, just think of all the archives of the svn commit mailing list) - level debug and above instead of info - now after the s/r/c/p argument and not before the format; this makes the script a bit simpler and doesn't miss logging calls with multi-line format strings. But the number is now frequently on a different line than the format string. I have put the scripts there, too, but they need some cleaning up. Still todo are at least ap_log_cserror, ssl_log_error, dav_log_err, and everything that has a variable as level parameter. What to do about multi-line log messages that are split over several calls to ap_log...()? Grep prefork.c for 'almost certain server failure' for an example.
Maybe just add a number to the first line? This is something that will need manual adjustment. I guess before we commit this to 2.4, all other pending backports should be done. Any more comments/thoughts?
Re: svn commit: r1209754 - in /httpd/httpd/trunk/modules/proxy: ./ balancers/
On Friday 02 December 2011, Graham Leggett wrote: On 03 Dec 2011, at 12:42 AM, minf...@apache.org wrote: Author: minfrin Date: Fri Dec 2 22:42:39 2011 New Revision: 1209754 URL: http://svn.apache.org/viewvc?rev=1209754&view=rev Log: mod_proxy: Make ap_proxy_retry_worker() into an optional function. Allows mod_lbmethod_bybusyness, mod_lbmethod_byrequests, mod_lbmethod_bytraffic and mod_lbmethod_heartbeat to be loaded without mod_proxy yet being present, which happens when modules are loaded in alphabetical order. The proxy modules need a bunch more refactoring like this so that the modules can be loaded cleanly in any order. Right now, we've gotten away with this due to the alphabetical naming of the proxy modules, sorted alphabetically mod_proxy comes before mod_proxy_http, and it just so happens mod_proxy_http depends on a lot of symbols provided by mod_proxy, so we work by accident. The above change solves the problem for the mod_lbmethod_* modules, which sort alphabetically before mod_proxy and so cause a problem for people. While I want to backport this change to v2.4, we don't have to backport the rest of the changes if doing so will delay httpd v2.4 for any length of time. Right now I have a bit of time, so am keen to do it. There are also functions in mod_proxy that are only used by a single mod_proxy_* module which ideally should be moved to the module that needs it. There are also a bunch of utility functions that take a function name as argument for logging. I think this should actually be replaced by a module_index argument, which should then be passed to ap_log_*. This way the utility functions would write log messages in the name of the calling module, and be subject to the log config of the calling module. But that's a bit late for 2.4, too.
Re: Error codes
On Thursday 01 December 2011, Stefan Fritsch wrote: Any more comments/thoughts? As nobody disagreed, this is now in trunk. I intend to commit it to 2.4 tomorrow. It's already a big step forward and the finishing touches can be done in 2.4.1.
Are we there yet?
Hi, where are we WRT 2.4? Blockers: mod_proxy_scgi.c needs to be fixed for compilation with C89 (easy) The only blocker left in STATUS is this: * Modules that are not ready for production use must be removed. The same for modules without documentation. I think we have already removed the obvious candidates WRT maturity. About documentation, docs/STATUS has this: - modules docs - the following modules added since 2.2 lack documentation - mod_watchdog - mod_heartbeat - mod_heartmonitor - mod_lbmethod_heartbeat - mod_socache_dbm - mod_socache_memcache - mod_socache_shmcb Do we want to remove any of these modules? The unique tags for log messages change is waiting for backport to 2.4. But if people would want to let it sit in trunk a bit and wait for 2.4.1, that would be fine with me, too. Anything else? Jim, do you consider your mod_proxy* changes finished? Do you think they are well enough tested for an instant 2.4.0? Graham has suggested some more refactoring for mod_proxy*. Would certainly be nice but not a blocker. Do we need another beta? Or just roll a release candidate tarball, have people test a bit, and if it's good, tag 2.4.0? Cheers, Stefan
Re: Are we there yet?
On Saturday 03 December 2011, William A. Rowe Jr. wrote: On 12/3/2011 1:32 AM, Gregg L. Smith wrote: On 12/2/2011 3:48 PM, Stefan Fritsch wrote: - the following modules added since 2.2 lack documentation - mod_socache_dbm - mod_socache_memcache - mod_socache_shmcb These are required for SSL AFAIK, they must stay! Mentioned in the mod_ssl docs should be sufficient IMHO. It isn't. Someone trimming their LoadModule lines will undoubtedly hit the wall trying to figure out why they couldn't just switch from memcache or dbm back to shmcb. Someone else will break their server simply removing unfamiliar modules from the list, particularly one which appears nowhere in the ASF HTTP Server module documents lists. This, and the mod_slotmem_* objects, really make lousy modules. Isn't the answer to this that all consumers of slotmem and socache should simply point to the mod_slotmem_*/mod_socache_* modules in the error message if the configured provider is not found? Then the user would immediately know that another module is required.
Re: svn commit: r1209777 - in /httpd/httpd/branches/2.4.x/include: ap_mmn.h http_log.h
On Saturday 03 December 2011, Nick Kew wrote: On 2 Dec 2011, at 23:19, s...@apache.org wrote: Modified: httpd/httpd/branches/2.4.x/include/ap_mmn.h httpd/httpd/branches/2.4.x/include/http_log.h [...] + * 20111202.1 (2.5.0-dev) add APLOGNO() 2.4 or 2.5? Changed it to 2.4. I guess we still need to update the comments in the trunk and 2.4.x ap_mmn.h files after the dust has settled. I guess the trunk file should get the history of 2.4.x, because that's what module authors need to take into account. Isn't this incompatible with being in a near-release beta state? Could certainly use a couple of test releases before being set in stone! There is no actual code change. Only a lot of strings added to error messages. But I would be fine with adding it later, too. Can some other people please voice their opinions?
Re: Are we there yet?
On Sun, 4 Dec 2011, Jim Jagielski wrote: There seems to be a lot of renewed effort in getting 2.4/trunk into a really releasable state, which is all Goodness. Ideally, I'd like to release 2.4.0 before the end of the year, but starting off 2012 with a new httpd release also makes some sense as well. I was kind of hoping for GA before Christmas... With all the changes, I would like to have another 'beta' release simply to try to work out any remaining issues, but my fear is that this would also encourage a lot of rework rather than polishing (if you get my drift). So tagging as a RC puts us in the more correct state of mind. +1 for a RC ASAP. Do you have time to RM again? Would we name that RC 2.4.0 or would we release 2.3.16 from the 2.4.x branch? We could also consider putting the focus more on polishing by making API changes and large refactorings review-then-commit. Probably with the exception of the proxy/slotmem plans that were already put forward on the list. I would also like to know what more people think about the error message tag thing? AIUI Rich, Graham, and myself are for including it in 2.4 now, while Nick prefers to wait. If we decide to wait, it should maybe be reverted from trunk, too, to make merging of commits to 2.4 easier.
Slotmem/socache module names vs. provider names
mod_slotmem_plain     plain
mod_slotmem_shm       shared   !
mod_socache_dbm       dbm
mod_socache_dc        dc
mod_socache_memcache  mc       !
mod_socache_shmcb     shmcb
Should we align the provider names with the module names? E.g. change "shared" to "shm" and "mc" to "memcache"?
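For context on where these names surface: the configuration refers to the provider name, not the module name, which is why a mismatch is confusing. A sketch using mod_ssl's session cache (the path and size here are illustrative):

```apache
# The module is named mod_socache_shmcb ...
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
# ... but the directive selects it via the provider name "shmcb"
SSLSessionCache shmcb:/var/run/ssl_scache(512000)
```

Note that renaming a provider (e.g. "shared" to "shm") can therefore be a user-visible configuration change, not just an internal cleanup.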
Re: Are we there yet?
On Sunday 04 December 2011, Jim Jagielski wrote: I also need to look at the event changes as well in trunk to see if they are in 2.4.0 as well (or if they are something we could easily add post 2.4.0)... The event changes in trunk are not ready for 2.4, see http://mail-archives.apache.org/mod_mbox/httpd-dev/20.mbox/%3CCAHSOX_C872kk7HWxfaeiEsHQm_2QVJDYjBajkg5RXjaBtx4%2BJQ%40mail.gmail.com%3E But I think it should be possible to backport them post-GA.
Re: svn commit: r1209766 [9/12] - in /httpd/httpd/trunk: docs/log-message-tags/ modules/aaa/ modules/apreq/ modules/arch/netware/ modules/arch/unix/ modules/arch/win32/ modules/cache/ modules/cluster/
On Wednesday 07 December 2011, Kaspar Brand wrote: These changes aren't doing the right thing, I think... both ssl_log_ssl_error() and ssl_log_cert_error() are basically wrappers for ap_log_*(), and are therefore called from various places in mod_ssl - i.e. the messages triggering them should get different tags. (Also, in the case of ssl_log_cert_error, the error tag is currently inserted somewhere in the middle of the message, because ssl_log_cert_error appends to a message already stored in buf.) ssl_log_ssl_error is (virtually?) always called in combination with ap_log_*, so we might do without a tag for these messages - or otherwise, modify ssl_log_ssl_error to allow a tag to be passed in, and then use the same tag in both log calls (?). Finally, as far as ssl_log_xerror/ssl_log_cxerror/ssl_log_rxerror are concerned, would it be possible to extend the Coccinelle patch file so that these are also recognized? Good point. Fixed/added in r1211680. I have not modified ssl_log_ssl_error() besides removing the tag. It's really only called together with some other log message, so the tag doesn't add any value.
Re: Time for httpd 2.4.0-RC1 ??
On Sunday 11 December 2011, Graham Leggett wrote: On 11 Dec 2011, at 15:01, Jim Jagielski j...@jagunet.com wrote: Now that apu-1.4.1 is close to release, it looks like we are close to being able to have our 1st RC for 2.4.0... My plan is to TR sometime this week... +1. BTW, is there any reason why r1210261 (mod_slotmem_shm conf syntax) has not been backported yet? Apart from that, +1.
Re: svn commit: r1210378 - /httpd/httpd/trunk/server/util_expr_eval.c
On Tuesday 13 December 2011, Guenter Knauf wrote: Hi Stefan, Am 05.12.2011 10:38, schrieb s...@apache.org: Author: sf Date: Mon Dec 5 09:38:44 2011 New Revision: 1210378 URL: http://svn.apache.org/viewvc?rev=1210378&view=rev Log: Fix a few compiler warnings reported by Steffen: - some signed/unsigned mismatches - const for a function does not make sense Modified: httpd/httpd/trunk/server/util_expr_eval.c I got a report that since this commit NetWare compilation breaks: CC server/util_expr_eval.c ### mwccnlm Compiler: #File: server\util_expr_eval.c # # 92: result = ap_expr_eval_var(ctx, node->node_arg1, node->node_arg2); # Error: ^ # illegal implicit conversion from 'const void *const ' to # 'char * (*)(struct *, const void *)' ### mwccnlm Compiler: # 677: ap_expr_op_unary_t *op_func = info->node_arg1; # Error: ^ # illegal implicit conversion from 'const void *const ' to # 'int (*)(struct *, const void *, const char *)' ### mwccnlm Compiler: # 690: ap_expr_op_binary_t *op_func = info->node_arg1; # Error:^ # illegal implicit conversion from 'const void *const ' to # 'int (*)(struct *, const void *, const char *, const char *)' sorry, but I only post due to RC1 plans in the hope you can take a look - I had absolutely no time yet to look at it (and ATM still don't have) greets, Gün. I think r1213567/r1213570 should fix it. Would be nice if you (or someone else) could check it, though. Cheers, Stefan
Re: [VOTE] Release 2.3.16-beta as beta
On Thursday 15 December 2011, Jim Jagielski wrote: The 2.3.16-beta (prerelease) tarballs are available for download at test: http://httpd.apache.org/dev/dist/ I'm calling a VOTE on releasing these as 2.3.16-beta BETA and, with luck, this IS our last beta and the next release in ~2weeks or less will be 2.4.0 GA!! Vote will last the normal 72 hours... +1, tested on Debian unstable with prefork and event
Re: Win64 2.3.16 :: build warnings
Hi Steffen, On Saturday 17 December 2011, Steffen wrote: Here the Win64 warnings attached. Quite a lot, 442. Most of these are conversions between various integer types. I think the majority of these are in fact correct code. It would be quite a lot of work to fix these and I am not sure that it would be worth the effort. I would recommend to simply disable that warning for now. The same is true for various signed/unsigned mismatch warnings. BTW, some warnings lack line numbers and can't be checked ATM. After filtering these out, the following remain: Warning 111 warning C4715: 'ap_expr_str_exec_re' : not all control paths return a value e:\vc9\win64\httpd-2.3.16-beta\server\util_expr_eval.c 925 This seems to imply that the compiler does not know that ap_log_assert() does not return. Is there some equivalent of __attribute__((noreturn)) for VC? Warning 161 warning C4133: 'function' : incompatible types - from 'timeval *' to 'l_timeval *' E:\VC9\Win64\httpd-2.3.16-beta\modules\ldap\util_ldap.c 502 Warning 162 warning C4133: 'function' : incompatible types - from 'timeval *' to 'l_timeval *' E:\VC9\Win64\httpd-2.3.16-beta\modules\ldap\util_ldap.c 937 Warning 163 warning C4133: 'function' : incompatible types - from 'timeval *' to 'l_timeval *' E:\VC9\Win64\httpd-2.3.16-beta\modules\ldap\util_ldap.c 1688 Warning 164 warning C4133: 'function' : incompatible types - from 'timeval *' to 'l_timeval *' E:\VC9\Win64\httpd-2.3.16-beta\modules\ldap\util_ldap.c 1946 I don't know what l_timeval is. But if Windows ldap uses something different than timeval, maybe there should be some #ifdef magic to deal with that. Warning 165 warning LNK4013: image size 0x12000 exceeds specified maximum 0x1 mod_ldap This sounds like it could be a problem. Can some Windows dev take a look?
Re: [VOTE] Release 2.3.16-beta as beta
On Sunday 18 December 2011, Jim Jagielski wrote: On Dec 18, 2011, at 11:53 AM, Stefan Fritsch wrote: On Thursday 15 December 2011, Jim Jagielski wrote: The 2.3.16-beta (prerelease) tarballs are available for download at test: http://httpd.apache.org/dev/dist/ I'm calling a VOTE on releasing these as 2.3.16-beta BETA and, with luck, this IS our last beta and the next release in ~2 weeks or less will be 2.4.0 GA!! Vote will last the normal 72 hours... +1, tested on Debian unstable with prefork and event unstable ?? That's what the development branch of Debian is called. I did not mean that httpd was unstable.
Re: CVE-2011-3607, int overflow ap_pregsub()
On Wed, 21 Dec 2011, Greg Ames wrote: On Tue, Dec 20, 2011 at 4:26 AM, William A. Rowe Jr. wr...@rowe-clan.net wrote: We should come to a conclusion on this. How about this for 2.2.x?

--- server/util.c (revision 1179624)
+++ server/util.c (working copy)
@@ -82,6 +82,8 @@
 #define IS_SLASH(s) (s == '/')
 #endif
+/* same as APR_SIZE_MAX which doesn't appear until APR 1.3 */
+#define UTIL_SIZE_MAX (~((apr_size_t)0))
 /*
  * Examine a field value (such as a media-/content-type) string and return
@@ -391,6 +393,11 @@
             len++;
         }
         else if (no < nmatch && pmatch[no].rm_so < pmatch[no].rm_eo) {
+            if (UTIL_SIZE_MAX - len <= pmatch[no].rm_eo - pmatch[no].rm_so) {
+                ap_log_error(APLOG_MARK, APLOG_WARNING, APR_ENOMEM, NULL,
+                             "integer overflow or out of memory condition.");
+                return NULL;
+            }
             len += pmatch[no].rm_eo - pmatch[no].rm_so;
         }

len is int in 2.2. This should be changed into an apr_size_t, too. Full disclosure: my make using apr 1.2 choked trying to compile byterange_filter because apr_array_clear wasn't defined. It is in apr 1.3. However httpd's configure.in appears to be happy with any apr 1.x release. Then configure should be fixed, IMHO.
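The guard above works because UTIL_SIZE_MAX - len is exactly the headroom left before the unsigned type wraps. A minimal sketch of the same pattern in plain C (checked_add and MY_SIZE_MAX are made-up names for illustration; note the proposed patch uses <= rather than <, so a byte of headroom remains for a trailing NUL):

```c
#include <stddef.h>

/* Same trick as the proposed UTIL_SIZE_MAX: all-ones is the largest
 * value an unsigned type can hold. */
#define MY_SIZE_MAX (~((size_t)0))

/* Hypothetical helper: grow *len by add only when the sum cannot wrap.
 * Returns 0 on success, -1 if len + add would overflow. */
static int checked_add(size_t *len, size_t add)
{
    if (MY_SIZE_MAX - *len < add)
        return -1;              /* would overflow, leave *len untouched */
    *len += add;
    return 0;
}
```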
Looking for Windows testers for r1225199
Hi, Author: sf Date: Wed Dec 28 14:54:49 2011 New Revision: 1225199 URL: http://svn.apache.org/viewvc?rev=1225199&view=rev Log: Check during configtest that the directories for error logs exist Testing under Windows is welcome PR: 29941 I think that the combination of ap_server_root_relative and ap_make_dirstr_parent should also work under Windows with backslashes as path separators. But it would be nice if somebody could test that before I backport to 2.4. Cheers, Stefan
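To make the question concrete, here is an illustrative stand-in for the behavior being relied on (dirstr_parent is a made-up name, not the pool-based APR API): copy everything up to and including the last path separator, and accept both '/' and '\\' so the same logic covers Windows-style paths:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in for ap_make_dirstr_parent(): return a freshly
 * allocated copy of everything up to and including the last path
 * separator, treating both '/' and '\\' as separators. */
static char *dirstr_parent(const char *path)
{
    const char *last = path + strlen(path);

    /* walk back until we sit just past a separator (or hit the start) */
    while (last > path && last[-1] != '/' && last[-1] != '\\')
        --last;

    size_t n = (size_t)(last - path);
    char *out = malloc(n + 1);
    if (out) {
        memcpy(out, path, n);
        out[n] = '\0';
    }
    return out;
}
```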
Re: Fwd: svn commit: r1225199 - in /httpd/httpd/trunk: docs/log-message-tags/next-number server/core.c
On Wednesday 28 December 2011, Rüdiger Plüm wrote: Author: sf Date: Wed Dec 28 14:54:49 2011 New Revision: 1225199 URL: http://svn.apache.org/viewvc?rev=1225199&view=rev Log: Check during configtest that the directories for error logs exist Testing under Windows is welcome PR: 29941 Modified: httpd/httpd/trunk/docs/log-message-tags/next-number httpd/httpd/trunk/server/core.c Modified: httpd/httpd/trunk/server/core.c URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/server/core.c?rev=1225199&r1=1225198&r2=1225199&view=diff

--- httpd/httpd/trunk/server/core.c (original)
+++ httpd/httpd/trunk/server/core.c Wed Dec 28 14:54:49 2011
@@ -4342,6 +4342,44 @@ AP_DECLARE(int) ap_sys_privileges_handle
     return sys_privileges;
 }
+static int check_errorlog_dir(apr_pool_t *p, server_rec *s)
+{
+    if (s->error_fname[0] == '|' && strcmp(s->error_fname, "syslog") == 0)

Doesn't this need to be || instead of &&? yes, thanks. And error_fname can be NULL in vhosts. Fixed in r1225223.
Re: Looking for Windows testers for r1225199
On Wednesday 28 December 2011, Mario Brandt wrote: I guess it is now r1225223 ? Yes, r1225199 plus r1225223.
Re: Advanced status table?
On Wednesday 28 December 2011, Mario Brandt wrote: Since 2.3.? there is this nice overview table in the server-status page. http://www.images-hack.de/bild.php/15079,statusKCQ3G.png It shows the status for Apache working with threads. Why do I see it only with the event MPM, and not with the worker MPM or the winnt MPM? It is a real nice-to-have feature, since it also shows the PIDs. Most of the columns are only relevant for asynchronous MPMs, and event is currently the only asynchronous MPM. On non-async threaded MPMs, only the PID and idle/busy threads columns would be relevant. The number of connections is always identical to the number of busy threads and there are no async connections. If you think that a per-process summary of the worker slot states (R, K, C, W, etc.) would be useful, that could be added. But that's not what is currently displayed with the event MPM.
Re: 2.3.15 RewriteRule P
On Wednesday 16 November 2011, Steffen wrote: What I noticed, it is connecting to a port via a formerly used proxied connection (port 7080 instead of 81); Summary log: [proxy:debug] [pid 8680:tid 2668] proxy_util.c(2140): proxy: HTTP: has acquired connection for (*) [proxy:debug] [pid 8680:tid 2668] proxy_util.c(2193): proxy: connecting http://127.0.0.1:81/sysadmin to 127.0.0.1:81 [proxy:debug] [pid 8680:tid 2668] proxy_util.c(2319): proxy: connected /sysadmin to 127.0.0.1:7080 After a cursory glance at the code, I have a suspicion about the reason. It seems to me that this check in proxy_util.c

/*
 * Make sure that we pick the correct and valid worker.
 * If a single keepalive connection triggers different workers,
 * then we have a problem (we don't select the correct one).
 * Do an expensive check in this case, where we compare
 * the hostnames associated between the two.
 *
 * TODO: Handle this much better...
 */
if (!conn->hostname || !worker->s->is_address_reusable ||
    worker->s->disablereuse ||
    (r->connection->keepalives &&
     (r->proxyreq == PROXYREQ_PROXY || r->proxyreq == PROXYREQ_REVERSE) &&
     (strcasecmp(conn->hostname, uri->hostname) != 0) ) ) {

should also compare conn->port and uri->port, i.e. the

(strcasecmp(conn->hostname, uri->hostname) != 0)

should be

(strcasecmp(conn->hostname, uri->hostname) != 0 || conn->port != uri->port)

Can anyone more familiar with the code verify this? Steffen, maybe you can try the change and see if it helps? Cheers, Stefan
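A reduced model of the suspicion above (may_reuse and the target struct are made-up names for illustration; the real check in proxy_util.c also consults keepalive state and worker flags): a kept-alive backend connection is only safe to reuse when both the hostname and the port match the new target.

```c
#include <string.h>
#include <strings.h>   /* strcasecmp */

/* Hypothetical reduced model of the endpoint a proxied connection points
 * at; the real proxy_conn_rec carries much more state. */
typedef struct {
    const char *hostname;
    int port;
} target;

/* Reuse requires a case-insensitive hostname match AND a port match;
 * comparing only the hostname is what lets a request for :81 ride an
 * old keepalive connection that still points at :7080. */
static int may_reuse(const target *conn, const target *uri)
{
    return strcasecmp(conn->hostname, uri->hostname) == 0
           && conn->port == uri->port;
}
```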
Re: [PATCH] mod_mbox: show list name in the <h1/>
On Monday 02 January 2012, Daniel Shahaf wrote: Henri Yandell (Created) (JIRA) wrote on Wed, Dec 21, 2011 at 06:03:30 +: Note list name on mail detail page -- Key: INFRA-4238 URL: https://issues.apache.org/jira/browse/INFRA-4238 Project: Infrastructure Issue Type: Improvement Security Level: public (Regular issues) Reporter: Henri Yandell Priority: Minor Looking at http://mail-archives.apache.org/mod_mbox/www-infrastructure-dev/201112.mbox/%3ca603ffce-623b-43e9-87f8-39baa51c7...@gbiv.com%3E - it would be nice to know what mailing list the email is to in the UI. Currently you have to infer it from links or the URL. Having it as some kind of title would be good. Possibly it could replace the 'list index' text for the link to the emails for that mailing list. [[[ Tweak <h1/> title in the message display screen, for INFRA-4238. Patch by: danielsh * module-2.0/mod_mbox_out.c (mbox_ajax_browser, mbox_static_browser): Include the directory's basename in the title. ]]] Hi Daniel, I have done some work on mod_mbox last week which I have committed now. It includes displaying the list name in the <h1> and <title> tags, though implemented slightly differently than your patch. This and some more code cleanup is now in trunk. This code should still work with 2.2. In the new convert-charsets branch, I have committed some work to actually convert mail bodies, subject, and from lines to UTF-8. This should take care of a load of display problems with various mails. I have tested it with Firefox 9, Chrome, and KDE Konqueror 4.6. There are two caveats, though: it adds a field to the cache files, which means that all mbox cache files need to be re-created, and it depends on HTTPD 2.3.x functionality. It would of course be very nice if you could test the code from the branch, but the re-indexing is of course more work. Maybe you could pull some tricks with ZFS snapshots to allow for a fall-back path. Cheers, Stefan
Re: 2.3.15 RewriteRule P
On Sun, 1 Jan 2012, Eric Covener wrote: Can anyone more familiar with the code verify this? Steffen, maybe you can try the change and see if it helps? I think there are a few additional wrinkles -- I couldn't repro after this but I'm not confident about what's right with the addr handling: http://people.apache.org/~covener/patches/trunk-proxy_toggle_ports.diff

Index: modules/proxy/proxy_util.c
===
--- modules/proxy/proxy_util.c (revision 1226300)
+++ modules/proxy/proxy_util.c (working copy)
@@ -2029,7 +2029,7 @@
      * TODO: Handle this much better...
      */
     if (!conn->hostname || !worker->s->is_address_reusable ||
-        worker->s->disablereuse ||
+        worker->s->disablereuse || conn->port != uri->port ||
         (r->connection->keepalives &&
          (r->proxyreq == PROXYREQ_PROXY || r->proxyreq == PROXYREQ_REVERSE) &&
          (strcasecmp(conn->hostname, uri->hostname) != 0) ) ) {

The different handling of conn->port and conn->hostname doesn't look right to me. Can the r->proxyreq check actually be false at this point or is it simply redundant? And shouldn't the strcasecmp(conn->hostname, uri->hostname) check be done regardless of r->connection->keepalives?

@@ -2076,6 +2076,8 @@
                                 conn->pool);
     }
     else if (!worker->cp->addr) {
+        conn->port = uri->port;
+        conn->hostname = apr_pstrdup(conn->pool, uri->hostname);
         if ((err = PROXY_THREAD_LOCK(worker)) != APR_SUCCESS) {
             ap_log_rerror(APLOG_MARK, APLOG_ERR, err, r, APLOGNO(00945) "lock");
             return HTTP_INTERNAL_SERVER_ERROR;
@@ -2097,7 +2099,9 @@
         }
     }
     else {
-        conn->addr = worker->cp->addr;
+        if (!conn->addr) {
+            conn->addr = worker->cp->addr;
+        }
     }
     /* Close a possible existing socket if we are told to do so */
     if (conn->close) {

<VirtualHost *:80>
RewriteEngine on
RewriteRule ^/a http://localhost:81/a [P]
RewriteRule ^/b http://localhost:82/b [P]
</VirtualHost>
<VirtualHost *:81>
AliasMatch ^/a$ /home/covener/SRC/httpd-trunk/built/icons/a.gif
</VirtualHost>
<VirtualHost *:82>
AliasMatch ^/b$ /home/covener/SRC/httpd-trunk/built/icons/a.gif
</VirtualHost>

Before any patch, a sequence as simple as /a then /b fails.
The worker->cp->addr copy issue fails with something like /a then /b then /b.
remove mod_heart* from 2.4? (was: 2.4.0 GA This week?)
On Monday 02 January 2012, Takashi Sato wrote: mod_heartbeat mod_heartmonitor mod_lbmethod_heartbeat These have no documentation. Is it OK to release GA with undocumented modules? I am +0.5 for removing them from 2.4.0. They can be re-added when someone writes the docs. Other opinions?