[STATUS] (flood) Wed Jun 9 23:46:05 EDT 2004
flood STATUS:                                           -*-text-*-
Last modified at [$Date: 2003/07/01 20:55:12 $]

Release:
    1.0:          Released July 23, 2002
    milestone-03: Tagged January 16, 2002
    ASF-transfer: Released July 17, 2001
    milestone-02: Tagged August 13, 2001
    milestone-01: Tagged July 11, 2001 (tag lost during transfer)

RELEASE SHOWSTOPPERS:

    * Everything needs to work perfectly

Other bugs that need fixing:

    * I get a SIGBUS on Darwin with our examples/round-robin-ssl.xml
      config, on the second URL. I'm using OpenSSL 0.9.6c 21 dec 2001.
    * iPlanet sends "Content-length" - there is a hack in there now to
      recognize it. However, all HTTP headers need to be normalized
      before checking their values. This isn't easy to do. Grr.
    * OpenSSL 0.9.6 segfaults under high load. Upgrade to OpenSSL 0.9.6b.
      Aaron says: I just found a big bug that might have been causing
      this all along (we weren't closing SSL sockets). How can I
      reproduce the problem you were seeing, to verify whether this was
      the fix?
    * SEGVs when /tmp/.rnd doesn't exist are bad. Make it configurable
      and at least bomb with a good error message. (See Doug's patch.)
      Status: This is fixed, no?
    * If APR has disabled threads, flood should as well. We might want
      to have an enable/disable parameter that does this also, providing
      an error if threads are desired but not available.
    * flood needs to clear pools more often. With a long-running test it
      can chew up memory very quickly. We should just bite the bullet
      and create/destroy/clear pools for each level of our model: farm,
      farmer, profile, url/request-cycle, etc.
    * APR needs to have a unified interface for ephemeral port
      exhaustion, but apparently Solaris and Linux return different
      errors at the moment. Fix this in APR, then take advantage of it
      in flood.
    * The examples/analyze-relative scripts fail when there are fewer
      than 5 unique URLs.
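The pool-clearing item above can be sketched without APR: a bump allocator whose clear() just resets an offset, so a long run's memory footprint stays bounded by one request cycle instead of growing with test length. The arena_* type and names here are hypothetical stand-ins for apr_pool_t, not flood's actual code.

```c
#include <stdlib.h>
#include <string.h>

/* Minimal bump-allocator "pool" -- a hypothetical, APR-free stand-in
 * for apr_pool_t, used only to illustrate per-cycle clearing. */
typedef struct { char *base; size_t cap; size_t used; } arena_t;

arena_t *arena_create(size_t cap) {
    arena_t *a = malloc(sizeof(*a));
    a->base = malloc(cap);
    a->cap = cap;
    a->used = 0;
    return a;
}

/* Bump allocation; returns NULL when the slab is exhausted
 * (no slab growth in this sketch). */
void *arena_alloc(arena_t *a, size_t n) {
    if (a->used + n > a->cap) return NULL;
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

/* Clearing resets the offset -- O(1), in the spirit of apr_pool_clear(). */
void arena_clear(arena_t *a) { a->used = 0; }

void arena_destroy(arena_t *a) { free(a->base); free(a); }

/* Run `cycles` simulated request cycles, clearing the pool after each;
 * returns peak bytes in use, which stays bounded by one cycle's needs. */
size_t run_cycles(arena_t *a, int cycles, size_t per_cycle) {
    size_t peak = 0;
    for (int i = 0; i < cycles; i++) {
        char *buf = arena_alloc(a, per_cycle);
        if (buf) memset(buf, 0, per_cycle);
        if (a->used > peak) peak = a->used;
        arena_clear(a);          /* the fix: clear every request cycle */
    }
    return peak;
}
```

With the clear in place, 100,000 cycles peak at one cycle's allocation; without it, the same loop would exhaust the slab almost immediately.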
Other features that need writing:

    * More analysis and graphing scripts are needed.
    * Write a robust tool (using tethereal perhaps) to take network
      dumps and convert them to flood's XML format.
      Status: Justin volunteers. Aaron had a script somewhere that is a
      start. Jacek is working on a Mozilla application, codename "Flood
      URL bag" (much like Live HTTP Headers), and a small HTTP proxy.
    * Get chunked encoding support working.
      Status: Justin volunteers. He got sidetracked by the httpd
      implementation of input filtering and never finished this. This
      is a stopgap until apr-serf is completed.
    * Maybe we should make randfile and capath runtime directives that
      come out of the XML, instead of autoconf parameters.
    * We are using apr_os_thread_current() and getpid() in some places
      when what we really want is a GUID. The GUID will be used to
      correlate raw output data with each farmer. We may wish to print
      a unique ID for each of farm, farmer, profile, and url to help in
      postprocessing.
    * We are using strtol() in some places and strtoll() in others.
      Pick one (Aaron says strtol(), but he's not sure).
    * Validation of responses (known C-L, specific strings in response)
      Status: Justin volunteers.
    * HTTP error codes (i.e., teach it about 302s)
      Justin says: Yeah, this won't be with round_robin as implemented.
      Need a linked-list-based profile where we can insert new URLs
      into the sequence.
    * Farmer (single thread, multiple profiles)
      Status: Aaron says: If you have threads, then any Farmer can be
      run as part of any Farm. If you don't have threads, you can
      currently only run one Farmer named Joe right now (this will be
      changed so that if you don't have threads, flood will attempt to
      run all Farmers in serial under one process).
    * Collective (single host, multiple farms)
      This is a number of Farms that have been fork()ed into child
      processes.
    * Megaconglomerate (multiple hosts, each running a Collective)
      This is a number of Collectives running on a number of hosts,
      invoked via RSH/SSH or maybe even some proprietary mechanism.
    * Other types of urllists
      a) Random / random-weighted
      b) Sequenced (useful with cookie propagation)
      c) Round-robin
      d) Chaining of the above strategies
      Status: Round-robin is complete.
    * Other types of reports
      Status: Aaron says: simple reports are functional. Justin added a
      new type that simply prints the approximate timestamp when the
      test was run and the result as OK/FAIL; it is called "easy
      reports" (see flood_easy_reports.h). Furthermore,
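The urllist strategies listed above come down to a selection function over an array of URLs. A sketch of the completed round-robin strategy and the planned random-weighted one, with hypothetical function names not taken from flood's source:

```c
/* Round-robin: cycle through the list in order. `cursor` holds the
 * next index to hand out; hypothetical helper, not flood's code. */
int round_robin_next(int *cursor, int nurls) {
    int i = *cursor;
    *cursor = (*cursor + 1) % nurls;
    return i;
}

/* Random-weighted: pick index i with probability weight[i]/sum(weights).
 * `r` is a random draw in [0, sum_of_weights), e.g. rand() % total. */
int weighted_pick(const int *weight, int nurls, int r) {
    for (int i = 0; i < nurls; i++) {
        if (r < weight[i]) return i;
        r -= weight[i];
    }
    return nurls - 1;  /* defensive: only reached if r >= total weight */
}
```

Chaining the strategies (item d) would simply mean one selector's output feeding another's input list.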
[STATUS] (perl-framework) Wed Jun 9 23:46:11 EDT 2004
httpd-test/perl-framework STATUS:                       -*-text-*-
Last modified at [$Date: 2002/03/09 05:22:48 $]

Stuff to do:

    * finish the t/TEST exit code issue (ORed with 0x2C if framework
      failed)
    * change existing tests that frob the DocumentRoot (e.g.,
      t/modules/access.t) to *not* do that; instead, have Makefile.PL
      prepare appropriate subdirectory configs for them. Why? So t/TEST
      can be used to test a remote server.
    * problems with -d perl mode, doesn't work as documented
      Message-ID: [EMAIL PROTECTED]
      Date: Sat, 20 Oct 2001 12:58:33 +0800
      Subject: Re: perldb

Tests to be written:

    * t/apache
      - simulations of network failures (incomplete POST bodies,
        chunked and unchunked; missing POST bodies; slooow client
        connexions, such as taking 1 minute to send 1KiB; ...)
    * t/modules/autoindex
      - something seems possibly broken with inheritance on 2.0
    * t/ssl
      - SSLPassPhraseDialog exec:
      - SSLRandomSeed exec:
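The first to-do item says t/TEST should OR its exit code with 0x2C when the framework itself fails, so callers can tell "tests failed" apart from "the harness broke". A minimal sketch of that flag-bit idea; the function name and low-bit convention are assumptions, not the framework's actual code:

```c
/* Hypothetical sketch of the t/TEST exit-code idea: the low bit carries
 * the test result, and 0x2C is ORed in when the harness itself failed,
 * so the two failure modes are distinguishable from the exit status. */
#define FRAMEWORK_FAILED_FLAG 0x2C

int harness_exit_code(int tests_failed, int framework_failed) {
    int code = tests_failed ? 1 : 0;
    if (framework_failed)
        code |= FRAMEWORK_FAILED_FLAG;
    return code;
}
```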
Re: switching t_cmp() argument order
Geoffrey Young wrote:

But it's quite possible that argument could be readonly while not a string, a simple example is a return value of a function:

  % perl -le 'a(b(), b); sub a {($_[0], $_[1]) = ($_[1], $_[0]);}; \
    sub b { 5 }'
  Modification of a read-only value attempted at -e line 1.

I think you need to revisit that example :)

I fail to see what you mean.

ok, the attached patch reflects that.

excellent! the only remaining nit is the deprecation cycle, let's say we happen to release the next few versions within this month, then you hit 1.15 really soon.

I think it's a matter of time and not release numbers. So maybe it's better to say, let's give people some 3 months to move over and set a certain date as a cutoff rather than a version number? Just an idea...

--
__ Stas Bekman JAm_pH -- Just Another mod_perl Hacker
http://stason.org/ mod_perl Guide --- http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org http://ticketmaster.com
Re: switching t_cmp() argument order
Geoffrey Young wrote:

Stas Bekman wrote:

Geoffrey Young wrote:

But it's quite possible that argument could be readonly while not a string, a simple example is a return value of a function:

  % perl -le 'a(b(), b); sub a {($_[0], $_[1]) = ($_[1], $_[0]);}; \
    sub b { 5 }'
  Modification of a read-only value attempted at -e line 1.

I think you need to revisit that example :)

I fail to see what you mean.

  perl -le 'a(b(), c()); sub a {($_[0], $_[1]) = ($_[1], $_[0]);}; \
    sub b{5}; sub c{6};

I still don't get it. What are you trying to say?

ok, the attached patch reflects that.

excellent! the only remaining nit is the deprecation cycle, let's say we happen to release the next few versions within this month, then you hit 1.15 really soon.

I think it's a matter of time and not release numbers. So maybe it's better to say, let's give people some 3 months to move over and set a certain date as a cutoff rather than a version number? Just an idea...

sure, we could do that, but then the cutoff isn't really clear. I think that three revisions will get us through at least another mod_perl release, when people are perhaps likely to glance at the Changes file. but if we need more time I think we can take it. if the deprecation cycle is very long (like 3 new releases takes us a year) I don't think that's necessarily a bad thing.

Not for Apache-Test. There will be lots of new mp2 releases in the next few months to come. Once the API is completed and a few remaining release issues resolved we will start doing release candidates. So as long as we do at least one tweak to A-T between these releases I promise you A-T 1.15 before the end of the summer.

but either way is fine. I just didn't want it removed in, say, the release after the next one.

just add a line like "remove after Sep 15th" or so. and maybe take it into if ($deprecated) { ... } so it's easy to remove the whole branch at once without thinking too much.
Re: switching t_cmp() argument order
Geoffrey Young wrote:

just add a line like "remove after Sep 15th" or so. and maybe take it into if ($deprecated) { ... } so it's easy to remove the whole branch at once without thinking too much.

the logic was already wrapped in an if block, so I just made a note that we want it removed by mid September and left it at that. committed, but feel free to tweak it more if you like.

no, it's fine, thanks for the input :)

;)
Re: [PATCH] mod_deflate + mod_proxy bug
On Wed, Jun 09, 2004 at 05:23:38PM -0400, Allan Edwards wrote:

Running ProxyPass with mod_deflate results in an extraneous 20 bytes being tacked onto 304 responses from the backend. The problem is that mod_deflate doesn't handle the zero-byte body: it adds the gzip header and tries to compress 0 bytes. This patch detects the fact that there was no data to compress and removes the gzip header from the bucket brigade.

Wouldn't it be simpler to just check for a brigade containing just EOS and do nothing for that case in the first place? But the fact that the proxy passes such a brigade through the output filters in the first place sounds like the real bug; it doesn't happen for a non-proxied 304 response.

joe
Re: [PATCH] mod_deflate + mod_proxy bug
On Wed, 9 Jun 2004, Allan Edwards wrote:

Running ProxyPass with mod_deflate results in an extraneous 20 bytes being tacked onto 304 responses from the backend. The problem is that mod_deflate doesn't handle the zero-byte body: it adds the gzip header and tries to compress 0 bytes. This patch detects the fact that there was no data to compress and removes the gzip header from the bucket brigade. Any comments before I commit to head?

This is part of a slightly broader problem with proxying and mod_deflate: it'll also waste time gzipping already-compressed data from the backend in those cases where the compression is not explicitly indicated in the Content-Encoding header. Obvious examples are all the main image formats. I'm currently running a hack that works around this, and planning a better review when time permits (i.e. when I've caught up with things after http://www.theatreroyal.com/showpage.php?dd=1&theid=2578 which now has three nights left to run).

More interesting is the entire subject of filtering in a dynamic context such as a proxy. The directives available to control filtering are simply not up to it. Watch this space :-)

-- Nick Kew
Re: util_ldap [Bug 29217] - Remove references to calloc() and free()
Brad Nicholes wrote:

I guess that is a possibility, but I still don't understand what the problem is with using calloc() and free() for the ldap caching code. This seems to be a common thing to do when global memory needs to be allocated and deallocated constantly.

To avoid having the memory grow uncontrollably, you have to be able to control it at a much finer level than apr_pool allows you. What I've found in the LDAP code is that it isn't very defensive: most of the code simply assumes the rest of the code worked. It has resulted in me finding all sorts of side problems in the code, but not the real problem - the false negatives the code reports after it has been idle for a long time. Changing malloc and free into something more bulletproof (or at least more robust), like pools and/or the reslist stuff, would make the code more resilient, and for me, easier to debug :)

Regards, Graham --
CAN-2004-0492 mod_proxy security issue
A security issue has been reported in mod_proxy. See http://www.guninski.com/modproxy1.html

The flaw affects Apache httpd 1.3.25 to 1.3.31 that have mod_proxy enabled and configured. Apache httpd 2.0 is unaffected.

The security issue is a buffer overflow which can be triggered by getting mod_proxy to connect to a remote server which returns an invalid (negative) Content-Length. This results in a memcpy to the heap with a large length value, which will in most cases cause the Apache child to crash. This does not represent a significant Denial of Service attack, as requests will continue to be handled by other Apache child processes.

Under some circumstances it may be possible to exploit this issue to cause arbitrary code execution. However, an attacker would need to get an Apache installation that was configured as a proxy to connect to a malicious site.

1. On older OpenBSD/FreeBSD distributions it is easily exploitable because of the internal implementation of memcpy, which rereads its length from the stack.

2. On newer BSD distributions it may be exploitable because the implementation of memcpy will write three arbitrary bytes to an attacker-controlled location.

3. It may be exploitable on any platform if the optional (and not default) define AP_ENABLE_EXCEPTION_HOOK is enabled. This is used for example by the experimental mod_whatkilledus module.

In all other circumstances this issue looks to be unexploitable other than to crash the Apache child that is processing the proxy request.

A patch to correct this issue is attached. Note that the reporter of this issue contacted [EMAIL PROTECTED] on June 8th but was unwilling to keep the issue private until the investigation was completed or a new Apache release was available. He published his advisory on June 10th.

Mark

-- Mark J Cox ... www.awe.com/mark Apache Software Foundation . OpenSSL Group .
Apache Week editor

Index: src/CHANGES
===================================================================
RCS file: /home/cvs/apache-1.3/src/CHANGES,v
retrieving revision 1.1942
diff -u -p -u -r1.1942 CHANGES
--- src/CHANGES	2 Jun 2004 22:49:03 -0000	1.1942
+++ src/CHANGES	9 Jun 2004 15:58:44 -0000
@@ -1,5 +1,9 @@
 Changes with Apache 1.3.32
 
+  *) SECURITY: CAN-2004-0492 (cve.mitre.org)
+     Reject responses from a remote server if sent an invalid (negative)
+     Content-Length. [Mark Cox]
+
   *) Fix a bunch of cases where the return code of the regex compiler
      was not checked properly. This affects mod_usertrack and core.
      PR 28218. [André Malo]

Index: src/modules/proxy/proxy_http.c
===================================================================
RCS file: /home/cvs/apache-1.3/src/modules/proxy/proxy_http.c,v
retrieving revision 1.106
diff -u -p -u -r1.106 proxy_http.c
--- src/modules/proxy/proxy_http.c	29 Mar 2004 17:47:15 -0000	1.106
+++ src/modules/proxy/proxy_http.c	8 Jun 2004 14:23:05 -0000
@@ -485,6 +485,13 @@ int ap_proxy_http_handler(request_rec *r
         content_length = ap_table_get(resp_hdrs, "Content-Length");
         if (content_length != NULL) {
             c->len = ap_strtol(content_length, NULL, 10);
+
+            if (c->len < 0) {
+                ap_kill_timeout(r);
+                return ap_proxyerror(r, HTTP_BAD_GATEWAY,
+                                     ap_pstrcat(r->pool,
+                                     "Invalid Content-Length from remote server",
+                                     NULL));
+            }
         }
     }
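The patch's core check can be shown as a standalone analogue: parse the Content-Length header with strtol and reject anything that is not a plain non-negative decimal, so a negative value can never reach memcpy as a huge unsigned length. The helper name is hypothetical; this is not the 1.3 proxy code itself.

```c
#include <stdlib.h>
#include <errno.h>

/* Parse a Content-Length header value; return -1 for anything that is
 * not a plain non-negative decimal number. Hypothetical helper in the
 * spirit of the CAN-2004-0492 fix: never trust a negative length from
 * a remote server. */
long parse_content_length(const char *s) {
    char *end;
    errno = 0;
    long len = strtol(s, &end, 10);
    if (errno != 0 || end == s || *end != '\0' || len < 0)
        return -1;  /* reject: would otherwise become a huge memcpy length */
    return len;
}
```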
Re: util_ldap [Bug 29217] - Remove references to calloc() and free()
I agree that the LDAP code needs to do more error checking, and between the work that you did, the holes that I plugged previously, and the shared memory fixes that Mathieu Estrade did, I think the code is much more robust than it has ever been. Many of our NetWare customers use auth_ldap as the primary means of authentication because it is the easiest way to authenticate against eDirectory. Outside of the shared memory problems, since NetWare doesn't use it, we have run into the same issues that you have seen. But since these latest patches that have gone into 2.0.50-dev, those problems have gone away.

At least on NetWare, switching to pools would make the code much more complex. Rather than simply calling calloc and free in the same way that we are calling apr_rmm_calloc() and apr_rmm_free(), we would have to implement essentially the same model using pools and reslists. It seems to me like using a sledgehammer to drive a finishing nail in this instance. Also, in the end it all boils down to malloc and free anyway.

As far as debugging goes, I can understand why it might be easier using the pool debug code, but we have never been successful in making the pool debug code work on NetWare. Granted, we probably haven't tried real hard, mainly because NetWare already has some good memory debugging capabilities built into the OS. If debugging is a problem, I think it might be easier to implement some memory debugging code specifically for the LDAP_cache rather than trying to retrofit it with pools and reslists.

Brad

Brad Nicholes Senior Software Engineer Novell, Inc., the leading provider of Net business solutions http://www.novell.com
Re: [PATCH] mod_deflate + mod_proxy bug
Joe Orton wrote:

Wouldn't it be simpler to just check for a brigade containing just EOS and do nothing for that case in the first place?

Yes, I had considered that. The initial patch covered some pathological cases, but after having looked over the code some more I think the simpler, more efficient way of bailing on just EOS is sufficient.

But the fact that the proxy passes such a brigade through the output filters in the first place sounds like the real bug, it doesn't happen for a non-proxied 304 response.

Whether or not this is a bug I guess is open for debate. What happens for non-proxied error responses is that ap_send_error_response resets r->output_filters to r->proto_output_filters, so the deflate filter is taken out of the response path. In the proxy path there is no such logic for error responses, so error pages would get zipped. But I don't know that this violates the RFC, and browsers seem to be able to handle it.

Allan
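Joe's suggested early exit - do nothing when the brigade carries no data - can be modeled without APR. The toy bucket types below are hypothetical stand-ins for apr_bucket/apr_bucket_brigade, just to show the shape of the check a filter like mod_deflate would make before emitting a gzip header.

```c
#include <stddef.h>

/* Toy model of a bucket brigade: a linked list of buckets, where a
 * zero-length EOS bucket marks end-of-stream. Hypothetical types, not
 * APR's, used to illustrate the "only EOS -> do nothing" early exit. */
typedef enum { BUCKET_DATA, BUCKET_EOS } bucket_type;

typedef struct bucket {
    bucket_type type;
    size_t len;
    struct bucket *next;
} bucket;

/* Returns 1 when the brigade carries no data at all (empty, or nothing
 * but EOS/zero-length buckets), in which case a compression filter
 * should pass it through untouched instead of gzipping 0 bytes. */
int brigade_is_empty_of_data(const bucket *b) {
    for (; b != NULL; b = b->next) {
        if (b->type == BUCKET_DATA && b->len > 0)
            return 0;
    }
    return 1;
}
```

In the 304 case described above, the proxied brigade would contain only an EOS bucket, so the check fires and no gzip header (the extraneous 20 bytes) is ever written.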
Re: util_ldap [Bug 29217] - Remove references to calloc() and free()
Brad Nicholes wrote:

At least on NetWare, switching to pools would make the code much more complex. Rather than simply calling calloc and free in the same way that we are calling apr_rmm_calloc() and apr_rmm_free(), we would have to implement essentially the same model using pools and reslists. It seems to me like using a sledgehammer to drive a finishing nail in this instance. Also, in the end it all boils down to malloc and free anyway. As far as debugging goes, I can understand why it might be easier using the pool debug code, but we have never been successful in making the pool debug code work on NetWare. Granted, we probably haven't tried real hard, mainly because NetWare already has some good memory debugging capabilities built into the OS. If debugging is a problem, I think it might be easier to implement some memory debugging code specifically for the LDAP_cache rather than trying to retrofit it with pools and reslists.

The theory is that the pools code has hopefully already been pre-debugged: you can allocate memory from a pool and be reasonably sure that the problem of freeing the memory is handled for you. (If this is not the case, it won't be an LDAP bug, but an Apache-wide bug.) The only real issue is worrying about the scope of the pool.

Thinking further about this, we could use one pool per cache entry. To delete that cache entry just means to destroy the pool. No more need to walk the cache entry and delete each buffer one by one, and no room to make mistakes. No more chance that somebody adds a field to the cache entry and then forgets the code to free the cache entry. Creation of the pool would only be done on creation of the cache entry, which in turn is done only the first time this user is authenticated, all further requests being cached, so it doesn't seem to be expensive either - unless someone with a better understanding of the internals of pools can say whether this idea is good or bad.

Regards, Graham --
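Graham's pool-per-entry idea can be sketched in plain C: each cache entry owns a "pool" that records every allocation made on the entry's behalf, and destroying the pool frees them all in one call, so adding a new field can never introduce a leak. The entry_pool type and names are hypothetical, not util_ldap's code.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical per-entry "pool": a chain of allocations owned by one
 * cache entry. Destroying the pool frees everything the entry ever
 * allocated -- no need to remember to free each field individually. */
typedef struct chunk { struct chunk *next; } chunk;
typedef struct { chunk *head; int nallocs; } entry_pool;

void *pool_alloc(entry_pool *p, size_t n) {
    chunk *c = malloc(sizeof(chunk) + n);
    c->next = p->head;
    p->head = c;
    p->nallocs++;
    return c + 1;  /* user memory follows the chain header */
}

char *pool_strdup(entry_pool *p, const char *s) {
    char *d = pool_alloc(p, strlen(s) + 1);
    strcpy(d, s);
    return d;
}

/* One call tears down the whole entry, however many fields it grew. */
void pool_destroy(entry_pool *p) {
    while (p->head) {
        chunk *c = p->head;
        p->head = c->next;
        free(c);
        p->nallocs--;
    }
}
```

This is the property being argued over: with calloc/free, deleting an entry means remembering every field; with a pool, deletion is a single destroy regardless of the entry's shape.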
Re: util_ldap [Bug 29217] - Remove references to calloc() and free()
I have considered using a pool per entry in other caching code that I have written, just so that I could have much finer control over allocating and freeing the memory. But in the end what it really comes down to is malloc and free, and if that is what you really want, then why not just use malloc and free. Pools just add a layer of memory management between your application and the actual allocation that may be of no use. You have simply replaced malloc() with apr_pool_create() and free() with apr_pool_destroy(). If you forget to call apr_pool_destroy(), you have a memory leak in the same way as you would if you forget to call free(). When the application shuts down, the pool management will clean up the memory, but then so will the process management of the OS. Outside of some debugging help during development, converting to pools and reslists is just adding a lot of unneeded overhead.

The advantage to using memory pools is in situations where you need a pool of memory that can be created, pieced out during a specific operation, and then completely cleared once the operation is complete and reused without having to recreate it. Global caches just don't lend themselves to this model. The pieces can come and go, but you can never really clear the whole thing out and reuse it, because the operation never actually ends.

Brad
Re: 1.3.31 regression affecting Front Page?
On Wed, 9 Jun 2004, Jim Jagielski wrote:

On Jun 9, 2004, at 3:24 PM, Rasmus Lerdorf wrote: I guess what we are agreeing on here is that the logic that sets keepalive to 0 is faulty and that is probably where the real fix lies.

yeah... it's pretty inconsistent. Looking at ap_set_keepalive, even after we know the connection will be closed, we set keepalive to 0, for example.

Ok, I had a closer look at the flow. There are 6 main cases I care about: static, dynamic and error requests, for configs KeepAlive=Off and KeepAlive=On. Here is what happens:

With KeepAlive On:

For both static and dynamic requests the flow is similar. connection->keepalive starts at 0 at the beginning of the request; process_request_internal eventually leads to an ap_send_http_header call, which calls ap_set_keepalive, which determines that the config has KeepAlive on and sets connection->keepalive to 1.

For an error, like a 404, it is different. We start off with connection->keepalive being set to 0; process_internal_request calls ap_die on the error, which calls ap_send_error_response, which in turn calls ap_send_http_header, which finally sets connection->keepalive to 1. But this happens after ap_die checks conn->keepalive to determine whether to discard the request body or not.

With KeepAlive Off:

The picture is as above, except ap_set_keepalive called from ap_send_http_header sets connection->keepalive to 0 instead of 1. So for the duration of the request, before and after checking whether we are on a keepalive connection, connection->keepalive is 0.

The summary here is that checking connection->keepalive before ap_set_keepalive() has been called really doesn't make any sense. And we can't just call ap_set_keepalive in ap_die before the check, because it would end up getting called twice and there is no guard against that in it. It would double-count the request in the keepalives counter.
We need to either call ap_set_keepalive earlier on, like in process_request_internal before we hit ap_die, or we need to add a double-call guard in it and just add a call in ap_die before the keepalive check. Another alternative would be to clean up this mess which has our undecided state indistinguishable from our disabled state. Checking for 0 in ap_die is only wrong because the check is before the ap_set_keepalive call. The meaning of that 0 changes on that call. -Rasmus
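Rasmus's last point - the undecided state is indistinguishable from the disabled state - is exactly the kind of problem a tri-state value solves. A minimal sketch of that cleanup, with hypothetical names (this is not httpd's actual type):

```c
/* Hypothetical tri-state for conn->keepalive, so "not yet decided" is
 * distinguishable from "decided: close" -- the ambiguity described
 * above, where ap_die reads the flag before ap_set_keepalive has run. */
typedef enum {
    KEEPALIVE_UNSET = -1,   /* ap_set_keepalive has not run yet       */
    KEEPALIVE_OFF   =  0,   /* decided: close after this response     */
    KEEPALIVE_ON    =  1    /* decided: hold the connection open      */
} keepalive_state;

/* A check like ap_die's would only trust a decided ON value;
 * UNSET is no longer silently treated as OFF. */
int may_discard_request_body(keepalive_state ks) {
    return ks == KEEPALIVE_ON;
}

int keepalive_is_decided(keepalive_state ks) {
    return ks != KEEPALIVE_UNSET;
}
```

With this shape, code that runs before ap_set_keepalive can either refuse to decide or trigger the deferred call, instead of misreading 0 as "keepalive disabled".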
Re: util_ldap [Bug 29217] - Remove references to calloc() and free()
Brad Nicholes wrote:

Do we even know that there is a problem with this code? I haven't seen any memory leaks so far. I would hate to go to all of the work to redesign and rewrite the ldap_cache manager for little to no gain.

It does not seem to handle the "we ran out of memory while trying to add to the cache" case very gracefully. It doesn't crash any more, but I'm experiencing false negatives still, which is the problem I was trying to fix when I started trying to fix the code.

Regards, Graham --
Re: util_ldap [Bug 29217] - Remove references to calloc() and free()
It appears to me that if it doesn't handle low memory situations or it is giving false positives, those are separate issues from pools vs. calloc/free. I still think we need to implement some better monitoring or logging code in the cache_mgr and enhance the cache-status pages so that we can track things like false positives. Maybe tracking the entries by user name and authentication state rather than just the number of entries and how often the cache was hit.

Maybe the real problem is with the locking. In fact, just taking a quick scan through the code again, I am seeing something that bothers me in util_ldap_cache_comparedn():

    if (curl) {
        /* compare successful - add to the compare cache */
        LDAP_CACHE_RDLOCK();
        newnode.reqdn = (char *)reqdn;
        newnode.dn = (char *)dn;
        util_ald_cache_insert(curl->dn_compare_cache, &newnode);
        LDAP_CACHE_UNLOCK();
    }

It appears to be acquiring a read lock but then inserts a new node into the cache. Shouldn't it be acquiring a write lock before doing an insert?

Brad
Accept Filter Backport Request
Back in February I submitted a patch to use the httpready accept filter, and it was committed to 2.1:

http://cvs.apache.org/viewcvs.cgi/httpd-2.0/server/listen.c?r1=1.95&r2=1.96

Pretty simple change, is there any chance someone can start a vote on getting it backported to the 2.0 Branch?

Thanks,

- Paul Querna
Re: util_ldap [Bug 29217] - Remove references to calloc() and free()
In fact, I don't think that these are shared locks at all:

    #define LDAP_CACHE_LOCK_CREATE(p) \
        if (!st->util_ldap_cache_lock) \
            apr_thread_rwlock_create(&st->util_ldap_cache_lock, st->pool)

which means that in the shared memory cache, it is highly likely that multiple processes could be altering the cache at the same time. True? Since NetWare is multi-threaded only, we never see this problem.

Brad
Re: Accept Filter Backport Request
On Thu, 10 Jun 2004, Paul Querna wrote:

> Pretty simple change, is there any chance someone can start a vote on
> getting it backported to the 2.0 Branch?

done. i'll review it myself sometime tonight.
Re: dynamic hook ordering
Cliff Woolley wrote:

> On Wed, 9 Jun 2004, Geoffrey Young wrote:
>
>> I wanted to ping everyone about an idea I've been throwing around
>> for a few months now. I'd like the ability to shuffle the declared
>> hook ordering around, most likely during the post-config phase.
>
> There was some discussion about this or something at least vaguely
> like it a while back, but nobody ever got around to implementing it.

ok, here's a first pass at just a small part - achieve the hook listing
by offering an apr_table_do()-esque iterator just for hooks. the output
of httpd -o (for hOok, I guess) looks something like this:

  Registered Hooks:
    Pre-MPM
      core.c (10)
    ...
    Open Logs
      prefork.c (10)
      core.c (-10)
      mod_log_config.c (10)
    ...
    Map-to-Storage
      mod_proxy.c (0)
      http_core.c (10)
      core.c (30)

etc., where the number in parentheses is the (untranslated) APR_HOOK_*
value.

this is obviously a work in progress (and perhaps ugly as well), so
comments on all aspects are very, very welcome. the next step would be
to make mod_info use the new hook iterator and pull out the logic that
was mostly stolen from there.
but I'll wait for feedback on what I have so far before doing that, as
well as stuff like ap_hook_order_set() or somesuch :)

--Geoff

Index: NWGNUmakefile
===================================================================
RCS file: /home/cvs/httpd-2.0/NWGNUmakefile,v
retrieving revision 1.25
diff -u -r1.25 NWGNUmakefile
--- NWGNUmakefile	1 Jun 2004 17:48:21 -0000	1.25
+++ NWGNUmakefile	11 Jun 2004 04:33:46 -0000
@@ -238,6 +238,7 @@
 	$(OBJDIR)/util_md5.o \
 	$(OBJDIR)/util_nw.o \
 	$(OBJDIR)/util_script.o \
+	$(OBJDIR)/util_hook.o \
 	$(OBJDIR)/util_time.o \
 	$(OBJDIR)/util_xml.o \
 	$(OBJDIR)/vhost.o \
Index: build/nw_export.inc
===================================================================
RCS file: /home/cvs/httpd-2.0/build/nw_export.inc,v
retrieving revision 1.5
diff -u -r1.5 nw_export.inc
--- build/nw_export.inc	20 Jan 2003 21:38:49 -0000	1.5
+++ build/nw_export.inc	11 Jun 2004 04:34:03 -0000
@@ -42,6 +42,7 @@
 /*#include "util_ldap.h"*/
 #include "util_md5.h"
 #include "util_script.h"
+#include "util_hook.h"
 #include "util_time.h"
 #include "util_xml.h"
Index: include/http_config.h
===================================================================
RCS file: /home/cvs/httpd-2.0/include/http_config.h,v
retrieving revision 1.110
diff -u -r1.110 http_config.h
--- include/http_config.h	4 Jun 2004 22:40:46 -0000	1.110
+++ include/http_config.h	11 Jun 2004 04:34:06 -0000
@@ -768,6 +768,11 @@
 AP_DECLARE(void) ap_show_modules(void);

 /**
+ * Show registered hooks. Used for httpd -k.
+ */
+AP_DECLARE(void) ap_show_hooks(void);
+
+/**
  * Show the MPM name. Used in reporting modules such as mod_info to
  * provide extra information to the user
  */
Index: include/http_main.h
===================================================================
RCS file: /home/cvs/httpd-2.0/include/http_main.h,v
retrieving revision 1.30
diff -u -r1.30 http_main.h
--- include/http_main.h	9 Feb 2004 20:38:21 -0000	1.30
+++ include/http_main.h	11 Jun 2004 04:34:08 -0000
@@ -22,7 +22,7 @@
  * in apr_getopt() format. Use this for default'ing args that the MPM
  * can safely ignore and pass on from its rewrite_args() handler.
  */
-#define AP_SERVER_BASEARGS "C:c:D:d:E:e:f:vVlLtSh?X"
+#define AP_SERVER_BASEARGS "C:c:D:d:E:e:f:vVloLtSh?X"

 #ifdef __cplusplus
 extern "C" {
Index: server/Makefile.in
===================================================================
RCS file: /home/cvs/httpd-2.0/server/Makefile.in,v
retrieving revision 1.94
diff -u -r1.94 Makefile.in
--- server/Makefile.in	15 Mar 2004 21:49:35 -0000	1.94
+++ server/Makefile.in	11 Jun 2004 04:36:29 -0000
@@ -9,7 +9,7 @@
 LTLIBRARY_SOURCES = \
 	test_char.h \
 	config.c log.c main.c vhost.c util.c \
-	util_script.c util_md5.c util_cfgtree.c util_ebcdic.c util_time.c \
+	util_script.c util_md5.c util_cfgtree.c util_ebcdic.c util_time.c util_hook.c \
 	connection.c listen.c \
 	mpm_common.c util_charset.c util_debug.c util_xml.c \
 	util_filter.c exports.c buildmark.c \
@@ -66,6 +66,8 @@
 export_vars.h: export_files
 	$(AWK) -f $(top_srcdir)/build/make_var_export.awk `cat $?` > $@
+
+util_hook.c: httpd.exp

 # Rule to make def file for OS/2 core dll
 ApacheCoreOS2.def: exports.c export_vars.h $(top_srcdir)/os/$(OS_DIR)/core_header.def
Index: server/config.c
===================================================================
RCS file: /home/cvs/httpd-2.0/server/config.c,v
retrieving revision 1.177
diff -u -r1.177 config.c
--- server/config.c	25 Apr 2004 17:23:31 -0000	1.177
+++ server/config.c	11 Jun 2004 04:36:32 -0000
@@ -50,6 +50,7 @@
 #include "http_main.h"
 #include "http_vhost.h"
 #include "util_cfgtree.h"
+#include "util_hook.h"
 #include "mpm.h"
@@ -2047,4 +2048,28 @@
 AP_DECLARE(const char *) ap_show_mpm(void)
 {
     return MPM_NAME;
+}
+
+static int list_hooks(char **phase, ap_hook_struct_t *hook)
+{
+    if (strcmp(*phase, hook->desc)) {
+        printf(