Re: ServerLimit, MaxClients
Daniel Lopez [EMAIL PROTECTED] writes: Ok, now it makes sense :) I suggest your explanation gets added to the docs and linked as a See also for all the directives involved. Okay, I'll try to do that. Also, if we can point out that the same controls always existed (though you had to recompile Apache instead of just tweak the config file), perhaps the largest set of people won't fret over it (I never had to change HARD_SERVER_LIMIT so I guess I don't care about ServerLimit). -- Jeff Trawick | [EMAIL PROTECTED] | PGP public key at web site: http://www.geocities.com/SiliconValley/Park/9289/ Born in Roswell... married an alien...
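For readers who only ever knew the compile-time HARD_SERVER_LIMIT, a hypothetical httpd.conf fragment showing the run-time controls this thread is about (the numbers are made up for illustration, not recommended values or defaults):

```apache
# Illustrative values only - not defaults.
# ServerLimit now plays the role the compile-time HARD_SERVER_LIMIT
# used to play: it is the ceiling that MaxClients may not exceed.
ServerLimit  300
MaxClients   250
```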
Re: cvs commit: httpd-2.0/modules/proxy config.m4
Victor J. Orlikowski [EMAIL PROTECTED] writes: The issue is as follows: mod_proxy_{connect,ftp,http} all depend on symbols in mod_proxy. Run-time linking (as added by Jeff) fixes the problem for most modules (including DAV and DAV_fs, which depends on DAV), but proxy is still being the odd man out, for whatever reason. This is (likely) not libtool's fault this time (I can't believe I just typed that), but the issue has not yet been properly diagnosed. Is it possible that we're getting bit by make_exports.awk not being able to grok declarations that span lines? (just a wild guess) What is the symptom? link failure? segfault? However, I believe Jeff is speaking to the libtool guys re: their handling of shared {libraries,objects} on AIX, w.r.t. other issues. I took a break from that trying to get Apache to deal with the currently-available libtool version. We're not done yet, though hopefully we're closing in on it. Hopefully I can get some time to help identify some needed changes to libtool for AIX. Note that I'm *extremely* uninterested in any sort of situation where standard libtool can't be used to build Apache on AIX. I suspect Justin is interested in the library dependency thing that failed on AIX back before we were using rtl. I posted something to [EMAIL PROTECTED] a few days back showing the results when I tried to do the same thing after we switched to rtl. -- Jeff Trawick | [EMAIL PROTECTED] | PGP public key at web site: http://www.geocities.com/SiliconValley/Park/9289/ Born in Roswell... married an alien...
Re: cvs commit: httpd-2.0/server/mpm/perchild mpm.h perchild.c
[EMAIL PROTECTED] writes: rbb 01/12/19 09:50:39 Modified:server/mpm/perchild mpm.h perchild.c Log: This gets perchild compiling and serving pages again. It does NOT pass file descriptors yet. That is a much bigger project. cool... I'll try to get the dynamic scoreboard sizing into perchild in the next day or so... -- Jeff Trawick | [EMAIL PROTECTED] | PGP public key at web site: http://www.geocities.com/SiliconValley/Park/9289/ Born in Roswell... married an alien...
Re: cvs commit: httpd-2.0/server/mpm/worker mpm_default.h worker.c
On 19 Dec 2001 10:11:09 -0500, Jeff Trawick wrote: Jeff Trawick [EMAIL PROTECTED] writes: Greg Ames [EMAIL PROTECTED] writes: Hmmm... (2nd thoughts :) ) mpm_default.h exists so people can edit default settings in one nice place... it wasn't nice for me to move these things out of mpm_defaults.h... I'll move them back in there... hopefully these settings won't be abused by modules and instead modules will call ap_mpm_query() Of course we don't want admins editing .c files as a rule. That would be a step backwards. But if our goal is to put these params in the config file, we can live with the limits in the MPM .c files for a day or two. okay, let's defer any possible movement at least until we see what MPMs folks want to make smarter... At this point, prefork and worker don't even have something like HARD_SERVER_LIMIT/HARD_THREAD_LIMIT, FirstBill is going to do the same thing for the WinNT MPM, I will do the same thing for perchild once somebody gets the darn thing to compile/run, and I'll send a query in a minute to the maintainers of the other MPMs asking whether:

a) they want to add the directives to their MPM themselves, such that this issue is moot
b) they want me to move the defines for their MPM back to mpm_default.h where other user-tunable MPM defines are (but for a single-process multiple-thread MPM I would recommend *NOT* moving HARD_SERVER_LIMIT back there :) )
c) they want me to fix the log messages for their MPM that refer to those defines so that they point to the right file to edit

I'm happy to get the OS/2 MPM fixed up with the directives when I get a bit of time to spare. BTW, forget spmt_os2, it's defunct. I just haven't got around to deleting it.
-- 
__ | Brian Havard | He is not the messiah! | | [EMAIL PROTECTED] | He's a very naughty boy! - Life of Brian | --
Re: cvs commit: httpd-2.0/server Makefile.in
[EMAIL PROTECTED] wrote:

    -/^[ \t]*AP[RU]?_DECLARE[^(]*[(][^)]*[)]([^ ]* )*[^(]+[(]/ {
    -    sub("[ \t]*AP[RU]?_DECLARE[^(]*[(][^)]*[)][ \t]*", "")
    +/^[ \t]*AP[RU]?_(CORE_)?DECLARE[^(]*[(][^)]*[)]([^ ]* )*[^(]+[(]/ {
    +    sub("[ \t]*AP[RU]?_(CORE_)?DECLARE[^(]*[(][^)]*[)][ \t]*", "")
         sub("[(].*", "")
         sub("([^ ]* (^([ \t]*[(])))+", "")

I hadn't looked at it closely before, but that is code with hair on it! :-)
-- 
#ken P-)}
Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist      http://Apache-Server.Com/
All right everyone! Step away from the glowing hamburger!
Re: [PATCH] get mod_ssl to work again
On 18 Dec 2001, Jeff Trawick wrote: or just an entropy function? why should any module care that it is from the scoreboard? +1 on that or anything to get mod_ssl working again.
Re: cvs commit: apache-1.3/src CHANGES
With this, I'm thinking it's time for 1.3.23, which should hold us for 1.3 releases for the better part of 2002. Yeas or Neas?

Bill

----- Original Message -----
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, December 18, 2001 10:34 AM
Subject: cvs commit: apache-1.3/src CHANGES

wrowe 01/12/18 07:34:02

  Modified: src CHANGES
  Log:
  Notes are good here too

  Revision  Changes  Path
  1.1746    +4 -0    apache-1.3/src/CHANGES

  Index: CHANGES
  ===================================================================
  RCS file: /home/cvs/apache-1.3/src/CHANGES,v
  retrieving revision 1.1745
  retrieving revision 1.1746
  diff -u -r1.1745 -r1.1746
  --- CHANGES	2001/11/30 17:10:07	1.1745
  +++ CHANGES	2001/12/18 15:34:01	1.1746
  @@ -1,4 +1,8 @@
   Changes with Apache 1.3.23
  +
  +  *) Revert mod_negotation's handling of path_info and query_args
  +     to the 1.3.20 behavior. PR: 8628, 8582, 8538 [William Rowe]
  +
     *) Modify buff.h and buff.c to enable modules to intercept the
        output byte stream for dynamic page caching. A pointer to a
        'filter callback' function is added to the end of buff.h.
Re: cvs commit: apache-1.3/src CHANGES
From: Bill Stoddard [EMAIL PROTECTED] Sent: Thursday, December 20, 2001 11:24 AM With this, I'm thinking it's time for 1.3.23, which should hold us for 1.3 releases for the better part of 2002. Yeas or Neas? Yea [but] but we are seeing a lot of problems with parsed (SHTML, PHP etc) content being corrupted on WinXP machines. It would be good to fix that once and not need 'yet another ugly hack' two months from now. Not every XP user sees this. Both loopback and external IP stacks are affected. Some report this with a clean install. Continues even after disabling all the eXPtra cruft MS has introduced. So if any win32 hackers have some time to dig into this, it would be a good thing to fix, and then call 1.3 'done' for a while :) Bill
Re: [PATCH] get mod_ssl to work again
From: Doug MacEachern [EMAIL PROTECTED] Sent: Thursday, December 20, 2001 11:07 AM On 18 Dec 2001, Jeff Trawick wrote: or just an entropy function? why should any module care that it is from the scoreboard? +1 on that or anything to get mod_ssl working again. I'll see your +1 and double :) Yes - perhaps the MPM itself should generate ap_server_entropy.
Re: cvs commit: apache-1.3/src CHANGES
On Thu, Dec 20, 2001 at 11:28:22AM -0600, William A. Rowe, Jr. wrote: From: Bill Stoddard [EMAIL PROTECTED] Sent: Thursday, December 20, 2001 11:24 AM With this, I'm thinking it's time for 1.3.23, which should hold us for 1.3 releases for the better part of 2002. Yeas or Neas? Yea [but] but we are seeing a lot of problems with parsed (SHTML, PHP etc) content being corrupted on WinXP machines. It would be good to fix that once and not need 'yet another ugly hack' two months from now. Not every XP user sees this. Both loopback and external IP stacks are affected. Some report this with a clean install. Continues even after disabling all the eXPtra cruft MS has introduced. So if any win32 hackers have some time to dig into this, it would be a good thing to fix, and then call 1.3 'done' for a while :) 

Would it be possible to integrate the changes needed to export all of the API functions consistently to all platforms? I've had a patch in progress for a few months now, but I might schedule time to get it done over the weekend.
-- 
Thomas Eibner http://thomas.eibner.dk/ DnsZone http://dnszone.org/ mod_pointer http://stderr.net/mod_pointer
Re: cvs commit: apache-1.3/src CHANGES
From: Bill Stoddard [EMAIL PROTECTED]
Sent: Thursday, December 20, 2001 11:24 AM

With this, I'm thinking it's time for 1.3.23, which should hold us for 1.3 releases for the better part of 2002. Yeas or Neas?

Yea [but] but we are seeing a lot of problems with parsed (SHTML, PHP etc) content being corrupted on WinXP machines. It would be good to fix that once and not need 'yet another ugly hack' two months from now. Not every XP user sees this. Both loopback and external IP stacks are affected. Some report this with a clean install. Continues even after disabling all the eXPtra cruft MS has introduced. So if any win32 hackers have some time to dig into this, it would be a good thing to fix, and then call 1.3 'done' for a while :)

Bill

Bleh, you're right. I spent a bit of time looking at the XP reports over the weekend; really strange stuff. Unfortunately, I don't have a copy of XP to play with...

Bill
Re: [PATCH] get mod_ssl to work again
On Thu, Dec 20, 2001 at 11:29:43AM -0600, William A. Rowe, Jr. wrote: I'll see your +1 and double :) Yes - perhaps the MPM itself should generate ap_server_entropy.

FWIW, DougM submitted this function to flood to generate OpenSSL entropy. I'd almost suggest somehow factoring this into apr-util since flood needs this too (and doesn't have a scoreboard). However, that'd require linking against OpenSSL in apr-util, which may be a no-no. I wonder what ways we could do this though (for some reason I'm thinking of a nasty #define)? But there is definite value in merging the implementations. -- justin

    static void load_rand(void)
    {
        unsigned char stackdata[256];
        time_t tt;
        pid_t pid;
        int l, n;

        tt = time(NULL);
        l = sizeof(time_t);
        RAND_seed((unsigned char *)&tt, l);

        pid = (pid_t)getpid();
        l = sizeof(pid_t);
        RAND_seed((unsigned char *)&pid, l);

        n = ssl_rand_choosenum(0, sizeof(stackdata)-128-1);
        RAND_seed(stackdata+n, 128);
    }
AP_CHILD_THREAD_FROM_ID
Hi - I noticed that AP_CHILD_THREAD_FROM_ID was removed from the prefork mpm_default.h a couple of days ago - was that a mistake? Or is this macro no longer available to module writers? It is defined for beos and in perchild.c, but nowhere else (and not used either). Anyone have some pointers? Is there a replacement?

thanks a bunch
sterling

p.s. the only other reference i found to this macro was in protocol.c (ifdefed out) and has been there since at least last february when revision history begins:

    #if 0
        /* XXX If we want to keep track of the Method, the protocol module should do
         * it. That support isn't in the scoreboard yet. Hopefully next week
         * sometime. rbb */
        ap_update_connection_status(AP_CHILD_THREAD_FROM_ID(conn->id), "Method", r->method);
    #endif
Re: AP_CHILD_THREAD_FROM_ID
[EMAIL PROTECTED] writes: Anyone have some pointers? Is there a replacement?

no replacement... what do you need it for? if you need to pass it to ap_update_child_status() or ap_increment_counts(), pass conn->sbh instead of AP_CHILD_THREAD_FROM_ID()...
-- 
Jeff Trawick | [EMAIL PROTECTED] | PGP public key at web site: http://www.geocities.com/SiliconValley/Park/9289/ Born in Roswell... married an alien...
Re: [PATCH] get mod_ssl to work again
On Thu, 20 Dec 2001, Justin Erenkrantz wrote: FWIW, DougM submitted this function to flood to generate OpenSSL entropy. I'd almost suggest somehow factoring this into apr-util since flood needs this too (and doesn't have a scoreboard).

that function was derived from mod_ssl-1.xx and I have learned some things since. i recently noticed OpenSSL internally calls RAND_seed(time()) during negotiation. so i was planning to remove that same call from modssl or at least change it to use r->request_time. (main goal: getting rid of time() and getpid() syscalls on every connect) since flood only seeds at startup time, might be better for you just to use apr_generate_random_bytes(). don't want to use that in modssl for 'SSLRandomSeed builtin connect', since /dev/random blocking will be too slow for every connect. but will probably change it to use that for 'SSLRandomSeed builtin startup'.
Re: AP_CHILD_THREAD_FROM_ID
On 20 Dec 2001, Jeff Trawick wrote: [EMAIL PROTECTED] writes: Any one have some pointers? Is there a replacement? no replacement... what do you need it for? if you need to pass it to ap_update_child_status() or ap_increment_counts(), pass conn-sbh instead of AP_CHILD_THREAD_FROM_ID()... Ah, sorry, i missed that change - i understand now. thanks for the help sterling
Re: core dump in ap_send_fd
[moving this to dev@httpd since it's an httpd issue]

On Fri, 21 Dec 2001, Stas Bekman wrote: ap_send_fd expects the length of the input to send. First of all, is there a way not to specify the length? I've a fd (can be a pipe to a process), and I've no way to figure out the length of the output. How can APR handle this? This breaks the compatibility with send_fd from httpd-1.3.x.

You don't want to send a pipe with ap_send_fd() in Apache 2.0, as it would put the pipe in a file bucket, which is wrong. Maybe we should add an ap_send_pipe(), or maybe we should change the ap_send_fd() API to accept -1 as a magic length and to recognize it as meaning the thing is a pipe, not a file. In the meanwhile, you need to use the buckets directly. It's only four lines of code as opposed to one:

    apr_bucket_brigade *bb = apr_brigade_create(r->pool);
    apr_bucket *b = apr_bucket_pipe_create(thepipe);
    APR_BRIGADE_INSERT_TAIL(bb, b);
    ap_pass_brigade(bb);

Second, I've found something that appears to be a bug. If I tell ap_send_fd to send one char more than the size of the file that I try to send, it asserts and dumps core; the latter probably is bad.

Um, why would you ever do that and expect it to work? I mean, I guess we could theoretically check the length you pass in against the length of the file, but that goes against the we-expect-the-caller-to-pass-us-sane-arguments mentality that we typically maintain around here. --Cliff
-- 
Cliff Woolley [EMAIL PROTECTED] Charlottesville, VA
Re: [PATCH] get mod_ssl to work again
On Thu, 20 Dec 2001, Aaron Bannert wrote: /dev/urandom won't block, so maybe we could live with that once per request and use the /dev/random for startup. right, only problem is apr doesn't support /dev/urandom. maybe we need an apr_generate_urandom_bytes() function or a non-blocking flag to apr_generate_random_bytes()? modssl already has: SSLRandomSeed connect file:/dev/urandom 512 but something more portable would be nice.
mod_deflate
Sorry, I had overlooked the discussion about renaming mod_gz to mod_deflate, but a mod_deflate module already exists: ftp://ftp.lexa.ru/pub/apache-rus/contrib/ It has been publicly available since April 2001 and is already installed on many Russian sites and several non-Russian ones. Documentation is in Russian only. Sorry. Some features:

- It patches Apache 1.3.x so it can compress content without the temporary files that mod_gzip uses.
- It supports two encodings: gzip and deflate.
- It has some workarounds for buggy browsers.
- On FreeBSD it can check CPU idle time to disable compression.

Igor Sysoev
Re: [PATCH] get mod_ssl to work again
On Thu, Dec 20, 2001 at 10:17:13AM -0800, Doug MacEachern wrote: since flood only seeds at startup time, might be better for you just to use apr_generate_random_bytes(). don't want to use that in modssl for 'SSLRandomSeed builtin connect', since /dev/random blocking will be too slow for every connect. but will probably change it to use that for 'SSLRandomSeed builtin startup'.

As Daniel pointed out, /dev/{u}random isn't available on certain platforms (Solaris). And, in flood, this seeding is only used when /dev/{u}random is not available. APR does not support an internal PRNG. I've suggested it before, and perhaps it is time that we integrate truerand.c (anyone have a better version than what is in mod_ssl?) so that we can always call apr_generate_random_bytes()? I think that truerand isn't installed in enough places, so it merits our redistribution in APR. -- justin
Re: [PATCH] get mod_ssl to work again
On Thu, Dec 20, 2001 at 10:55:02AM -0800, Justin Erenkrantz wrote: As Daniel pointed out, /dev/{u}random isn't available on certain platforms (Solaris). And, in flood, this seeding is only used when /dev/{u}random are not available. APR does not support an internal PRNG. I've suggested it before and perhaps it is time that we integrate truerand.c (anyone have a better version than what is in mod_ssl?) so that we can always call apr_generate_random_bytes()? I think that truerand isn't installed in enough places that it merits our redistribution in APR. -- justin What is truerand.c? Can you provide a URL or perhaps a Message-ID in case it came up before and I missed it? -aaron
Re: [PATCH] get mod_ssl to work again
On Thu, 20 Dec 2001, Daniel Lopez wrote: /dev/urandom is not available in all platforms right, which is why it is not portable to use directly. /dev/random is also not available on all platforms, so apr uses whats available to provide the same functionality for the given platform in apr_generate_random_bytes().
Re: [PATCH] get mod_ssl to work again
On Thu, 20 Dec 2001, Justin Erenkrantz wrote: so that we can always call apr_generate_random_bytes()? oh, i assumed we already could. +1 on whatever it takes to make that function usable on all platforms.
Re: [PATCH] get mod_ssl to work again
On Thu, Dec 20, 2001 at 11:07:13AM -0800, Doug MacEachern wrote: On Thu, 20 Dec 2001, Daniel Lopez wrote: /dev/urandom is not available in all platforms right, which is why it is not portable to use directly.

I was not arguing, I was just restating your point :) On NT openssl uses data from the screen. I like the idea of using truerand.c or whatever the genrandom program bundled with openssl uses.

/dev/random is also not available on all platforms, so apr uses what's available to provide the same functionality for the given platform in apr_generate_random_bytes().
Re: related config directives
On Thu, Dec 20, 2001 at 03:22:33PM -0500, Greg Ames wrote: ...are more painful to deal with than you might think, if the user is allowed to code them in any order. I'd like the comparisons between MaxClients and ServerLimit and the equivalent thread directives to be coding order insensitive. So I created a post-config hook to do the comparing and adjusting. The code was very simple, but the log messages went to the error log file twice, rather than to the console once, which is what people are accustomed to for config errors. worker already has the same situation with MaxClients and ThreadsPerChild. It has a clever but kind of scary solution: a pre-config hook which swaps nodes in the config tree if it doesn't like the order. Then the comparisons, adjusting, and logging are done when the second directive is parsed. While I think it's a very cool hack, mucking with the innards of the config tree bothers me some. config.c could provide an API to do order checking and swapping, but I can't see this as a general solution. Think about cases where one directive is in container X and the other isn't - yuck! I agree (and I'm the one who wrote it). What if the very first ap_run_post_config hook was moved to before the log files were opened? Then we could have simple logic to compare or otherwise process multiple directives. Any error messages would go directly to the console. If you wanted them to also go to the error log file, that would just happen during the second post-config call; if you don't want that, it's easy to avoid. I was thinking the same thing. My only concern was that something has already been written that assumes post_config already has the log opened and stderr redirected. Unless that is a problem or someone else can come up with another reason not to do this, I'm +1. Another possibility that would avoid the whole issue altogether would be to simply add another hook before open_logs.
I started doing this yesterday on the plane:

Index: server/main.c
===================================================================
RCS file: /home/cvs/httpd-2.0/server/main.c,v
retrieving revision 1.111
diff -u -u -r1.111 main.c
--- server/main.c	2001/12/18 20:26:15	1.111
+++ server/main.c	2001/12/20 20:31:52
@@ -412,6 +412,10 @@
         destroy_and_exit_process(process, 0);
     }
     apr_pool_clear(plog);
+    if ( ap_run_validate_config(pconf, plog, ptemp, server_conf) != OK) {
+        ap_log_error(APLOG_MARK, APLOG_STARTUP |APLOG_ERR| APLOG_NOERRNO, 0, NULL,
+                     "Unable to validate the config\n");
+        destroy_and_exit_process(process, 1);
+    }
     if ( ap_run_open_logs(pconf, plog, ptemp, server_conf) != OK) {
         ap_log_error(APLOG_MARK, APLOG_STARTUP |APLOG_ERR| APLOG_NOERRNO, 0, NULL,
                      "Unable to open logs\n");
         destroy_and_exit_process(process, 1);

Index: include/http_config.h
===================================================================
RCS file: /home/cvs/httpd-2.0/include/http_config.h,v
retrieving revision 1.92
diff -u -u -r1.92 http_config.h
--- include/http_config.h	2001/11/24 00:08:29	1.92
+++ include/http_config.h	2001/12/20 20:31:55
@@ -974,6 +974,18 @@
 AP_DECLARE_HOOK(void,pre_config,(apr_pool_t *pconf,apr_pool_t *plog,apr_pool_t *ptemp))
 
 /**
+ * Run the validate_config function for each module. This function should
+ * validate the config parameters accepted by the module, and return
+ * OK or non-OK to signify success.
+ * @param pconf The config pool
+ * @param plog The logging streams pool
+ * @param ptemp The temporary pool
+ * @param s The list of server_recs
+ * @return OK or DECLINED on success, anything else is an error
+ */
+AP_DECLARE_HOOK(int,validate_config,(apr_pool_t *pconf,apr_pool_t *plog,apr_pool_t *ptemp,server_rec *s))
+
+/**
  * Run the post_config function for each module
  * @param pconf The config pool
  * @param plog The logging streams pool

-aaron
Re: related config directives
On Thursday 20 December 2001 12:22 pm, Greg Ames wrote: ...are more painful to deal with than you might think, if the user is allowed to code them in any order. I'd like the comparisons between MaxClients and ServerLimit and the equivalent thread directives to be coding order insensitive. So I created a post-config hook to do the comparing and adjusting. The code was very simple, but the log messages went to the error log file twice, rather than to the console once, which is what people are accustomed to for config errors. worker already has the same situation with MaxClients and ThreadsPerChild. It has a clever but kind of scary solution: a pre-config hook which swaps nodes in the config tree if it doesn't like the order. Then the comparisons, adjusting, and logging are done when the second directive is parsed. While I think it's a very cool hack, mucking with the innards of the config tree bothers me some. config.c could provide an API to do order checking and swapping, but I can't see this as a general solution. Think about cases where one directive is in container X and the other isn't - yuck!

This is the point of the config tree: it allows you to modify the config before we actually run the configuration, so that you can find these dependencies and fix them quickly. Putting the logic in the directive handlers doesn't really work, because you still have an order dependency. Think of it this way, these are good configs:

    MaxClients 500
    ServerLimit 525

    ServerLimit 20
    MaxClients 5

This is a bad config:

    MaxClients 525
    ServerLimit 500

Where do you catch it, though? In ServerLimit? Then you will also catch the second good config above, which is a good config. If you try to catch it in MaxClients, you will catch the first config above, which is a good config. The only solution is to choose default values that are invalid, so that you can make sure that you always check in the last directive handler, but that is incredibly ugly.
We can easily create a function to do the swapping, which is more generic, but the swapping is the only 100% correct way to do this.

Ryan

I'm inclined to agree with Ryan. Let's just make a function to do the swapping.

Bill
Re: [PATCH] get mod_ssl to work again
From: Daniel Lopez [EMAIL PROTECTED] Sent: Thursday, December 20, 2001 1:26 PM On Thu, Dec 20, 2001 at 11:07:13AM -0800, Doug MacEachern wrote: On Thu, 20 Dec 2001, Daniel Lopez wrote: /dev/urandom is not available in all platforms right, which is why it is not portable to use directly. I was not arguing, I was just reinstating your point :) On NT openssl uses data from the screen. I like the idea of using truerand.c or whatever the genrandom program bundled with openssl uses. Huh? No ... maybe a long time ago, but now it is digging up performance metrics and so forth from the cpu (I'm certain - had to fix some older SSL code that assumed we had the opcodes - they are so recent that older MSVCs asm{} blocks couldn't parse them.)
Re: related config directives
On Thursday 20 December 2001 01:01 pm, Bill Stoddard wrote: On Thursday 20 December 2001 12:22 pm, Greg Ames wrote: ...are more painful to deal with than you might think, if the user is allowed to code them in any order. I'd like the comparisons between MaxClients and ServerLimit and the equivalent thread directives to be coding order insensitive. So I created a post-config hook to do the comparing and adjusting. The code was very simple, but the log messages went to the error log file twice, rather than to the console once, which is what people are accustomed to for config errors. worker already has the same situation with MaxClients and ThreadsPerChild. It has a clever but kind of scary solution: a pre-config hook which swaps nodes in the config tree if it doesn't like the order. Then the comparisons, adjusting, and logging are done when the second directive is parsed. While I think it's a very cool hack, mucking with the innards of the config tree bothers me some. config.c could provide an API to do order checking and swapping, but I can't see this as a general solution. Think about cases where one directive is in container X and the other isn't - yuck! This is the point of the config tree: it allows you to modify the config before we actually run the configuration, so that you can find these dependencies and fix them quickly. Putting the logic in the directive handlers doesn't really work, because you still have an order dependency. Think of it this way, these are good configs:

    MaxClients 500
    ServerLimit 525

    ServerLimit 20
    MaxClients 5

This is a bad config:

    MaxClients 525
    ServerLimit 500

Where do you catch it, though? In ServerLimit? Then you will also catch the second good config above, which is a good config. If you try to catch it in MaxClients, you will catch the first config above, which is a good config.
The only solution is to choose default values that are invalid, so that you can make sure that you always check in the last directive handler, but that is incredibly ugly. We can easily create a function to do the swapping, which is more generic, but the swapping is the only 100% correct way to do this. Ryan I'm inclined to agree with Ryan. Let's just make a function to do the swapping. I meant to mention that if you really want to fix this problem, we should move to an XML based config language. Ryan __ Ryan Bloom [EMAIL PROTECTED] Covalent Technologies [EMAIL PROTECTED] --
Re: related config directives
Ryan Bloom [EMAIL PROTECTED] writes: I'm inclined to agree with Ryan. Let's just make a function to do the swapping. I had the same opinion... a swap function owned by the config tree code. I meant to mention that if you really want to fix this problem, we should move to an XML based config language. First awk, now XML. !*^%$%$## (just kidding; XML and related tools are worth being comfortable with) -- Jeff Trawick | [EMAIL PROTECTED] | PGP public key at web site: http://www.geocities.com/SiliconValley/Park/9289/ Born in Roswell... married an alien...
[PATCH] Apache 1.3 - mod_auth_dbm.c
Patch to get mod_auth_dbm.c compiled on NetWare:

--- mod_auth_dbm.c.orig	Wed Mar 21 04:09:46 2001
+++ mod_auth_dbm.c	Wed Oct 10 17:32:14 2001
@@ -75,7 +75,7 @@
 #include "http_core.h"
 #include "http_log.h"
 #include "http_protocol.h"
-#if defined(WIN32)
+#if (defined(WIN32) || defined(NETWARE))
 #include "sdbm.h"
 #define dbm_open sdbm_open
 #define dbm_fetch sdbm_fetch

mod_auth_dbm.c.patch
Description: Binary data
Re: New LICENSE file...
On Thu, Dec 20, 2001 at 02:38:19PM -0500, Bill Stoddard wrote: I'd like to propose that we extend our LICENSE file to include references to/licenses of all the other components we include in the server. This gets all the license information in one place. +1 Roy
read errors in 2.0.30
httpd 2.0.30-dev was in production for about 11 minutes on daedalus. Then I noticed a bunch of unusual errors in the log, so I moved us back to 2_0_28. They all seem to deal with reading lines from the input filters. There were 85 "request failed: error reading the headers" logs in 11 minutes, compared to just 8 on 2_0_28 so far today. There were a lot of complaints about methods and URIs, with some funky stuff printed out as the request:

[Thu Dec 20 12:40:45 2001] [error] [client 212.171.216.10] Invalid method in request 1.1
[Thu Dec 20 12:42:54 2001] [error] [client 212.171.216.10] Invalid URI in request j/graphics/ext-6j/graphics/ext-6j/graphics/ex
[Thu Dec 20 12:42:54 2001] [error] [client 212.171.216.10] Invalid URI in request ces2-j/graphics/ces2-j/graphics/ces2-j/graphics/c
[Thu Dec 20 12:44:16 2001] [error] [client 128.226.140.172] Invalid URI in request Connection: Keep-Alive, TE
[Thu Dec 20 12:46:30 2001] [error] [client 207.159.169.134] Invalid URI in request graphicsgraphicsgraphicsgraphicsgra
[Thu Dec 20 12:47:14 2001] [error] [client 211.252.182.129] Invalid URI in request Host: www

Notice the repetition of pieces of valid URIs and fragments of headers. We had some changes to ap_getline between the two builds; it might be worth looking at them again. Any other ideas? I don't see these failures with log replay.

Greg
Re: read errors in 2.0.30
Greg Ames wrote: httpd 2.0.30-dev was in production for about 11 minutes on daedalus. Then I noticed a bunch of unusual errors in the log, so I moved us back to 2_0_28. They all seem to deal with reading lines from the input filters. There were 85 "request failed: error reading the headers" logs in 11 minutes, compared to just 8 on 2_0_28 so far today. There were a lot of complaints about methods and URIs, with some funky stuff printed out as the request: [Thu Dec 20 12:40:45 2001] [error] [client 212.171.216.10] Invalid method in request 1.1 [Thu Dec 20 12:42:54 2001] [error] [client 212.171.216.10] Invalid URI in request j/graphics/ext-6j/graphics/ext-6j/graphics/ex [Thu Dec 20 12:42:54 2001] [error] [client 212.171.216.10] Invalid URI in request ces2-j/graphics/ces2-j/graphics/ces2-j/graphics/c [Thu Dec 20 12:44:16 2001] [error] [client 128.226.140.172] Invalid URI in request Connection: Keep-Alive, TE [Thu Dec 20 12:46:30 2001] [error] [client 207.159.169.134] Invalid URI in request graphicsgraphicsgraphicsgraphicsgra [Thu Dec 20 12:47:14 2001] [error] [client 211.252.182.129] Invalid URI in request Host: www Notice the repetition of pieces of valid URIs and fragments of headers. We had some changes to ap_getline between the two builds; it might be worth looking at them again. Any other ideas? I don't see these failures with log replay. Greg

I've been hammering 29 VERY hard and have seen these very infrequently. When I got a network trace I noticed that it was an invalid packet. I'm just about to build 30-dev and will see if I can reproduce it over here.
[PATCH] Apache 1.3 - ab.c standalone compile
Here's a patch which makes it possible to compile ab.c without the Apache headers for NetWare; it only adds some includes and changes a bare #if to #if defined() so that our compiler understands it. #include <sys/ioctl.h> appears twice -- is this really needed a second time?

Guenter.

ab.c.patch Description: Binary data
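The #if-to-#if-defined() change mentioned in the patch can be sketched as below. This is only an illustration of the pattern, not the actual patch contents; the macro name is invented for the example. Strictly, ISO C treats an undefined identifier in an #if expression as 0, but some compilers (like the NetWare one cited here) reject or warn on it, so guarding with defined() is the portable spelling.

```c
#include <assert.h>

/* Hypothetical illustration of the preprocessor change: instead of
 * "#if HAVE_SYS_IOCTL_H" (which some compilers reject when the macro
 * is entirely undefined), the guard is written with defined().
 * HAVE_SYS_IOCTL_H is a made-up name for this example. */

#if defined(HAVE_SYS_IOCTL_H)
#define GUARD_TAKEN 1
#else
#define GUARD_TAKEN 0
#endif

/* Expose which branch the preprocessor took. */
int guard_value(void) { return GUARD_TAKEN; }
```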
Re: read errors in 2.0.30
Greg Ames wrote: httpd 2.0.30-dev was in production for about 11 minutes on daedalus. Then I noticed a bunch of unusual errors in the log, so I moved us back to 2_0_28. [...] There were a lot of complaints about methods and URIs, with some funky stuff printed out as the request.

I think I found the problem: if a request line arrives in two packets (or in any other way requires two read calls), apr_rgetline writes the second block of data on top of the first. I'm working on a fix right now.

--Brian
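The failure mode Brian describes can be modelled in a few lines of plain C, without APR: a line-reading routine that needs two reads must append the second chunk at the current write offset rather than at the start of the buffer. The function below is an invented miniature, not the real apr_rgetline; the keep_offset flag lets it behave either correctly or like the bug.

```c
#include <string.h>

/* Minimal model of the reported bug: when a request line arrives in
 * two reads, the second chunk must be appended at *off, the current
 * end of the buffer.  With keep_offset == 0 this routine "forgets"
 * the offset and clobbers the first chunk, as the broken code did. */
static void copy_chunk(char *buf, size_t *off, const char *chunk, int keep_offset)
{
    size_t n = strlen(chunk);

    if (!keep_offset)
        *off = 0;               /* the bug: restart at buf[0] */
    memcpy(buf + *off, chunk, n);
    *off += n;
    buf[*off] = '\0';
}
```

With the offset kept, two chunks "GET /index" and ".html HTTP/1.1" reassemble into the full request line; with the buggy reset, only the second chunk survives, which is consistent with the truncated, fragmentary "requests" seen in the error log.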
Re: core dump in ap_send_fd
Cliff Woolley wrote: [moving this to dev@httpd since it's an httpd issue] On Fri, 21 Dec 2001, Stas Bekman wrote:

ap_send_fd expects the length of the input to send. First of all, is there a way not to specify the length? I have an fd (it can be a pipe to a process), and I have no way to figure out the length of the output. How can APR handle this? This breaks compatibility with send_fd from httpd-1.3.x.

You don't want to send a pipe with ap_send_fd() in Apache 2.0, as it would put the pipe in a file bucket, which is wrong.

Agreed. But it breaks backwards compatibility with ap_send_fd from 1.3.x. I am porting ap_send_fd for mod_perl 2.0 and it has to stay the same.

Maybe we should add an ap_send_pipe(), or maybe we should change the ap_send_fd() API to accept -1 as a magic length and to recognize it as meaning the thing is a pipe, not a file.

Yup, that's how it was in 1.3.x's version. Which will work for files as well. If you can do that, the problem is solved.

In the meanwhile, you need to use the buckets directly. It's only four lines of code as opposed to one:

    apr_bucket_brigade *bb = apr_brigade_create(r->pool);
    apr_bucket *b = apr_bucket_pipe_create(thepipe);
    APR_BRIGADE_INSERT_TAIL(bb, b);
    ap_pass_brigade(bb);

Can I use this for sending an opened file as well, given that I have an fd already opened from Perl? I use ap_os_file_put to convert it into an apr_file_t. Does using an ap_send_pipe (from above) instead of an ap_send_fd have any performance implications, other than the fact that the fast native sendfile won't be used for sending real files?

Second, I've found something that appears to be a bug. If I tell ap_send_fd to send one char more than the size of the file that I try to send, it asserts and dumps core; the latter is probably bad.

Um, why would you ever do that and expect it to work?
I mean, I guess we could theoretically check the length you pass in against the length of the file, but that goes against the we-expect-the-caller-to-pass-us-sane-arguments mentality that we typically maintain around here.

Well, I don't think it is normal to dump core if one of the arguments is not proper, especially in this case where it's just the count that can be wrong. But that bothers me less, if you think it's fine to core dump rather than exit cleanly with an error message :)

Thanks, Cliff.

Stas Bekman JAm_pH -- Just Another mod_perl Hacker http://stason.org/ mod_perl Guide http://perl.apache.org/guide mailto:[EMAIL PROTECTED] http://ticketmaster.com http://apacheweek.com http://singlesheaven.com http://perl.apache.org http://perlmonth.com/
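The "-1 as a magic length" convention floated in this thread (and attributed to 1.3.x) can be sketched in plain C: copy exactly len bytes when len is non-negative, and drain to EOF when len is -1, which is the only sane behaviour for a pipe whose total size is unknowable up front. This is an invented stand-in using raw POSIX read/write, not the real ap_send_fd() or the bucket API; the name and signature are made up for illustration.

```c
#include <unistd.h>
#include <sys/types.h>

/* Hypothetical send_fd_sketch(): copies 'len' bytes from 'in' to 'out',
 * or copies until EOF when len == -1 (the pipe case discussed above).
 * Returns bytes copied, or -1 on a write error. */
static ssize_t send_fd_sketch(int in, int out, ssize_t len)
{
    char buf[4096];
    ssize_t total = 0;

    while (len < 0 || total < len) {
        size_t want = sizeof(buf);
        ssize_t n;

        if (len >= 0 && (size_t)(len - total) < want)
            want = (size_t)(len - total);   /* don't overshoot a known length */
        n = read(in, buf, want);
        if (n <= 0)
            break;                          /* EOF (or error) ends a len == -1 copy */
        if (write(out, buf, (size_t)n) != n)
            return -1;
        total += n;
    }
    return total;
}
```

Note how the unknown-length case falls out naturally: the caller never has to fstat() or guess, which is exactly why passing a too-large explicit length (the core dump Stas hit) is an avoidable failure mode under this convention.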
Re: core dump in ap_send_fd
From: Stas Bekman [EMAIL PROTECTED] Sent: Friday, December 21, 2001 12:14 AM

In the meanwhile, you need to use the buckets directly. It's only four lines of code as opposed to one. [...] Can I use this for sending an opened file as well, given that I have an fd already opened from Perl? I use ap_os_file_put to convert it into an apr_file_t.

That is most certainly not portable. Win32, particularly, has alternate semantics for opening files as 'sendfile ready'. Passing in an fd [which would be a HANDLE on Win32] isn't sendfile-able.

Um, why would you ever do that and expect it to work? I mean, I guess we could theoretically check the length you pass in against the length of the file, but that goes against the we-expect-the-caller-to-pass-us-sane-arguments mentality that we typically maintain around here.

Well, I don't think it is normal to dump core if one of the arguments is not proper, especially in this case where it's just the count that can be wrong. But that bothers me less, if you think it's fine to core dump rather than exit cleanly with an error message :)

Where are we going with this, to the win32-friendly and safe API? *eyeballs rolling* Seriously, we've made a very conscientious decision in APR not to check safety on user args. If mod_perl needs to be the 'safe and simple' solution, it will need to wrap up perl with arg checking itself. This is simply not compatible with anyone's vision of APR performance. We have enough arguments over debug-asserts that I doubt we will take the hit on an fstat() just to see if the coder had a clue about what they were doing.
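The debug-asserts Bill mentions are the usual compromise between "never check user args" and "never dump core": argument checks that exist in a debug build and compile to nothing in release, so the production fast path pays zero cost. The macro below is an invented illustration of that pattern (httpd 2.0 carries a similar AP_DEBUG_ASSERT, toggled by its debug build flag); it is not APR's actual code.

```c
#include <assert.h>

/* Debug-only assertion: active when MY_DEBUG is defined at compile
 * time, a no-op otherwise.  Macro name is made up for this sketch. */
#ifdef MY_DEBUG
#define MY_DEBUG_ASSERT(exp) assert(exp)
#else
#define MY_DEBUG_ASSERT(exp) ((void)0)
#endif

/* Example caller-must-be-sane API: 'requested' should never exceed
 * 'actual'.  A debug build traps the bad call; a release build just
 * clamps and moves on, with no fstat()-style verification cost. */
static long clamp_len(long actual, long requested)
{
    MY_DEBUG_ASSERT(requested <= actual);
    return requested < actual ? requested : actual;
}
```

Compiled without MY_DEBUG, the check vanishes entirely, which is the point of the "we have enough arguments over debug-asserts" remark: even this zero-cost-in-release form was contentious, let alone an always-on runtime check.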