Re: [patch] more robust startup + counting
this broke something. i keep getting:

  % t/TEST
  ...
  waiting for server to start: ok (waited 0 secs)
  ...
  still waiting for server to warm up: ok (waited 1 secs)
  failed to start server! (please examine t/logs/error_log)

and yet the server is running.
Re: [patch] more robust startup + counting
That would be my patch to detect an 'extra unused arg' to httpd. As it is, there was no quick fix I could see, so I've reverted. Update your httpd-2.0 cvs.

----- Original Message -----
From: Doug MacEachern [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, November 29, 2001 1:34 AM
Subject: Re: [patch] more robust startup + counting

> this broke something. i keep getting:
>
>   % t/TEST
>   ...
>   waiting for server to start: ok (waited 0 secs)
>   ...
>   still waiting for server to warm up: ok (waited 1 secs)
>   failed to start server! (please examine t/logs/error_log)
>
> and yet the server is running.
Re: [patch] more robust startup + counting
William A. Rowe, Jr. wrote:
> That would be my patch to detect an 'extra unused arg' to httpd. As it
> is, there was no quick fix I could see, so I've reverted. Update your
> httpd-2.0 cvs.

In this particular case it was a bug in my latest patch. It's fixed now.

> ----- Original Message -----
> From: Doug MacEachern [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Sent: Thursday, November 29, 2001 1:34 AM
> Subject: Re: [patch] more robust startup + counting
>
> > this broke something. i keep getting:
> >
> >   % t/TEST
> >   ...
> >   waiting for server to start: ok (waited 0 secs)
> >   ...
> >   still waiting for server to warm up: ok (waited 1 secs)
> >   failed to start server! (please examine t/logs/error_log)
> >
> > and yet the server is running.

--
Stas Bekman    JAm_pH -- Just Another mod_perl Hacker
http://stason.org/    mod_perl Guide http://perl.apache.org/guide
mailto:[EMAIL PROTECTED] http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/
Re: example of a t/SKIP file?
Doing this on Win32 (NT4), I have a t\SKIP file containing:

  modules/dav
  ssl/all

And yet I get these when running t\TEST. And yes, I've tried it with sloshes rather than slashes. Is it checking for requirements *before* checking t\SKIP?

  modules\dav.skipped: cannot find module 'dav', cannot find module 'HTTP::DAV'
  ssl\all.skipped: cannot find module 'mod_ssl', cannot find module 'LWP::Protocol::https'

--
#ken P-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist      http://Apache-Server.Com/

"All right everyone! Step away from the glowing hamburger!"
Re: example of a t/SKIP file?
Rodent of Unusual Size wrote:
> Doing this on Win32 (NT4), I have a t\SKIP file containing:
>
>   modules/dav
>   ssl/all

Stone me! OtherBill was right; these need to be specified as

  modules\\dav
  ssl\\all

on Win32. Bleargh.. Thanks, Bill!

--
#ken P-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist      http://Apache-Server.Com/

"All right everyone! Step away from the glowing hamburger!"
Re: example of a t/SKIP file?
Rodent of Unusual Size wrote:
> Rodent of Unusual Size wrote:
> > Doing this on Win32 (NT4), I have a t\SKIP file containing:
> >
> >   modules/dav
> >   ssl/all
>
> Stone me! OtherBill was right; these need to be specified as
>
>   modules\\dav
>   ssl\\all
>
> on Win32. Bleargh..

Maybe the SKIP file's parser should complain when it cannot find the specified files?

--
Stas Bekman    JAm_pH -- Just Another mod_perl Hacker
http://stason.org/    mod_perl Guide http://perl.apache.org/guide
mailto:[EMAIL PROTECTED] http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/
Re: example of a t/SKIP file?
William A. Rowe, Jr. wrote:
> > I'd like to see the modules::dav syntax adopted
>
> +1 This would certainly make things more consistent/simple to document.

Don't you mean 'more consistent::simple'? :-)

--
#ken P-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist      http://Apache-Server.Com/

"All right everyone! Step away from the glowing hamburger!"
Re: example of a t/SKIP file?
Stas Bekman wrote:
> >   modules\\dav
> >   ssl\\all
>
> Maybe the SKIP file's parser should complain when it cannot find the
> specified files?

No, I don't think so -- then you'd have to special-case wildcards. I'd just rather it was consistent -- and even better, platform-neutral. I like OtherBill's :: syntax. I would like to see it come up into 't/TEST modules::dav', too.

--
#ken P-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist      http://Apache-Server.Com/

"All right everyone! Step away from the glowing hamburger!"
Re: [patch] more robust startup + counting
Stas Bekman wrote:
> In this particular case it was a bug in my latest patch. It's fixed now.

Eh, I'm now getting this on Win32:

  perl t\TEST
  apache.exe -v failed: Bad file descriptor at
      Apache-Test/lib/Apache/TestConfig.pm line 687.

??

--
#ken P-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist      http://Apache-Server.Com/

"All right everyone! Step away from the glowing hamburger!"
Re: [patch] more robust startup + counting
Stas Bekman wrote:
> Rodent of Unusual Size wrote:
> > Eh, I'm now getting this on Win32:
> >
> >   perl t\TEST
> >   apache.exe -v failed: Bad file descriptor at
> >       Apache-Test/lib/Apache/TestConfig.pm line 687.
>
> I don't think this has anything to do with this. If the line counter
> wasn't shifted, you've got a broken Symbol::gensym.

I ran it this morning with a checkout from a couple of days ago. No worries. I updated this morning and ran it again, with the results above. The only file changed by the update was

  perl-framework/Apache-Test/lib/Apache/TestServer.pm

Nothing else has happened on my system; no Perl module changes or anything. Ergo, the change to the above file is what broke my harness.

--
#ken P-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist      http://Apache-Server.Com/

"All right everyone! Step away from the glowing hamburger!"
Re: [patch] more robust startup + counting
Rodent of Unusual Size wrote:
> Stas Bekman wrote:
> > I don't think this has anything to do with this. If the line counter
> > wasn't shifted, you've got a broken Symbol::gensym.

The failing line is:

  open $handle, "$cmd|" or die "$cmd failed: $!";

--
#ken P-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist      http://Apache-Server.Com/

"All right everyone! Step away from the glowing hamburger!"
Re: [STATUS] (httpd-2.0) Wed Nov 28 23:45:08 EST 2001
Hello William... This is Kevin Kiley again... See comments inline below...

In a message dated 11/28/2001 10:59:26 PM Pacific Standard Time, [EMAIL PROTECTED] writes:

> > From: [EMAIL PROTECTED]
> > Sent: Thursday, November 29, 2001 12:30 AM
> >
> > In a message dated 11/28/2001 10:21:46 PM Pacific Standard Time,
> > [EMAIL PROTECTED] writes:
>
> If you have any doubts about why sometimes submissions aren't considered
> for inclusion in any open source project ... well there you have it.

Ya know what... I am going to stand still for this little spanking session because I am willing to admit when I have made a mistake, and unlike most times before on this forum when you guys have tried to make a punching bag out of me... this time I give you permission to fire away.

Ok... now on to the next question (already asked by someone else):

> > Whatever happened to the 'other candidate for submission' as described
> > by Coar? I assume that was mod_gzip itself?
>
> As described by Ken? Once again, what would he have to do with that?

Nothing, really, other than the fact that the only reason I asked is that he is the one who updated the status file to read...

  +1 Cliff (there's now another candidate to be evaluated)

...and failed to actually mention what that 'other candidate' really was. I believe I missed any messages from a 'Cliff', and so I was never sure myself what that was all about, since it wasn't clear in the STATUS. I have never really been sure what happened to the complete working mod_gzip for Apache 2.0 that was (freely) submitted (after both public and private urgings by Apache developers). If that's what Ken's note was really referring to, then OK, but I was personally never sure, since it wasn't specific. I thought maybe this guy Cliff submitted something, too.

I remember mod_gzip for Apache 2.0 was immediately hacked upon right after submission by some people (Justin? Ian? Don't remember), and they started removing features without even fully reading the source code and/or understanding what they were for (and then put some of them back after I explained some things), but all of that work just died out into silence, and the STATUS file became the only remnant of the whole firestorm that Justin started by asking for Ian's mod_gz to be dumped into the tree ASAP.

A LOT of folks on the mod_gzip forum caught the whole discussion at Apache and started asking us 'Is that other candidate really mod_gzip for 2.0 or is it something else?' and our response was always 'We do not know for sure... ask them'. And (FYI) a few people came to Apache and DID ask 'What is the status of mod_gzip version 2.0?' and no one even ack'ed their messages, so we assumed it wasn't even being considered.

  jwoolley    01/09/15 12:18:59
    Modified:    .    STATUS
    Log:  A chilly day in Charlottesville...
    Revision  Changes  Path
    1.294     +3 -2    httpd-2.0/STATUS
    [snip]
    @@ -117,7 +117,8 @@
         and in-your-face.)  This proposed change would not depricate Alias.
       * add mod_gz to httpd-2.0 (in modules/experimental/)
    -    +1: Greg, Justin, Cliff, ben, Ken, Jeff
    +    +1: Greg, Justin, ben, Ken, Jeff
    +     0: Cliff (there's now another candidate to be evaluated)
         0: Jim (premature decision at present, IMO)
        -0: Doug, Ryan

> Kevin... I believe I've generally treated you civilly...

Read the end of my previous response above.

> A good number of non-combatants to these 'gzip wars' are really
> disgusted with the language and attitude on list. Much of that has
> turned on your comments and hostility.

If anyone really views a few 'heated exchanges' on a public forum over some specific technology issues as a 'war' then I'm sorry, but I still won't apologize for being passionate about something and willing to argue/defend it. Email is a strange medium. Some people take it way too seriously, methinks.

> In accepting a contribution, the submitter is generally expected to
> support the submission, ongoing.

Okay... mind blown... that is the exact OPPOSITE of the argument that I believe even YOU were making during the 'Please why won't you submit mod_gzip for 2.0 before we go BETA' exchanges. One of the arguments I made (Capital I for emphasis) was that if I was going to 'support' it, I wanted to see at least one good beta of Apache 2.0 before the submission was made. Strings of arguments came right back saying "That should NOT be your concern... if you submit mod_gzip for Apache 2.0 then WE will support it, not YOU." Seriously... check the threads if you have time... the fact that Apache would NOT be relying on us to support it was one of the 'arm twisting' arguments that was made to try and get us to submit the code BEFORE Beta, so that Ian's mod_gz wouldn't be the 'only choice'.

> Everyone here enjoys working on or with Apache, or they would find
> another server.

Even
Re: cvs commit: httpd-2.0/support ab.c
Thank goodness for compilers that can read xprintf syntax, and thanks for taking a few minutes on this, Jeff.

Bill

----- Original Message -----
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, November 29, 2001 5:30 AM
Subject: cvs commit: httpd-2.0/support ab.c

> trawick  01/11/29 03:30:57
>
>   Modified: support ab.c
>   Log:
>   totalcon / requests is no longer double either, so %5e doesn't fly
Re: cvs commit: httpd-2.0/support ab.c
William A. Rowe, Jr. [EMAIL PROTECTED] writes: Thank goodness for compilers who can read xprintf syntax, and thanks for taking and thank goodness for cron and unattended updates/builds that compare old make.stderr with new make.stderr and send e-mail as appropriate :) -- Jeff Trawick | [EMAIL PROTECTED] | PGP public key at web site: http://www.geocities.com/SiliconValley/Park/9289/ Born in Roswell... married an alien...
Re: [PATCH] apache-1.3/src/os/netware/os.c
Pavel,
   Your patch looks good. It looks like a much cleaner solution. What version of Winsock have you tested this patch with? Did you try it on NW6? As soon as I get some time to implement it and test it myself, I will get it checked in.

thanks,
Brad

[EMAIL PROTECTED] Wednesday, November 28, 2001 7:50:57 AM
> Hi, attached patch to fix invalid redirections from
> https://some_site/some_location to http://some_site/some_location/.
>
> Pavel
Re: [STATUS] (httpd-2.0) Wed Nov 28 23:45:08 EST 2001
In a message dated 11/29/2001 3:23:32 AM Pacific Standard Time, [EMAIL PROTECTED] writes:

> > William A. Rowe, Jr. wrote:
> > > What is the http content-encoding value for this facility?
> >
> > deflate
>
> Ergo, mod_deflate. And the name change from mod_gz to mod_deflate was
> suggested by Roy, who I think knows HTTP better than anyone else here..

Knowing HTTP is one thing... knowing compression formats is another. Does the output of mod_deflate have a GZIP and/or ZLIB header on it, or not? Even those 2 headers are NOT the same, but that's yet another story. If it does... then it's not really 'pure deflate'. If it doesn't... then it MIGHT be pure deflate, but it won't do squat in most legacy and/or modern browsers.

Yours...
Kevin
Re: worker mpm: can we optimize away the listener thread?
On Thursday 29 November 2001 09:20 am, Brian Pane wrote:
> From a performance perspective, the two limitations that I see in the
> current worker implementation are:
>
> * We're basically guaranteed to have to do an extra context switch on
>   each connection, in order to pass the connection from the listener
>   thread to a worker thread.
>
> * The passing of a pool from listener to worker makes it very tricky to
>   optimize away all the mutexes within the pools.
>
> So...please forgive me if this has already been considered and dismissed
> a long time ago, but...why can't the listener and worker be the same
> thread?

That's where we were before worker, with the threaded MPM. There are thread management issues with that model, and it doesn't scale as well.

Ryan

--
Ryan Bloom              [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
Re: [STATUS] (httpd-2.0) Wed Nov 28 23:45:08 EST 2001
In a message dated 11/29/2001 3:23:27 AM Pacific Standard Time, [EMAIL PROTECTED] writes:

> > As described by Ken? Once again, what would he have to do with that?
>
> I just happen to be the chap with the cron job that sends the current
> STATUS file every Wednesday. I don't maintain it; that's a shared
> responsibility (and one of its deficiencies, IMHO, but I've ranted about
> that before :-).

Then even more apologies are due and are hereby given. I thought that the 'final editing' of the auto-broadcast was someone's ongoing task (yours). Then I have no idea who added...

  +1 Cliff (there's now another candidate to be evaluated)

...or why that comment was so ambiguous. I really did 'miss' a message somewhere, and I thought some guy named Cliff (Wooley? Doesn't say) submitted something for consideration as well and mod_gzip was already 'off' the table. This impression was then substantiated when a few people from the mod_gzip forum who were curious about the status of the 'mod_gzip 2.0 submission' queried this forum and got absolutely no reply from anyone.

Doesn't matter now... But is it too much, for future reference, to ask that STATUS file comments be a little more explicit? It would help others track what's really happening there at Apache.

Later...
Kevin
Re: worker mpm: can we optimize away the listener thread?
On Thu, 29 Nov 2001, Brian Pane wrote: Weren't the thread management problems with the threaded MPM related specifically to shutdown? If it's just shutdown that's a problem, it may be possible to solve it. Graceful restart was the big problem. --Cliff -- Cliff Woolley [EMAIL PROTECTED] Charlottesville, VA
Re: worker mpm: can we optimize away the listener thread?
Aaron Bannert wrote:
> On Thu, Nov 29, 2001 at 09:20:48AM -0800, Brian Pane wrote:
> > From a performance perspective, the two limitations that I see in the
> > current worker implementation are:
> >
> > * We're basically guaranteed to have to do an extra context switch on
> >   each connection, in order to pass the connection from the listener
> >   thread to a worker thread.
> >
> > * The passing of a pool from listener to worker makes it very tricky
> >   to optimize away all the mutexes within the pools.
>
> IIRC, the problem isn't so much the fact that pools may be passed
> around, since in that respect they are already threadsafe without
> mutexes (at least in the current CVS and in the recent time-space
> tradeoff patch). I believe the actual problem as you have described it
> to me is how destroying a pool requires that the parent be locked.
> Perhaps you can better characterize the problem.

Right--the fact that the transaction pools are children of a pool owned by the listener thread means that we have to do locking when we destroy a transaction pool (to avoid race conditions when unregistering it from its parent).

--Brian
Re: worker mpm: can we optimize away the listener thread?
On Thu, Nov 29, 2001 at 09:31:01AM -0800, Ryan Bloom wrote:
> On Thursday 29 November 2001 09:20 am, Brian Pane wrote:
> > So...please forgive me if this has already been considered and
> > dismissed a long time ago, but...why can't the listener and worker be
> > the same thread?
>
> That's where we were before worker, with the threaded MPM. There are
> thread management issues with that model, and it doesn't scale as well.

Not exactly; in Brian's model we still have the benefit of only having one thread per process in the accept loop at one time, which means significantly reduced overhead from lock contention (remember my posts a few months back about how terrible fcntl() gets when there are even more than just a few threads/processes contending for the lock?).

Thread management (at shutdown) has always been a problem in our threaded MPMs. I'm still not completely comfortable with the current state of worker, but that has more to do with signals than threads.

-aaron
Re: worker mpm: can we optimize away the listener thread?
On Thursday 29 November 2001 09:48 am, Aaron Bannert wrote:
> On Thu, Nov 29, 2001 at 09:31:01AM -0800, Ryan Bloom wrote:
> > On Thursday 29 November 2001 09:20 am, Brian Pane wrote:
> > > So...please forgive me if this has already been considered and
> > > dismissed a long time ago, but...why can't the listener and worker
> > > be the same thread?
> >
> > That's where we were before worker, with the threaded MPM. There are
> > thread management issues with that model, and it doesn't scale as
> > well.
>
> Not exactly; in Brian's model we still have the benefit of only having
> one thread per process in the accept loop at one time, which means
> significantly reduced overhead from lock contention (remember my posts a
> few months back about how terrible fcntl() gets when there are even more
> than just a few threads/processes contending for the lock?).
>
> Thread management (at shutdown) has always been a problem in our
> threaded MPMs. I'm still not completely comfortable with the current
> state of worker, but that has more to do with signals than threads.

The model that Brian posted is exactly what we used to do with threaded, if you had multiple ports. In fact, it was the very first implementation of threaded, where we always did multiple locks.

Ryan

--
Ryan Bloom              [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
Re: worker mpm: can we optimize away the listener thread?
On Thursday 29 November 2001 09:41 am, Brian Pane wrote:
> Ryan Bloom wrote:
> > On Thursday 29 November 2001 09:20 am, Brian Pane wrote:
> > > From a performance perspective, the two limitations that I see in
> > > the current worker implementation are:
> > >
> > > * We're basically guaranteed to have to do an extra context switch
> > >   on each connection, in order to pass the connection from the
> > >   listener thread to a worker thread.
> > >
> > > * The passing of a pool from listener to worker makes it very
> > >   tricky to optimize away all the mutexes within the pools.
> > >
> > > So...please forgive me if this has already been considered and
> > > dismissed a long time ago, but...why can't the listener and worker
> > > be the same thread?
> >
> > That's where we were before worker, with the threaded MPM. There are
> > thread management issues with that model, and it doesn't scale as
> > well.
>
> Weren't the thread management problems with the threaded MPM related
> specifically to shutdown? If it's just shutdown that's a problem, it may
> be possible to solve it.

The problem is that without a master thread to manage the other threads, things start to fall apart. Shutdown and restart both didn't really work well.

--
Ryan Bloom              [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
Re: worker mpm: can we optimize away the listener thread?
On Thursday 29 November 2001 09:45 am, Brian Pane wrote:
> Aaron Bannert wrote:
> > On Thu, Nov 29, 2001 at 09:20:48AM -0800, Brian Pane wrote:
> > > From a performance perspective, the two limitations that I see in
> > > the current worker implementation are:
> > >
> > > * We're basically guaranteed to have to do an extra context switch
> > >   on each connection, in order to pass the connection from the
> > >   listener thread to a worker thread.
> > >
> > > * The passing of a pool from listener to worker makes it very
> > >   tricky to optimize away all the mutexes within the pools.
> >
> > IIRC, the problem isn't so much the fact that pools may be passed
> > around, since in that respect they are already threadsafe without
> > mutexes (at least in the current CVS and in the recent time-space
> > tradeoff patch). I believe the actual problem as you have described it
> > to me is how destroying a pool requires that the parent be locked.
> > Perhaps you can better characterize the problem.
>
> Right--the fact that the transaction pools are children of a pool owned
> by the listener thread means that we have to do locking when we destroy
> a transaction pool (to avoid race conditions when unregistering it from
> its parent).

I'm not sure this will fix the problem, but can't we fix this with one more level of indirection? Basically, the listener thread would have a pool of pools, one per thread. Instead of creating the transaction pool off of the main listener pool, it creates the pool of pools off the listener pool, and the transaction pool off the next pool in the pool of pools.

  listener
    +-- sub-pool1 --- trans-pool
    +-- sub-pool2 --- trans-pool
    +-- sub-pool3 --- trans-pool

The sub-pools are long-lived, as long as the listener pool. By moving the trans-pool to under the sub-pool, and only allowing one trans-pool off each sub-pool at a time, we remove the race condition, and can remove the lock.

Ryan

--
Ryan Bloom              [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
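The pool-of-pools layout above can be modeled with a toy allocator to show why the lock disappears: each worker owns one long-lived sub-pool, and the transaction pool registers with that sub-pool instead of the shared listener pool, so unregistering on destroy never touches shared state. This is an illustrative model, not real APR code; all names here are made up.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy pool: tracks only its parent and a single child slot.
 * Real APR pools keep a sibling list, and that shared list is what
 * forces locking when a child unregisters from its parent.  With one
 * worker-private sub-pool per thread, the slot below is only ever
 * touched by its owning worker, so no mutex is needed. */
typedef struct toy_pool {
    struct toy_pool *parent;
    struct toy_pool *child;   /* at most one trans-pool at a time */
} toy_pool;

static toy_pool *toy_pool_create(toy_pool *parent)
{
    toy_pool *p = calloc(1, sizeof(*p));
    p->parent = parent;
    if (parent)
        parent->child = p;    /* registration: owner-only access */
    return p;
}

static void toy_pool_destroy(toy_pool *p)
{
    if (p->parent)
        p->parent->child = NULL;  /* unregister: sub-pool is private */
    free(p);
}
```

Because each sub-pool is touched by exactly one worker thread, the child-slot update in toy_pool_destroy needs no lock; only the one-time creation of the sub-pools off the listener pool has to be serialized.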
Re: worker mpm: can we optimize away the listener thread?
Ryan Bloom wrote: The model that Brian posted is exactly what we used to do with threaded, if you had multiple ports. No, you're missing a key difference. There's no intra-process mutex in Brian's MPM. One thread at a time is chosen to be the accept thread without using a mutex. Once it picks off a new connection/pod byte, it chooses the next accept thread and wakes it up before getting bogged down in serving the request. Greg
Re: worker mpm: can we optimize away the listener thread?
On Thu, Nov 29, 2001 at 09:59:10AM -0800, Ryan Bloom wrote:
> On Thursday 29 November 2001 09:45 am, Brian Pane wrote:
> > Right--the fact that the transaction pools are children of a pool
> > owned by the listener thread means that we have to do locking when we
> > destroy a transaction pool (to avoid race conditions when
> > unregistering it from its parent).
>
> I'm not sure this will fix the problem, but can't we fix this with one
> more level of indirection? Basically, the listener thread would have a
> pool of pools, one per thread. Instead of creating the transaction pool
> off of the main listener pool, it creates the pool of pools off the
> listener pool, and the transaction pool off the next pool in the pool
> of pools.
>
>   listener
>     +-- sub-pool1 --- trans-pool
>     +-- sub-pool2 --- trans-pool
>     +-- sub-pool3 --- trans-pool
>
> The sub-pools are long-lived, as long as the listener pool. By moving
> the trans-pool to under the sub-pool, and only allowing one trans-pool
> off each sub-pool at a time, we remove the race condition, and can
> remove the lock.

That also opens us up for the possibility of pool reuse, as well as cache-hit performance. We implement the pool of pools as a simple mutex-protected stack with a set number of pools equal to the number of threads (or maybe one extra). Then we make sure that the worker clears and returns the pool before re-entering the worker queue. We don't get the benefit of reusing the most recent thread, which would give us cache hits on that thread's stack segment, but we do get hits on the pools, and we even reduce overall memory consumption, since the pools that grow to meet the demand are reused (and we don't force all pools to grow to meet the peak demand). Interesting.

-aaron
Re: [PATCH] apache-1.3/src/os/netware/os.c
Brad,

I've tested my patch on a NW5.1 SP3+ box with the latest (Beta) patches installed (011022n5 = wsock4f, ...). I'm not running other versions/configurations of NetWare such as NW6, so I can't test it here.

I'm not absolutely sure if WSAIoctl(..., SO_SSL_GET_FLAGS, ...) is 100% okay. I have no documentation, but I am wondering why zero is returned in lpcbBytesReturned (NULL pointer passed in my patch)... Anyway, the value returned by this call in lpvOutBuffer seems to be correct (at least on my server).

Additionally to this fix, I am pretty sure that we need a more complex ap_is_default_port()/ap_default_port() macro/function. I think that redirection from https://my_site/location to https://my_site:443/location/ is inaccurate (Apache on Linux with mod_ssl doesn't work this way). Port 443 should be considered the default for the https scheme.

I am also experiencing another issue -- if accessing the server from our local domain and omitting the domain name in the URL:

  http(s)://my_server_without_a_domain_name/location
    -> http(s)://my_server_domain_name/location/

I can't understand why the Location returned in the redirection response (code 301) is not constructed from the incoming URI (a trailing slash added), but from the (virtual) server's configuration parameters. This is not NetWare-specific behaviour, and mod_dir (and so on) is responsible for this. However, I'm not too familiar with the W3C specification...

Regards,
Pavel

Brad Nicholes wrote:
> Pavel,
>    Your patch looks good. It looks like a much cleaner solution. What
> version of Winsock have you tested this patch with? Did you try it on
> NW6? As soon as I get some time to implement it and test it myself, I
> will get it checked in.
>
> thanks,
> Brad
>
> [EMAIL PROTECTED] Wednesday, November 28, 2001 7:50:57 AM
> > Hi, attached patch to fix invalid redirections from
> > https://some_site/some_location to http://some_site/some_location/.
> >
> > Pavel
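The default-port half of Pavel's point boils down to a small check: a redirect should omit the port whenever it matches the scheme's well-known default, so https://my_site/location redirects to https://my_site/location/ and not https://my_site:443/location/. The function names below echo the ap_default_port()/ap_is_default_port() pair he mentions, but the bodies are an illustrative sketch, not Apache's implementation.

```c
#include <assert.h>
#include <string.h>

/* Well-known default ports (RFC 2616 for http, RFC 2818 for https). */
static unsigned default_port(const char *scheme)
{
    if (strcmp(scheme, "https") == 0)
        return 443;
    if (strcmp(scheme, "http") == 0)
        return 80;
    return 0;  /* unknown scheme: no default */
}

/* A Location: header builder would call this and drop the ":port"
 * part of the URL whenever it returns true. */
static int is_default_port(unsigned port, const char *scheme)
{
    return default_port(scheme) == port;
}
```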
Re: [STATUS] (httpd-2.0) Wed Nov 28 23:45:08 EST 2001
> > And the name change from mod_gz to mod_deflate was suggested by Roy,
> > who I think knows HTTP better than anyone else here..
>
> Knowing HTTP is one thing... knowing compression formats is another.

Heh, that's amusing.

> Does the output of mod_deflate have a GZIP and/or ZLIB header on it, or
> not? Even those 2 headers are NOT the same, but that's yet another
> story.

Correct. deflate is the algorithm. deflate + gzip header is gzip. The module should be capable of producing both, if not now then eventually.

Roy
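The distinction Roy draws is visible in the first two bytes of the stream: per RFC 1952 a gzip stream starts with the magic bytes 0x1f 0x8b; per RFC 1950 a zlib stream starts with a CMF/FLG pair whose low CMF nibble is 8 (deflate) and whose 16-bit value is divisible by 31; raw deflate has no wrapper at all. A small classifier, offered as a sketch:

```c
#include <assert.h>

/* Which wrapper, if any, surrounds a deflate stream, judged from its
 * first two bytes:
 *   gzip (RFC 1952): magic bytes 0x1f 0x8b
 *   zlib (RFC 1950): CM (low nibble of byte 0) == 8 for deflate, and
 *                    (byte0 * 256 + byte1) % 31 == 0
 *   raw deflate:     neither wrapper */
enum wrapper { RAW_DEFLATE, ZLIB_WRAPPED, GZIP_WRAPPED };

enum wrapper classify(unsigned char b0, unsigned char b1)
{
    if (b0 == 0x1f && b1 == 0x8b)
        return GZIP_WRAPPED;                  /* gzip magic */
    if ((b0 & 0x0f) == 8 && ((b0 << 8) | b1) % 31 == 0)
        return ZLIB_WRAPPED;                  /* valid zlib CMF/FLG */
    return RAW_DEFLATE;
}
```

With the zlib library itself, the choice among the three is the windowBits argument to deflateInit2(): positive values produce the zlib wrapper, negative values produce raw deflate, and gzip wrapping was added later (windowBits + 16, in zlib 1.2).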
Does Apache require child processes to die on a SIGHUP?
I've got a piped logfile program I wrote to handle my logfiles, and someone using it on Solaris said that when they try to restart Apache, it hangs waiting for the piped program to terminate. Last time I checked, Apache sends a SIGHUP and then a SIGTERM to all child processes. The program calls exit() for a SIGTERM, and on Linux it seems to exit and let Apache restart properly; however, on Solaris it seems that it doesn't die properly, and you have to send a SIGKILL to get it to die and let Apache restart. We were able to fix this problem by having the program call exit() when it receives a SIGHUP too, but is this what Apache is expecting? Should it die on a SIGHUP? If not, any idea why the child process would be hanging on a Solaris system?

Eli.
ISAPICacheFile - Crashes apache Please help
Hi, I have written an ISAPI extension (which is basically a DLL) which does some DB query stuff. I want this extension to be loaded in memory as long as Apache is running. In the new Apache release (2.0) this particular feature has been implemented, via the directive ISAPICacheFile. When I make an entry in the httpd.conf file and start Apache, Apache crashes. I read somewhere that Apache tries to read the httpd.conf file twice, so eventually it tries to load the DLL twice, and that is where it fails. So, I am wondering whether I am making some stupid mistake or it's a known bug which is yet to be fixed. I greatly appreciate any input/solutions for this problem.

Thanks in advance,
Greg
Re: support for multiple tcp/udp ports
On Thursday 29 November 2001 12:12 pm, Michal Szymaniak wrote:
> Hello again.
>
> > > It is now possible to write a module that will make Apache listen
> > > on UDP ports.
> >
> > However, as somebody who has done this in the past, it's not a good
> > idea. You lose too much data on every request.
>
> Could you explain what you mean by saying 'you lose data'? Is it losing
> data because of lack of reliability in udp, or missing datagrams that
> arrive to your udp socket and are subsequently overwritten by next ones
> before you manage to service them?

I am assuming it was because of the reliability of udp, but we ran out of time on the project before we figured out exactly what was happening, and I never got back to it.

> Anyway, I have tried to modify the echo module to manage additional
> sockets: I added a post_config hook that created new sockets together
> with associated apr_listen_rec structures and then simply inserted them
> into the 'ap_listeners' list. As long as the sockets were tcp-oriented,
> everything was just fine. However, after switching from SOCK_STREAM to
> SOCK_DGRAM, apache exited with a critical error, leaving (in
> 'error_log') a few lines about 'invalid operations on non-tcp socket'.

You are adding the sockets too early. There are two ways to handle this:

1) Use the pre_mpm hook instead of post_config.
2) We need a new hook.

If you look at the worker MPM, you will see that it actually adds a pipe to the listen_rec list, but it doesn't use a hook to do it. Can you modify your code to use the pre_mpm hook, and let me know if that works? Even if it does, we may need a new hook, because the pre_mpm hook doesn't get called for graceful restarts.

Ryan

--
Ryan Bloom              [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
Re: Request for Patch to 1.3.x
It's kinda crufty, but so are a lot of other things in 1.3. It is a small patch, which is goodness, and I appreciate what it is used for. If it is useful enough for you to be still interested in it after a month, I'll add my +1 to Greg's :-)

+1

Bill

----- Original Message -----
From: Kevin Mallory [EMAIL PROTECTED]
To: 'Greg Stein' [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, November 29, 2001 4:35 PM
Subject: RE: Request for Patch to 1.3.x

> Does anyone have any objections to adding this capability??
>
> -----Original Message-----
> From: Greg Stein [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, October 28, 2001 10:28 PM
> To: [EMAIL PROTECTED]
> Cc: Kevin Mallory
> Subject: Re: Request for Patch to 1.3.x
>
> > On Wed, Oct 03, 2001 at 11:19:34AM -0700, Kevin Mallory wrote:
> > ...
> > [ patch allows custom caching mechanisms ]
> > ...
> > The patch simply adds a new callback (the 'filter callback') into the
> > handling in buff.c's routines writev_it_all() and buff_write(). When
> > not registered, there is no performance impact to users. This filter
> > callback makes it possible for SpiderCache to correctly intercept the
> > request as it is being processed, thus allowing our product to
> > perform dynamic page caching.
>
> +1 on including this patch. I see no bad effects, and it has definite
> utility.
>
> Cheers,
> -g
> ...
*** orig_buff.c Tue Aug 21 17:45:34 2001
--- spidercache_buff.c  Tue Aug 21 17:45:35 2001
***************
*** 356,361 ****
--- 356,365 ----
  {
      int rv;

+     if (fb->filter_callback != NULL) {
+         fb->filter_callback(fb, buf, nbyte);
+     }
+
  #if defined(WIN32) || defined(NETWARE)
      if (fb->flags & B_SOCKET) {
          rv = sendwithtimeout(fb->fd, buf, nbyte, 0);
***************
*** 438,443 ****
--- 442,450 ----
          (size_t) SF_UNBOUND, 1, SF_WRITE);
  #endif

+     fb->callback_data = NULL;
+     fb->filter_callback = NULL;
+
      return fb;
  }
***************
*** 1077,1082 ****
--- 1084,1095 ----
  static int writev_it_all(BUFF *fb, struct iovec *vec, int nvec)
  {
      int i, rv;
+
+     if (fb->filter_callback != NULL) {
+         for (i = 0; i < nvec; i++) {
+             fb->filter_callback(fb, vec[i].iov_base, vec[i].iov_len);
+         }
+     }

      /* while it's nice an easy to build the vector and crud, it's painful
       * to deal with a partial writev()

*** orig_buff.h Tue Aug 21 17:45:34 2001
--- spidercache_buff.h  Tue Aug 21 17:45:35 2001
***************
*** 129,134 ****
--- 129,138 ----
      Sfio_t *sf_in;
      Sfio_t *sf_out;
  #endif
+
+     void *callback_data;
+     void (*filter_callback)(BUFF *, const void *, int);
+
  };

  #ifdef B_SFIO

--
Greg Stein, http://www.lyra.org/
mod_rewrite and location directives.
hi. I was just wondering if anyone knows why rewrite won't allow a subrequest to work on a directory rewrite rule. It looks like the code has been in there forever... here's the code fragment I'm talking about:

static int hook_fixup(request_rec *r)
{
    rewrite_perdir_conf *dconf;
    char *cp;
    char *cp2;
    const char *ccp;
    char *prefix;
    int l;
    int rulestatus;
    int n;
    char *ofilename;

    dconf = (rewrite_perdir_conf *)ap_get_module_config(r->per_dir_config,
                                                        &rewrite_module);

    /* if there is no per-dir config we return immediately */
    if (dconf == NULL) {
        return DECLINED;
    }

    /* we shouldn't do anything in subrequests */
    if (r->main != NULL) {
        return DECLINED;
    }
Mod_cgi doesn't seem to write stderr to the error_log
I haven't had time to verify it myself, but I have been told that it is happening. Mod_cgi is not actually writing error messages from the script to the error_log. I would consider this a major showstopper!

Ryan

______________________________________________________________
Ryan Bloom [EMAIL PROTECTED]
Covalent Technologies [EMAIL PROTECTED]
--
Re: worker mpm: can we optimize away the listener thread?
Brian Pane wrote:

> From a performance perspective, the two limitations that I see in the
> current worker implementation are:
>
> * We're basically guaranteed to have to do an extra context switch on
>   each connection, in order to pass the connection from the listener
>   thread to a worker thread.
>
> * The passing of a pool from listener to worker makes it very tricky
>   to optimize away all the mutexes within the pools.

Brian, this sounds similar to the 'Leader/Follower' pattern in the ACE framework (http://jerry.cs.uiuc.edu/~plop/plop2k/proceedings/ORyan/ORyan.pdf)

> So... please forgive me if this has already been considered and dismissed
> a long time ago, but... why can't the listener and worker be the same
> thread? I'm thinking of a design like this:
>
> * There's no dedicated listener thread.
> * Each worker thread does this:
>
>     while (!time to shut down) {
>         wait for another worker to wake me up;
>         if (in shutdown) {
>             exit this thread;
>         }
>         accept on listen mutex or pipe of death;
>         if (pipe of death triggered) {
>             set "in shutdown" flag;
>             wake up all the other workers;
>             exit this thread;
>         }
>         else {
>             pick another worker and wake it up;
>             handle the connection that I just accepted;
>         }
>     }
>
> --Brian
Re: Mod_cgi doesn't seem to write stderr to the error_log
From: Ryan Bloom [EMAIL PROTECTED]
Sent: Thursday, November 29, 2001 7:18 PM

> I haven't had time to verify it myself, but I have been told that it is
> happening. Mod_cgi is not actually writing error messages from the script
> to the error_log. I would consider this a major showstopper!

Definitely proven to myself that it's _working_ on HEAD. Perhaps it is in conjunction with suexec, specifically? Or something else I can't verify, like mod_cgid.

Bill
Re: Mod_cgi doesn't seem to write stderr to the error_log
On Thu, Nov 29, 2001 at 05:18:00PM -0800, Ryan Bloom wrote:

> I haven't had time to verify it myself, but I have been told that it is
> happening. Mod_cgi is not actually writing error messages from the script
> to the error_log. I would consider this a major showstopper!

It's working fine for me with both mod_cgi and mod_cgid. The only difference is that mod_cgid isn't prefixing any log metadata:

prefork+mod_cgi:
    [Thu Nov 29 18:00:41 2001] [error] [client 10.250.1.5] foo

worker+mod_cgid:
    foo

Same thing with and without suexec enabled.

-aaron
CL for Proxy Requests
Content-Length is not passed through proxy requests when Apache 2.0 is used as the proxy. Is it a bug? A feature? A limitation? Or is it just me, or my configuration? Many clients (for example, audio/video players) depend on this header, so lacking the C-L is quite bad. Is there any way to tell the API that the filters don't change the response size, so the original C-L can be used?

--
Eli Marmor
[EMAIL PROTECTED]
CTO, Founder
Netmask (El-Mar) Internet Technologies Ltd.
Tel.:   +972-9-766-1020      8 Yad-Harutzim St.
Fax.:   +972-9-766-1314      P.O.B. 7004
Mobile: +972-50-23-7338      Kfar-Saba 44641, Israel
Re: CL for Proxy Requests
On Thursday 29 November 2001 08:01 pm, Eli Marmor wrote:

> Content-Length is not passed through proxy requests, when Apache 2.0 is
> used as the proxy. Is it a bug? Feature? Limitation? Or is it just me? My
> configuration? Many clients depend on this data, for example audio/video
> players, so it is quite bad to lack CL. Is there any way to tell the API
> that the filters don't change the response size so the original CL can be
> used?

There is no way to do that, because you will never know if filters changed the data or not. The reason we don't return a C-L is that we don't have all of the data, so we can't compute the C-L.

There is a possibility that we could fix this with a hack. Basically, have the C-L filter check whether the only bucket is a socket bucket or a pipe bucket. If so, leave the C-L alone; we can be sure that the data hasn't been changed in that case. If the only bucket is any other type, we will automagically compute the C-L.

Ryan

______________________________________________________________
Ryan Bloom [EMAIL PROTECTED]
Covalent Technologies [EMAIL PROTECTED]
--