[RELEASE CANDIDATE] Apache-Test-1.13
a release candidate for Apache-Test 1.13 is now available.

http://perl.apache.org/~geoff/Apache-Test-1.13-dev.tar.gz

please take the time to exercise the candidate through all your existing applications that use Apache-Test and report back successes or failures.

--Geoff

Changes since 1.12:

- the have() function was removed entirely - use need() instead. [Geoffrey Young]

- add need() and need_* variant functions (need_module(), need_apache(), etc.) for use specifically with plan() to decide whether or not a test should run. have_* variants (have_module(), have_apache(), etc.) are now specifically for use outside of plan(), although they can continue to be used within plan() without fear of breaking current tests. [Geoffrey Young]

- add need_php() and have_php(), which return true when either mod_php4 or mod_php5 is available, providing functionality similar to need_cgi() and have_cgi(). [Geoffrey Young]

- add an APACHE_TEST_EXTRA_ARGS make variable to all invocations of t/TEST, to allow passing extra arguments from the command line. [Gozer]

- when APACHE_TEST_NO_STICKY_PREFERENCES=1 is used, don't even try to interactively configure the server; since we don't save any config, it was entering an infinite loop. [Stas]

- if a directory t/lib exists where the tests are run from, adjust @INC so that this directory is added when running the tests, both within t/TEST and within t/conf/modperl_inc.pl. This allows inclusion of modules specific to the tests that aren't intended to be installed. [Stas, Randy]

- make a special case for threaded MPM configuration to ensure that, unless maxclients was specified, MaxClients will be exactly twice ThreadsPerChild (minclients); if we don't do that, Apache will reduce MaxClients to the same value as ThreadsPerChild. [Stas]

- renamed generate_test_script() to generate_script() in Apache::TestMB to match the naming convention used in Apache::TestMM and elsewhere. [David]

- Apache::TestMB now only prints the "Generating test running script" message if verbosity is enabled (e.g., by passing --verbose when executing Build.PL). [David]

- fixed the requests_redirectable parameter to Apache::TestRequest::user_agent() so that it works as documented when passed a negative value. [Boris Zentner]

- documented support for passing an array reference in the requests_redirectable parameter to Apache::TestRequest::user_agent(), to be passed through to LWP::UserAgent if LWP is installed. [David]
Re: [RELEASE CANDIDATE] Apache-Test-1.13
Geoffrey Young wrote: a release candidate for Apache-Test 1.13 is now available. http://perl.apache.org/~geoff/Apache-Test-1.13-dev.tar.gz please take the time to exercise the candidate through all your existing applications that use Apache-Test and report back successes or failures.

All tests OK on:

OpenBSD 3.5 (httpd-1 and httpd-2)
Fedora Core 2

-- Philippe M. Chiasson m/gozer\@(apache|cpan|ectoplasm)\.org/ GPG KeyID : 88C3A5A5 http://gozer.ectoplasm.org/ F9BF E0C2 480E 7680 1AE5 3631 CB32 A107 88C3A5A5
mod_cache performance
--On Monday, August 2, 2004 2:49 PM -0400 Bill Stoddard [EMAIL PROTECTED] wrote: To get mod_cache/mod_mem_cache (I know little or nothing about mod_disk_cache) really performing competitively against best-of-breed caches will require bypassing output filters (and prebuilding headers) and possibly

Here are some comparative numbers to chew on. One client and one server on a 100Mbps network (cheap 100Base-T switch); 50 simulated users hitting 7 URLs 100 times with flood (35,000 requests).

mod_disk_cache:  Requests: 35000  Time: 40.91   Req/Sec: 856.78
mod_mem_cache:   Requests: 35000  Time: 54.90   Req/Sec: 637.81
no cache:        Requests: 35000  Time: 54.86   Req/Sec: 638.81
squid:           Requests: 35000  Time: 105.35  Req/Sec: 332.25

mod_disk_cache completely filled out the network at ~50% CPU usage. [Can't push through more than ~8MB/sec (~64Mb/sec) without GigE.]

mod_mem_cache filled up the CPU but not the network. [Poor scaling characteristics. It goes to 100% CPU with just 5 users!]

No caching was better CPU-wise (less utilization) than mod_mem_cache. [Still not as good network- or CPU-wise as mod_disk_cache.]

squid was really inefficient both CPU- and network-wise.

The squid numbers *completely* baffle me. I have to believe I've got something stupid configured in squid (or I did something stupid with flood; but the network traces and truss output convince me otherwise). My squid is the default RHEL3 installation (Squid Cache: Version 2.5.STABLE3). squid and httpd are on the same box - I may try to move squid to another box - will see if I have time tomorrow to find a suitable target to move to...

For those playing along at home, I am hitting the following URLs with flood:

/
/apache_pb.gif
/manual/
/manual/images/left.gif
/manual/images/feather.gif
/manual/content-negotiation.html
/icons/

HTH. -- justin
Re: Re^4: [patch] perchild.c
On Tue, 3 Aug 2004 11:45:51 +0900 (JST), Tsuyoshi SASAMOTO [EMAIL PROTECTED] wrote: attached patch looks a bit simpler; does it look okay to you? Yes, it looks good and smart.

# I wonder about the intention of the original code `if (!body)`;

if we received a message, it will have the '\0' in it (or there is a bug in the send logic), so I'm not worried about the original check for if (!body)...

# in what case could it occur... recvmsg() could fail?

looks like pass_request() is what does the sendmsg(), yet there is an error path in there -- when apr_brigade_flatten() fails; on this error path, the connection to the recvmsg() loop would be dropped with no sendmsg(), so recvmsg() would return a failure

# If so, rather the return value of the recvmsg() should be checked...

another thing I wonder about with regard to this AF_UNIX/SOCK_STREAM logic: SOCK_STREAM isn't normally message oriented, but there is no logic on the receive side to handle a partial read; when will that blow up? I suspect (but haven't done the testing/research) that the only thing we can assume is sent over intact is the file descriptor that is getting passed, and the receiver should be prepared to continue reading after calling recvmsg()

also, why send over the request body with this initial message? how would the request body get filtered? seems like we should let the process which is going to handle this request read the request body through the input filter chain
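The fd-passing pattern under discussion can be sketched standalone (a minimal POSIX example; all names here are illustrative, not perchild's). It shows the two points from the thread: a failed/closed peer surfaces as a recvmsg() error that must be checked, and the byte payload is an ordinary stream, so the receiver must be prepared to keep reading after a short recvmsg() even though the descriptor itself arrives intact in the control data.

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <string.h>
#include <unistd.h>

/* Send one file descriptor plus a fixed-length payload over an
 * AF_UNIX/SOCK_STREAM socket using SCM_RIGHTS ancillary data. */
static int send_fd(int sock, int fd, const char *buf, size_t len)
{
    struct msghdr msg;
    struct iovec iov;
    char cbuf[CMSG_SPACE(sizeof(int))];
    struct cmsghdr *cmsg;

    memset(&msg, 0, sizeof(msg));
    iov.iov_base = (void *)buf;
    iov.iov_len = len;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);
    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
    return sendmsg(sock, &msg, 0) == (ssize_t)len ? 0 : -1;
}

/* Receive the descriptor and the full payload. */
static int recv_fd(int sock, int *fd, char *buf, size_t len)
{
    struct msghdr msg;
    struct iovec iov;
    char cbuf[CMSG_SPACE(sizeof(int))];
    struct cmsghdr *cmsg;
    ssize_t n;
    size_t got;

    memset(&msg, 0, sizeof(msg));
    iov.iov_base = buf;
    iov.iov_len = len;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);
    n = recvmsg(sock, &msg, 0);
    if (n <= 0)
        return -1;  /* peer dropped the connection with no sendmsg() */
    got = (size_t)n;
    cmsg = CMSG_FIRSTHDR(&msg);
    if (cmsg == NULL || cmsg->cmsg_level != SOL_SOCKET
        || cmsg->cmsg_type != SCM_RIGHTS)
        return -1;
    memcpy(fd, CMSG_DATA(cmsg), sizeof(int));
    /* SOCK_STREAM is not message oriented: if recvmsg() returned
     * short, finish the payload with plain reads. */
    while (got < len) {
        n = read(sock, buf + got, len - got);
        if (n <= 0)
            return -1;
        got += (size_t)n;
    }
    return 0;
}
```

A receiver that assumed the whole payload arrived with the descriptor would work most of the time and fail under load, which is exactly the "when will that blow up?" concern above.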
Re: mod_cache performance
Justin Erenkrantz wrote: squid was really inefficient both CPU and network-wise. Under load, squid will always use 100% of the CPU. This is because it uses poll/select. The squid numbers *completely* baffle me. I have to believe I've got something stupid configured in squid (or I did something stupid with flood; but the network traces and truss output convince me otherwise). My squid is using the default RHEL3 installation (Squid Cache: Version 2.5.STABLE3). RHEL 3 sucks. Fedora Core 2 would have been a much better choice. Also, did you use poll? I know a large website that does several dozen hits per day using squid :) On an OS that supports sendfile, a disk based cache will almost always bury a memory based one. -- Brian Akins Senior Systems Engineer CNN Internet Technologies
Re: [PATCH] mod_cache fixes: #8
Bill Stoddard wrote: Please, no more specialized knobs which 99.9% of the world cares nothing about. How do you define that percentage? By domains? In that case 99.999% probably care nothing about what we are doing. If you look at total traffic, however, would not options that help the top 10% be a good thing? Also, if the defaults are reasonable, what difference does it make? -- Brian Akins Senior Systems Engineer CNN Internet Technologies
Re: [PATCH] event driven MPM
Greg Ames wrote: Bill Stoddard created an event driven socket I/O patch a couple of years ago that could serve pages. I picked it up and decided to see if I could simplify it to minimize the changes to request processing. What's the status of this? I'd be willing to help if needed. We are interested in this. -- Brian Akins Senior Systems Engineer CNN Internet Technologies
Re: mod_cache performance
Brian Akins wrote: On an OS that supports sendfile, a disk based cache will almost always bury a memory based one. Quite probably. But on a system without a disk, chances are it won't. :( Regards, Graham
Re: mod_cache performance
Graham Leggett wrote: Brian Akins wrote: On an OS that supports sendfile, a disk based cache will almost always bury a memory based one. Quite probably. But on a system without a disk, chances are it won't. :( It will. Unless mod_disk_cache + ram-disk + sendfile doesn't outperform mod_mem_cache. -- Eli Marmor [EMAIL PROTECTED] CTO, Founder Netmask (El-Mar) Internet Technologies Ltd. __ Tel.: +972-9-766-1020 8 Yad-Harutzim St. Fax.: +972-9-766-1314 P.O.B. 7004 Mobile: +972-50-23-7338 Kfar-Saba 44641, Israel
Re: mod_cache performance
Eli Marmor wrote: Graham Leggett wrote: Brian Akins wrote: On an OS that supports sendfile, a disk based cache will almost always bury a memory based one. Quite probably. But on a system without a disk, chances are it won't. :( It will. Unless mod_disk_cache + ram-disk + sendfile doesn't outperform mod_mem_cache.

This setup performs quite nicely on Linux. The big hits for mem cache are:

- The cache is not shared between processes, so you use a lot more memory and get a lot fewer hits.
- You have to copy data from user space to kernel space, which can be a huge hit.

Even without sendfile, mmap is generally faster than mem cache. -- Brian Akins Senior Systems Engineer CNN Internet Technologies
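The user-to-kernel copy being discussed is the core difference between the two serving paths. A rough standalone sketch (Linux-specific sendfile(); the helper names are made up for illustration and are not httpd's):

```c
#include <sys/types.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

/* mem-cache style: each buffer is copied from user space into the
 * kernel by write(). */
static ssize_t serve_from_memory(int out_fd, const char *cached, size_t len)
{
    size_t off = 0;
    while (off < len) {
        ssize_t n = write(out_fd, cached + off, len - off);
        if (n <= 0)
            return -1;
        off += (size_t)n;
    }
    return (ssize_t)off;
}

/* disk-cache style: hand the kernel a descriptor and let it move
 * pages straight out of the page cache, no user-space copy. */
static ssize_t serve_from_file(int out_fd, int in_fd)
{
    struct stat st;
    off_t off = 0;
    if (fstat(in_fd, &st) == -1)
        return -1;
    while (off < st.st_size) {
        ssize_t n = sendfile(out_fd, in_fd, &off,
                             (size_t)(st.st_size - off));
        if (n <= 0)
            return -1;
    }
    return (ssize_t)off;
}
```

In 2004 sendfile() required a socket as out_fd; the point either way is that the disk-cache path skips the copy the mem-cache path must pay per request.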
Re: mod_cache performance
Brian Akins wrote: The big hits for mem cache are: The cache is not shared between processes, so you use a lot more memory and get a lot fewer hits. This is true - mem cache would probably improve drastically with a shared memory cache. Regards, Graham
Re: mod_cache performance
Graham Leggett wrote: This is true - mem cache would probably improve drastically with a shared memory cache. Probably not, because you would probably have to lock around it. It just seems it's better to let the filesystem worry about a lot of this stuff (locking, reference counting, etc.). -- Brian Akins Senior Systems Engineer CNN Internet Technologies
Suggestion: log request when it happens
The information contained in the access log can be quite useful in tracking down problems. However, because the log entry is created only after all output has been sent to the client, there is a problem I sometimes run into: if the child process creating the output (like a CGI script) dies unexpectedly with a core dump, there is no access log entry at all. There is usually an error log entry, but that's just the stderr of the problem, which is nowhere near as useful as the actual request info.

Now, obviously, some log info (number of bytes transferred, status code) can't be logged until after the child has sent the data. So what I propose is a new option to log each request as it arrives, in addition to the way it works now. What would actually be most useful is if this was entered into the error log. That way, one could easily see what request produced the given stderr output. As it is right now, any system-level error messages produced by the script are simply appended, without a timestamp or other relevant info. Having the option to add the request log entry would serve as a marker for each bit of stderr output. I did try LogLevel debug, but that doesn't produce the needed info.

Opinions?

-- Dan Wilga [EMAIL PROTECTED] Web Technology Specialist http://www.mtholyoke.edu Mount Holyoke College Tel: 413-538-3027 South Hadley, MA 01075 "Who left the cake out in the rain?"
Re: Suggestion: log request when it happens
Dan Wilga wrote: The information contained in the access log can be quite useful in tracking down problems. However, because the log entry is created only after all output has been sent to the client, there is a problem I sometimes run into... Opinions? see mod_log_forensic, now in the standard distribution. --Geoff
Re: Suggestion: log request when it happens
On Tue, 3 Aug 2004 10:19:23 -0400, Dan Wilga [EMAIL PROTECTED] wrote: The information contained in the access log can be quite useful in tracking down problems. However, because the log entry is created only after all output has been sent to the client, there is a problem I sometimes run into:

try mod_log_forensic, which logs before the request is processed, for correlation with the normal access log when that is unfortunately necessary; mod_log_forensic is in 1.3 and in 2.0

there is also mod_whatkilledus, which doesn't do log-before but does log info about the request in the event of a child process crash; mod_whatkilledus is in 1.3; a version for 2.0 is at http://www.apache.org/~trawick/
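For reference, a minimal mod_log_forensic setup looks something like this (the module path is build-dependent, so treat it as a sketch):

```apache
# mod_log_forensic writes each request twice: a '+<id>' line with the
# full request headers before processing begins, and a bare '-<id>'
# line once the response completes. A '+' with no matching '-' points
# at the request that took the child down.
LoadModule log_forensic_module modules/mod_log_forensic.so
ForensicLog logs/forensic_log
```

The check_forensic script in httpd's support directory can then list the ids that never completed.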
Re: mod_cache performance
Justin Erenkrantz wrote:

mod_disk_cache:  Requests: 35000  Time: 40.91  Req/Sec: 856.78
mod_mem_cache:   Requests: 35000  Time: 54.90  Req/Sec: 637.81
no cache:        Requests: 35000  Time: 54.86  Req/Sec: 638.81

mod_mem_cache filled up the CPU but not the network. [Poor scaling characteristics. It goes to 100% CPU with just 5 users!]

mod_mem_cache is broken then. It used to kick the pants off of 'no cache' and mod_disk_cache.

Bill
Re: mod_cache performance
Bill Stoddard wrote: mod_mem_cache is broken

Or mistuned? Here are the defaults for the mem_cache directives:

MCacheSize               ~100 MB
MCacheMaxObjectCount     1009
MCacheMinObjectSize      0 (bytes)
MCacheMaxObjectSize      1 (bytes)
MCacheRemovalAlgorithm   GDSF
MCacheMaxStreamingBuffer 10 (bytes)

I have no idea if the urls ending in / are being served at all by mod_mem_cache. Wouldn't surprise me if there is a bug there.

Bill
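Given those defaults, a tuning block for a test like this one might look like the following (illustrative values only, not recommendations; CacheEnable comes from mod_cache, the MCache* directives from mod_mem_cache):

```apache
CacheEnable mem /
# Raise the per-object ceiling (bytes) so the manual pages and
# images in the test set actually qualify for caching
MCacheMaxObjectSize 102400
# Total cache size is given in KBytes
MCacheSize 102400
MCacheMaxObjectCount 10000
```

Whether a given URL was actually cached still has to be verified, which is exactly the question raised above about the / URLs.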
Re: mod_cache performance
Bill Stoddard wrote: mod_mem_cache is broken then. It used to kick the pants off of 'no cache' and mod_disk_cache. If mod_disk_cache was patched to use sendfile, it will perform better. -- Brian Akins Senior Systems Engineer CNN Internet Technologies
Re: [PATCH] event driven MPM
On Tue, 2004-08-03 at 08:18 -0400, Brian Akins wrote: Greg Ames wrote: Bill Stoddard created an event driven socket I/O patch a couple of years ago that could serve pages. I picked it up and decided to see if I could simplify it to minimize the changes to request processing. What's the status of this? I'd be willing to help if needed. We are interested in this.

I am interested in it too. I talked to Greg Ames a week ago, and he doesn't have any updates for the patch. I planned on mailing this list about it later this week. I would like to get the patch (or something based off of it) into CVS soon.

My basic question for the list is: are we better off modifying the worker MPM, or should we create a new 'event' MPM for now?

-Paul Querna
Re: mod_cache performance
Bill Stoddard wrote: mod_mem_cache: Requests: 35000 Time: 54.90 Req/Sec: 637.81 no cache: Requests: 35000 Time: 54.86 Req/Sec: 638.81 The above result would suggest that mod_mem_cache isn't being used in this case. It could be that mem cache has decided not to cache the requested file for whatever reason, and it is being served via the normal no-cache path. Regards, Graham
Re: mod_cache performance
Brian Akins wrote: mod_mem_cache is broken then. It used to kick the pants off of 'no cache' and mod_disk_cache. If mod_disk_cache was patched to use sendfile, it will perform better. mem cache and disk cache were created because not every platform performs best using the same techniques. This competition between mem cache and disk cache will hopefully make them both faster, and in turn faster than other caches out there. Regards, Graham
RE: mod_cache performance
: -Original Message- : From: Bill Stoddard [mailto:[EMAIL PROTECTED] [SNIP] : : mod_mem_cache is broken then. It used to kick the pants off : of 'no cache' and mod_disk_cache.

Well, doesn't it depend upon the size of the data set? With 'ab', I guess it's possible that mod_mem_cache can beat mod_disk_cache - but with a dataset like SPECweb99, I'd really doubt it can do it.

BTW, I wonder how mem_cache can significantly out-perform the no-cache scenario - 'cause a good file system should buffer cache the most-accessed files, and there should be minimal perf. difference.

-Madhu
Re: mod_cache performance
--On Tuesday, August 3, 2004 8:11 AM -0400 Brian Akins [EMAIL PROTECTED] wrote: Under load, squid will always use 100% of the CPU. This is because it uses poll/select. Ouch. That sucks. (But, httpd uses poll - so why does that force 100% CPU usage?) RHEL 3 sucks. Fedora Core 2 would have been a much better choice. Also, did you use poll? I know a large website that does several dozen hits per day using squid :) Heh. RHEL3 is the Linux distribution we use within the ASF. (My local box is a mirror of the ASF Linux and FreeBSD setups.) Fedora Core 2 isn't an option. Is it worth compiling my own squid then? (Read that as 'reboot my box to FreeBSD and use the squid port.') On an OS that supports sendfile, a disk based cache will almost always bury a memory based one. Agreed. I don't think it's worth putting a lot of effort into mod_mem_cache. Doing zero-copy is just going to scale better than memory caching. -- justin
Re: mod_cache performance
--On Tuesday, August 3, 2004 9:12 AM -0400 Brian Akins [EMAIL PROTECTED] wrote: Propably not, because you would propably have to lock around it. It just seems it's better to let the filesystem worry about alot of this stuff (locking, reference counting, etc.). +1. =) -- justin
Re: mod_cache performance
--On Tuesday, August 3, 2004 6:50 PM +0200 Graham Leggett [EMAIL PROTECTED] wrote: mod_mem_cache: Requests: 35000 Time: 54.90 Req/Sec: 637.81 no cache: Requests: 35000 Time: 54.86 Req/Sec: 638.81 The above result would suggest that mod_mem_cache isn't being used in this case. It could be that mem cache has decided not to cache the requested file for whatever reason, which is being served via the normal no cache path. It'd help if I compiled mod_mem_cache in. *duck* (We need better error messages when the cache type isn't found! Can't we error out at config time?) Anyway, mod_mem_cache yields (after bumping up MCacheMaxObjectSize to 100k): Requests: 35000 Time: 40.99 Req/Sec: 856.73 That brings it in line with mod_disk_cache in maxing out my network. Time to craft some better tests or find a faster network... -- justin
Re: mod_cache performance
Graham Leggett wrote: mem cache and disk cache were created because not every platform performs best using the same techniques. This competition between mem cache and disk cache will hopefully make them both faster, and in turn faster than other caches out there. True. Competition is good. -- Brian Akins Senior Systems Engineer CNN Internet Technologies
Re: mod_cache performance
Justin Erenkrantz wrote: That brings it in line with mod_disk_cache in maxing out my network. Time to craft some better tests or find a faster network... -- justin I can probably help with the latter :) Can you send me details of your setup and I'll try to test later this week. -- Brian Akins Senior Systems Engineer CNN Internet Technologies
Re: mod_cache performance
Justin Erenkrantz wrote: --On Tuesday, August 3, 2004 8:11 AM -0400 Brian Akins [EMAIL PROTECTED] wrote: Under load, squid will always use 100% of the CPU. This is because it uses poll/select. Ouch. That sucks. (But, httpd uses poll - so why does that force 100% CPU usage?) httpd blocks. Squid doesn't in general. Squid just calls poll over and over and does lots of very small reads and writes. Is it worth compiling my own squid then? (Read that as 'reboot my box to FreeBSD and use the squid port.') Check the configure and make sure you up open files and use poll. Also kill ident checks. -- Brian Akins Senior Systems Engineer CNN Internet Technologies
Re: mod_cache performance
Mathihalli, Madhusudan wrote: BTW, I wonder how mem_cache can significantly out-perform the no-cache scenario - 'cause a good file system should buffer cache the most-accessed files, and there should be minimal perf. difference. What if you're caching CGI, mod_perl, or proxy output? Regards, Graham
Re: mod_cache performance
Hi, Send us your squid.conf and your configure options from when you built it (as well as what squid version), and I can tell you how to optimize it. I've had a lot of practice.. Brian Akins wrote: Justin Erenkrantz wrote: --On Tuesday, August 3, 2004 8:11 AM -0400 Brian Akins [EMAIL PROTECTED] wrote: Under load, squid will always use 100% of the CPU. This is because it uses poll/select. Ouch. That sucks. (But, httpd uses poll - so why does that force 100% CPU usage?) httpd blocks. Squid doesn't in general. Squid just calls poll over and over and does lots of very small reads and writes. Is it worth compiling my own squid then? (Read that as 'reboot my box to FreeBSD and use the squid port.') Check the configure and make sure you up open files and use poll. Also kill ident checks. -- David Nicklay O- Location: CNN Center - SE0811A Office: 404-827-2698Cell: 404-545-6218
Re: mod_cache performance
--On Tuesday, August 3, 2004 2:35 PM -0400 David Nicklay [EMAIL PROTECTED] wrote: Send us your squid.conf and your configure options from when you built it (as well as what squid version), and I can tell you how to optimize it. I've had a lot of practice.. I've posted the squid.conf from RHEL3 at: http://www.ics.uci.edu/~jerenkra/caching/ There is also the output of 'squid -v' in squid-configure. This is just the straight RHEL3 install. I'm open to building from sources as long as someone tells me what config options and squid.conf to use. ;-) At that URL is the proxy.xml flood test case as well. Plus, summary results from flood's analyze-relative report. (mod_mem_cache and mod_disk_cache are maxing out the network now...) Thanks! -- justin
Re: cvs commit: httpd-2.0/modules/aaa mod_auth_digest.c
hmm, I guess this fell off the collective radar. any comments? otherwise, I guess it's good enough and I'll just commit it to both 2.0 and 2.1.

--Geoff

Geoffrey Young wrote: [EMAIL PROTECTED] wrote: pquerna 2004/07/10 00:47:23 Modified: . Tag: APACHE_2_0_BRANCH CHANGES STATUS modules/aaa Tag: APACHE_2_0_BRANCH mod_auth_digest.c Log: Backport of AuthDigestEnableQueryStringHack Needs a doc update to explain what it does.

something like the attached? corrections, tweaks, or other feedback welcome.

--Geoff

Index: mod_auth_digest.xml
===================================================================
RCS file: /home/cvs/httpd-2.0/docs/manual/mod/mod_auth_digest.xml,v
retrieving revision 1.5.2.8
diff -u -r1.5.2.8 mod_auth_digest.xml
--- mod_auth_digest.xml 17 Apr 2004 18:43:37 -0000 1.5.2.8
+++ mod_auth_digest.xml 12 Jul 2004 14:16:11 -0000
@@ -72,7 +72,9 @@
     browsers. As of November 2002, the major browsers that support digest
     authentication are <a href="http://www.opera.com/">Opera</a>, <a
     href="http://www.microsoft.com/windows/ie/">MS Internet
-    Explorer</a> (fails when used with a query string), <a
+    Explorer</a> (fails when used with a query string - see the
+    <directive module="mod_auth_digest">AuthDigestEnableQueryStringHack
+    </directive> option below for a workaround), <a
     href="http://www.w3.org/Amaya/">Amaya</a>, <a
     href="http://www.mozilla.org">Mozilla</a> and <a
     href="http://channels.netscape.com/ns/browsers/download.jsp">
@@ -81,6 +83,36 @@
     in controlled environments.</p>
     </note>
 </section>
+
+<section id="msie"><title>Working with MS Internet Explorer</title>
+<p>The Digest authentication implementation in current Internet
+Explorer implementations has known issues, namely that <code>GET</code>
+requests with a query string are not RFC compliant. There are a
+few ways to work around this issue.</p>
+
+<p>
+The first way is to use <code>POST</code> requests instead of
+<code>GET</code> requests to pass data to your program. This method
+is the simplest approach if your application can work with this
+limitation.
+</p>
+
+<p>Apache also provides a workaround in the
+<code>AuthDigestEnableQueryStringHack</code> environment variable.
+If <code>AuthDigestEnableQueryStringHack</code> is true for the
+request, Apache will take steps to work around the MSIE bug and
+remove the request URI from the digest comparison. Using this
+method would look similar to the following.</p>
+
+<example><title>Using Digest Authentication with MSIE:</title>
+BrowserMatch "MSIE" AuthDigestEnableQueryStringHack=On
+</example>
+
+<p>See the <directive module="mod_setenvif">BrowserMatch</directive>
+directive for more details on conditionally setting environment
+variables.</p>
+</section>
+
 <directivesynopsis>
 <name>AuthDigestFile</name>
Re: mod_cache performance
Mathihalli, Madhusudan wrote: BTW, I wonder how mem_cache can significantly out-perform the no-cache scenario - 'cause a good file system should buffer cache the most-accessed files, and there should be minimal perf. difference.

It depends almost entirely on the expense of the file open. On Windows, opening a file for i/o is hideously expensive, so a simple memory cache works well on Windows. Caching open file descriptors (and using TransmitFile) works even better.

Bill
mod_dir and mod_cache
I think I missed the answer to this: Has the feature that prevents mod_cache from caching urls ending in / (as related to mod_dir) been fixed? If so, will this make it into 2.0? -- Brian Akins Senior Systems Engineer CNN Internet Technologies
Fwd: [PROOF-OF-CONCEPT?] logging memory used by an allocator
-- Forwarded message -- From: Jeff Trawick [EMAIL PROTECTED] Date: Tue, 3 Aug 2004 14:45:16 -0400 Subject: Re: [PROOF-OF-CONCEPT?] logging memory used by an allocator To: Sander Striker [EMAIL PROTECTED] On Sun, 1 Aug 2004 19:46:14 +0200, Sander Striker [EMAIL PROTECTED] wrote: From: Jeff Trawick [mailto:[EMAIL PROTECTED] Sent: Wednesday, July 28, 2004 1:33 PM To: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: [PROOF-OF-CONCEPT?] logging memory used by an allocator A couple of questions come up from an application perspective: am I leaking memory? if so, on what operation? how much memory does it take to perform a certain operation? If the application can find out how much heap memory is presently owned by a certain allocator, it can be easier to address such questions. Wouldn't you want to know the memory currently being held by the allocator as well as all the memory the allocator dished out to it's pools? I'm sure that could be useful to some folks; I can't think of a use at the present with Apache unless some logging of the allocator shows that some request uses a bunch of memory, and we know that there are various pools in use on that request so we add some temporary debug code to add pool granularity to the measurement The attached apr.patch adds apr_allocator_memsize_get() to find the amount of heap memory presently owned by the allocator. (debug pool flavor not implemented; what is implemented isn't tested much ;) ) Given that memory management is on the critical path we need to be careful what we add. But this patch seems pretty harmless in that respect. but doesn't memsize_get suck as a name? any better ideas? The attached httpd.patch adds %Z log format to mod_log_config to log the memory size of the allocator used by the request pool. (I would lean towards implementing this feature in a debug module instead of in mod_log_config.) I assume you implemented this because of an itch? 
:) The biggest itch was, in the context of a suspected storage leak, wanting to draw a line between Apache memory use and some potential third-party module or library issue; without some such measurements it is hard to even understand the normal Apache storage growth in a threaded server as more and more threads in the process eventually handle expensive requests.

More specifically, it is:

- good for a *rough* idea of memory requirements; I haven't a clue how much heap memory it takes to process some request: 10K? 100K?
- good for determining when it is useful to use MaxMemFree, by identifying situations where an infrequent request takes a lot more pool memory than normal to process; in such a case it is practical to set MaxMemFree to approximately the normal amount of memory required, since malloc()/free() overhead won't be a killer.
Re: mod_dir and mod_cache
Brian Akins wrote:
> I think I missed the answer to this: has the issue that prevents
> mod_cache from caching URLs ending in / (as related to mod_dir) been
> fixed? If so, will this make it into 2.0?

Yes, it has been fixed. I volunteer to help with the backport; I just need to get the votes to backport each patch.

Bill
Re: cvs commit: httpd-2.0/modules/aaa mod_auth_digest.c
On Tue, 2004-08-03 at 15:22 -0400, Geoffrey Young wrote:
> hmm, I guess this fell off the collective radar. any comments?
> otherwise, I guess it's good enough and I'll just commit it to both 2.0
> and 2.1.

Looks good to me.

-Paul Querna

Geoffrey Young wrote:
> [EMAIL PROTECTED] wrote:
> > pquerna 2004/07/10 00:47:23
> > Modified: . Tag: APACHE_2_0_BRANCH CHANGES STATUS
> >           modules/aaa Tag: APACHE_2_0_BRANCH mod_auth_digest.c
> > Log: Backport of AuthDigestEnableQueryStringHack
> > Needs a doc update to explain what it does.
>
> something like the attached? corrections, tweaks, or other feedback
> welcome.
>
> --Geoff

Index: mod_auth_digest.xml
===================================================================
RCS file: /home/cvs/httpd-2.0/docs/manual/mod/mod_auth_digest.xml,v
retrieving revision 1.5.2.8
diff -u -r1.5.2.8 mod_auth_digest.xml
--- mod_auth_digest.xml	17 Apr 2004 18:43:37 -0000	1.5.2.8
+++ mod_auth_digest.xml	12 Jul 2004 14:16:11 -0000
@@ -72,7 +72,9 @@
     browsers. As of November 2002, the major browsers that support digest
     authentication are <a href="http://www.opera.com/">Opera</a>, <a
     href="http://www.microsoft.com/windows/ie/">MS Internet
-    Explorer</a> (fails when used with a query string), <a
+    Explorer</a> (fails when used with a query string - see the
+    <directive module="mod_auth_digest">AuthDigestEnableQueryStringHack
+    </directive> option below for a workaround), <a
     href="http://www.w3.org/Amaya/">Amaya</a>, <a
     href="http://www.mozilla.org">Mozilla</a> and <a
     href="http://channels.netscape.com/ns/browsers/download.jsp">
@@ -81,6 +83,36 @@
     in controlled environments.</p>
     </note>
 </section>
+
+<section id="msie"><title>Working with MS Internet Explorer</title>
+<p>The Digest authentication implementation in current Internet
+Explorer implementations has known issues, namely that <code>GET</code>
+requests with a query string are not RFC compliant. There are a
+few ways to work around this issue.</p>
+
+<p>The first way is to use <code>POST</code> requests instead of
+<code>GET</code> requests to pass data to your program. This method
+is the simplest approach if your application can work with this
+limitation.</p>
+
+<p>Apache also provides a workaround in the
+<code>AuthDigestEnableQueryStringHack</code> environment variable.
+If <code>AuthDigestEnableQueryStringHack</code> is true for the
+request, Apache will take steps to work around the MSIE bug and
+remove the request URI from the digest comparison. Using this
+method would look similar to the following.</p>
+
+<example><title>Using Digest Authentication with MSIE:</title>
+BrowserMatch MSIE AuthDigestEnableQueryStringHack=On
+</example>
+
+<p>See the <directive module="mod_setenvif">BrowserMatch</directive>
+directive for more details on conditionally setting environment
+variables.</p>
+</section>
+
 <directivesynopsis>
 <name>AuthDigestFile</name>
Re: [PATCH] mod_cache fixes: #6
Justin Erenkrantz wrote:
> --On Monday, August 2, 2004 11:44 AM -0400 Bill Stoddard
> [EMAIL PROTECTED] wrote:
> > These are debug messages, so I am not sure why they are a problem. +0
>
> The logging code is expensive to call for every request like that, as
> many times as it does. IMHO, there's no benefit to such a verbose log.
> More judicious use of logging would be fine, but what's there now is
> inappropriate. -- justin

In mod_proxy they use a macro called 'DEBUGGING' and #ifdef all the really verbose log messages with it. Maybe the same approach could be applied here?
Re: mod_cache performance
Brian Akins wrote:
> Justin Erenkrantz wrote:
> > That brings it in line with mod_disk_cache in maxing out my network.
> > Time to craft some better tests or find a faster network... -- justin
>
> I can probably help with the latter :) Can you send me details of your
> setup and I'll try to test later this week.

We have some boxes with a GigE network as well (set up to use flood with 10 PCs generating the load). Also, we might have one or two AMD64 boxes I could persuade the higher-ups to use.
Re: mod_dir and mod_cache
Bill Stoddard wrote:
> Brian Akins wrote:
> > I think I missed the answer to this: has the issue that prevents
> > mod_cache from caching URLs ending in / (as related to mod_dir) been
> > fixed? If so, will this make it into 2.0?
>
> yes it has been fixed. I volunteer to help with the backport. Just need
> to get the votes to backport for each patch.
>
> Bill

mod_cache, mod_mem_cache and mod_disk_cache are experimental modules in 2.0, so I am going to bypass the votes and just start backporting fixes. Please review as they go in. If something breaks, we'll fix it. Mmmm K?

Bill
Re: mod_dir and mod_cache
On Tue, 2004-08-03 at 22:29 -0400, Bill Stoddard wrote:
> mod_cache, mod_mem_cache and mod_disk_cache are experimental modules in
> 2.0, so I am going to bypass the votes and just start backporting fixes.
> Please review as they go in. If something breaks, we'll fix it. Mmmm K?

Whoa. I thought the 2.0 branch was *always* review first. Backporting these changes does not seem right without proper votes. The LDAP modules are also 'experimental' modules, and they have been using votes for all of their backports.

-Paul Querna