The test (2,000 requests at a concurrency of 10, fetching a 500-byte
static file):
ab -n 2000 -c 10 http://mymachine/500bytefile.html

The results:
1. Apache 2.0 out of the box:
~250 cps

2. then compile with WIN32_SHORT_FILENAME_INSECURE_BEHAVIOR defined:
~500 cps

If this check stays in (over my dead body :-), it will be the bottleneck
that no other performance tweak can overcome. The problem is magnified if
the file being fetched is multiple directories deep (a stat is performed
on each directory). If this check stays in, the best we can do is around
250 cps on my benchmark machine. We need to find another way to secure
short filenames (or disallow them entirely).
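
For anyone who hasn't looked at that code: the check walks the path and,
for each component, asks the filesystem for the canonical long name so an
8.3 alias can't be used to slip past access controls. Here is a minimal
sketch of the single-component test (my illustration, not the server
code; the function name is made up):

    #include <windows.h>
    #include <string.h>

    /* Reject a path component that reaches the file only via its 8.3
     * alias.  info.cFileName is the canonical long name;
     * info.cAlternateFileName is the short (8.3) form.  The
     * FindFirstFile call is a filesystem hit, and the real check has
     * to do one per directory level, which is where the cps goes. */
    static int component_is_short_alias(const char *path,
                                        const char *component)
    {
        WIN32_FIND_DATA info;
        HANDLE h = FindFirstFile(path, &info);

        if (h == INVALID_HANDLE_VALUE)
            return 0;   /* nonexistent; other checks will catch it */
        FindClose(h);

        return _stricmp(component, info.cFileName) != 0
            && _stricmp(component, info.cAlternateFileName) == 0;
    }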

3. then replace *alloc/free in bucket code with apr_*alloc/apr_free:
~615 cps.

Note that apr_malloc et al. use a lot of locking. Using more specialized
routines should improve the results. Anyone want to step up to creating a
good set of memory allocation routines that allow putting elements back
on a free list? What I have is proof of concept and is probably too
generic.
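
To make the free-list idea concrete, here is the minimal shape of it,
assuming one list per thread (or per connection) so no locking is needed;
that is exactly the specialization a general-purpose allocator can't
make. Names and layout are mine, purely illustrative:

    #include <stdlib.h>

    /* One list per element size.  pop() recycles a returned element
     * when one is available; push() puts an element back on the list
     * instead of freeing it. */
    typedef struct free_elem {
        struct free_elem *next;
    } free_elem;

    typedef struct {
        free_elem *head;
        size_t elem_size;   /* must be >= sizeof(free_elem) */
    } free_list;

    static void *free_list_pop(free_list *fl)
    {
        if (fl->head) {
            free_elem *e = fl->head;
            fl->head = e->next;
            return e;
        }
        return malloc(fl->elem_size);   /* miss: fall back to malloc */
    }

    static void free_list_push(free_list *fl, void *p)
    {
        free_elem *e = p;
        e->next = fl->head;
        fl->head = e;
    }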

4. then cache the open file handle with mod_file_cache:
~1123 cps
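
(For reference, that just means naming the file in httpd.conf so
mod_file_cache opens it once at startup; the path below is only an
illustration of where the test file might live:)

    CacheFile C:/Apache2/htdocs/500bytefile.html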

5. then cache the open file handle in my experimental quick handler cache
(same concept as Mike Abbott's quick shortcut cache):
~1300 cps
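
The quick handler cache isn't committed anywhere yet, but the general
shape is simple: hook quick_handler, look the URI up in a table of
handles opened ahead of time, and on a hit send a file bucket straight
down the output filters, skipping the directory walk and stats entirely.
A rough sketch of that shape, written against the current hook and
bucket signatures (lookup_cached_file() is a hypothetical stand-in for
the cache lookup; this is not the actual patch):

    #include "httpd.h"
    #include "http_config.h"
    #include "http_protocol.h"
    #include "util_filter.h"
    #include "apr_buckets.h"

    /* Hypothetical lookup into a table of handles opened at startup. */
    apr_file_t *lookup_cached_file(const char *uri, apr_off_t *len);

    static int quick_file_handler(request_rec *r, int lookup)
    {
        apr_off_t len;
        apr_file_t *fd;
        apr_bucket_brigade *bb;
        apr_bucket *b;

        if (lookup || r->method_number != M_GET)
            return DECLINED;
        if ((fd = lookup_cached_file(r->uri, &len)) == NULL)
            return DECLINED;    /* miss: normal request processing */

        ap_set_content_type(r, "text/html");
        bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);
        b = apr_bucket_file_create(fd, 0, (apr_size_t)len, r->pool,
                                   r->connection->bucket_alloc);
        APR_BRIGADE_INSERT_TAIL(bb, b);
        b = apr_bucket_eos_create(r->connection->bucket_alloc);
        APR_BRIGADE_INSERT_TAIL(bb, b);
        return ap_pass_brigade(r->output_filters, bb) == APR_SUCCESS
               ? OK : HTTP_INTERNAL_SERVER_ERROR;
    }

    static void register_hooks(apr_pool_t *p)
    {
        ap_hook_quick_handler(quick_file_handler, NULL, NULL,
                              APR_HOOK_FIRST);
    }

    module AP_MODULE_DECLARE_DATA quick_cache_module = {
        STANDARD20_MODULE_STUFF,
        NULL, NULL, NULL, NULL,
        NULL,               /* command table */
        register_hooks
    };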

There are still a lot of cycles lying around to pick up, like the call to
qsort in apr_table_overlap that occurs in get_mime_headers(), a jillion
strlen() calls to get the length of the same string multiple times, etc.
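
The strlen() case is the usual pattern of re-measuring a string that
hasn't changed; the fix is just to measure once and reuse the length:

    #include <string.h>

    /* Illustrative only: the slow form rescans the string on every
     * pass, turning an O(n) loop into O(n^2). */
    static int count_spaces_slow(const char *s)
    {
        int n = 0;
        size_t i;
        for (i = 0; i < strlen(s); i++)     /* strlen() each pass */
            if (s[i] == ' ')
                n++;
        return n;
    }

    static int count_spaces_fast(const char *s)
    {
        int n = 0;
        size_t i, len = strlen(s);          /* measured once */
        for (i = 0; i < len; i++)
            if (s[i] == ' ')
                n++;
        return n;
    }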

Bill
