I'd been meaning to respond to this, but forgot until I saw the blurb in
ApacheWeek :-)

> David Burry wrote:
>
> >> Random thoughts:
> >> - Did the content have short expiration times (or recent change dates
> >> which would result in the cache making aggressive expiration estimates)?
> >> That could churn the cache.
> >
> > No.  Files literally never change; when updates appear they are always new
> > files, web pages just point to new ones each update.  In this application
> > these are all executable downloadable files, think FTP repository over HTTP.
>
> For large files, I'd anticipate that mod_cache wouldn't provide much benefit
> at all.  If you characterize the cost of delivering a file as
>
>    time_to_stat_and_open_and_close + time_to_transfer_from_memory_to_network
>
> mod_mem_cache can help reduce the first term but not the second.  For small
> files, the first term is significant, so it makes sense to try to optimize
> away the stat/open/close with an in-httpd cache.  But for large files, where
> the second term is much larger than the first, mod_mem_cache doesn't
> necessarily
> have an advantage.

The read can be expensive over NFS. Yes, one would hope the file system cache
would cover this. And perhaps it does in most cases. Generally I agree with the
analysis. The big expenses are in the stat/open/close.

> And it has at least three disadvantages that I can
> think of:
>   1. With mod_mem_cache, you can't use sendfile(2) to send the content.
>      If your kernel does zero-copy on sendfile but not on writev, it
>      could be faster to deliver a file instead of a cached copy.

mod_mem_cache can cache open fds (CacheEnable fd /). This works really nicely
on Windows. I have not seen much benefit testing on AIX, and I don't know
whether there are other performance implications on *ix from maintaining a
large number of open fds.
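For reference, the fd-caching mode looks roughly like this in httpd.conf (the sizing directives shown are mod_mem_cache's; the values are placeholders to adjust for your setup):

```apache
LoadModule cache_module modules/mod_cache.so
LoadModule mem_cache_module modules/mod_mem_cache.so

# Cache open file descriptors instead of file contents
CacheEnable fd /

# mod_mem_cache sizing (tune for the working set)
MCacheSize 4096
MCacheMaxObjectCount 1000
```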

>   2. And as long as mod_mem_cache maintains a separate cache per worker
>      process, it will use memory less efficiently than the filesystem
>      cache.

Yep. Not a big deal if you are caching open fds though.

>   3. On a cache miss, mod_mem_cache needs to read the file in order to
>      cache it.  By default, it uses mmap/munmap to do this.  We've seen
>      mutex contention problems in munmap on high-volume Solaris servers.

This is a result of mod_mem_cache using the bucket code (apr_buckets_file). I
think we could extract the fd from the bucket and do a read rather than an
mmap. Should I work on a fix for this?

Bill
