DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
<http://issues.apache.org/bugzilla/show_bug.cgi?id=39380>.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND
INSERTED IN THE BUG DATABASE.

http://issues.apache.org/bugzilla/show_bug.cgi?id=39380

           Summary: mod_disk_cache eats memory, has no LFS support, etc
           Product: Apache httpd-2
           Version: 2.2.0
          Platform: All
        OS/Version: All
            Status: NEW
          Severity: normal
          Priority: P2
         Component: mod_disk_cache
        AssignedTo: [email protected]
        ReportedBy: [EMAIL PROTECTED]


The attached patch addresses the following issues:

* Implement Large File Support (LFS) in mod_disk_cache.
* Check whether we are allowed to cache the file before starting to cache it;
  caching bits of a huge file only to toss them afterwards makes little sense.
* When caching a file, copy it using the file descriptor in the brigade instead
  of using apr_bucket_read, which forces the whole file into memory. This
  produced a segfault when trying to cache a file larger than the available
  amount of memory.
* Once a file has been cached, replace the brigade referring to the source file
  with our cached copy. This makes a huge difference when the file is larger
  than your memory and thus not in the page cache, provided that your cache
  filesystem is faster than your backend (a natural assumption; why cache
  otherwise?).
* When caching a file, keep the cache file even if the connection was aborted.
  There is no reason to toss it, and the penalty for doing so when caching
  DVD images is really huge.
* When multiple downloads of an uncached file are initiated, only allow one of
  them to cache the file and let the others wait for the result. It's not a
  theoretically perfect solution, but in practice it seems to work well.
* Consistently use the disk_cache: prefix in error log strings.
* In mod_cache, restore r->filename so that %f in LogFormat strings works. This
  really should be solved by saving r->filename with the headers and restoring
  it in mod_disk_cache et al., but this at least provides something.

This allows us (http://ftp.acc.umu.se/) to use mod_disk_cache to cache DVD
images on a 32-bit machine with "only" 3GB of memory, with everything behaving
sanely and our xferlog-style LogFormat emulation working.
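To show why the r->filename restoration matters for logging: %f in a
LogFormat expands to r->filename. A configuration along these lines (the
field choice here is illustrative, not our exact xferlog emulation) only
logs a useful filename if mod_cache puts it back after a cache hit:

```apache
# %f expands to r->filename; without the restoration in mod_cache
# it is empty (or wrong) for responses served from the disk cache.
LogFormat "%t %h %b %f" xferlog_demo
CustomLog logs/xfer_log xferlog_demo
```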

An annoying issue remains: when caching a file, mod_disk_cache copies the
entire file before any data is sent to the client. This gets really annoying
if you have large files (say, 4.3GB DVD images) and a slow backend. If there
is work in progress to solve this, please point us in that direction so we can
help get it finished.

I'm not on any Apache mailing lists, so please CC me any feedback on this.

-- 
Configure bugmail: http://issues.apache.org/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
