On Wednesday, November 24, 2010 21:48:59 Mohit Anchlia wrote:
> If I use $r->sendfile($filename) and the file is always the same, then
> does mod_perl open and read this file on every execution, or is it
> cached? I am wondering if it would be better to copy the contents of
> the file (500 bytes) into the perl module instead.

sendfile() creates a file bucket (AFAIK). That means that if you don't have any 
filter that needs to read the bucket (for instance mod_ssl or mod_deflate), the 
copying should be done by the kernel alone.

On the other hand, caching in RAM is also fast. I'd benchmark both.

If you implement the RAM caching, don't forget to send a Content-Length 
header. Otherwise Apache will use chunked transfer encoding, which is much 
slower. I measured that a few years ago, but I don't think things have changed 
much since then.
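
A sketch of the RAM-cached variant, assuming a mod_perl 2 response handler 
and an arbitrary file path (adjust to your module):

  use Apache2::RequestRec ();
  use Apache2::RequestIO ();
  use Apache2::Const -compile => qw(OK);

  # Slurp the small file once at module load time and keep it in memory.
  my $content = do {
      local $/;
      open my $fh, '<:raw', '/path/to/static.dat' or die "open: $!";
      <$fh>;
  };

  sub handler {
      my $r = shift;

      $r->content_type('application/octet-stream');
      # Announce the body size so Apache does not fall back to
      # chunked transfer encoding.
      $r->set_content_length(length $content);
      $r->print($content);

      return Apache2::Const::OK;
  }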

> Is there an overhead, or a chance of running out of file handles, if we
> use sendfile? Or is there a better way of using sendfile that will
> ensure minimal overhead?

I have run out of file descriptors only once, when I was experimenting with 
the worker MPM and many threads. Raising ulimit to 10000 descriptors solved 
it. If you use the prefork MPM, I think the default limit on open files 
(1024 descriptors on my system) should be more than enough.

Torsten Förtsch

-- 
Need professional modperl support? Hire me! (http://foertsch.name)

Like fantasy? http://kabatinte.net
