On Sat, 30 Jun 2001, Bill Stoddard wrote:
> Your patch calls apr_bucket_file_create with the cached file in the
> pconf pool.
Right.
> If I do the apr_os_file_get()/apr_os_file_put() trick to
> put the fd into an apr_file_t allocated out of the request pool before
> calling apr_bucket_file_create(), everything works (with HTTP/1.0
> non-keep-alive requests). It is still broken for keep-alive requests,
> of course, which I know is the problem you are trying to fix...
Not surprising... that's basically taking us back to before my patch.
Good sanity check, though.
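
For anyone following along, the workaround Bill describes looks roughly
like this -- a sketch only, against the APR API as I understand it
(signatures vary between APR versions, and cached_file/r/len here are
placeholders for whatever the caching module actually has in hand):

```c
/* Sketch of the apr_os_file_get()/apr_os_file_put() trick:
 * pull the OS-level descriptor out of the apr_file_t that was
 * opened in pconf, rewrap it in a new apr_file_t allocated from
 * the request pool, and hand *that* to apr_bucket_file_create().
 * This dodges the pool-lifetime mismatch for the non-keepalive
 * case, but the fd still outlives the request pool's apr_file_t
 * cleanup semantics on keepalive connections.
 */
apr_os_file_t osfd;          /* the native fd */
apr_file_t *req_file;        /* request-pool wrapper */
apr_bucket *b;

apr_os_file_get(&osfd, cached_file);        /* cached_file lives in pconf */
apr_os_file_put(&req_file, &osfd, r->pool); /* rewrap in the request pool */
b = apr_bucket_file_create(req_file, 0, len, r->pool);
```

The keepalive breakage is exactly what you'd expect from this: the
bucket can survive the request pool, but the wrapper it points at
cannot.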
> The seg fault only happens when I am sending in multiple concurrent
> requests. ab -n 100 -c 1 server/cached_file.html works. ab -n 100 -c
> >1 server/cached_file.html seg faults every time. These are HTTP/1.0
> non-keep-alive requests and no additional content filters are being
> installed, so we should never attempt to read from the file (i.e., we
> should always use sendfile). So the problem is related to one of the
> following:
See, I'm just not seeing that behavior. I've always done my tests with
ab -n 10000 -c 100 cached_file.html
and gotten no segfaults. <scratching head>
I'll keep pounding on it, though, and look into your suggestions
some more...
--Cliff
--------------------------------------------------------------
Cliff Woolley
[EMAIL PROTECTED]
Charlottesville, VA