Re: mod_deflate update
On Sun, 2004-04-18 at 15:22, Nick Kew wrote:
> Also a question: When I create a bucket brigade in a module, I always
> explicitly apr_brigade_destroy() it. None of the filters in mod_deflate
> destroy their brigades. A look at apr_brigade.c shows that it's not in
> fact necessary, but maybe a note to that effect would be in order?

Isn't it dangerous to apr_brigade_destroy()? As I understand it, apr_brigade_destroy() frees the buckets in the brigade AND also frees the brigade structure itself. I used to do that in my output filter and I managed to crash Apache right away. The reason was that mod_proxy_http keeps reusing the brigade and does not take it kindly if someone goes and destroys it.

Cliff Woolley once said that you must either pass a bucket to the next filter or destroy it, but I have not heard from any authoritative source what should be done with brigades. I just know that for my particular case apr_brigade_destroy() is a bad thing. On the other hand, an output filter should not have to know whether it is processing content originating from mod_proxy or mod_perl or any other handler, right? The interface should always be the same.

-- Sami
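For what it's worth, a minimal sketch of the pattern under discussion: a filter that keeps a private brigade in f->ctx and empties it with apr_brigade_cleanup() between calls, instead of destroying it. This is illustrative only (the actual transformation and error handling are elided), not code from mod_deflate:

```c
/* Sketch: an output filter that reuses a private brigade across
 * invocations.  apr_brigade_cleanup() empties the brigade but leaves
 * the structure alive; the request pool reclaims it at request end. */
#include "httpd.h"
#include "apr_buckets.h"
#include "util_filter.h"

static apr_status_t my_output_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    apr_bucket_brigade *ctx_bb = f->ctx;
    apr_status_t rv;

    if (ctx_bb == NULL) {
        /* Allocate the private brigade once per request. */
        ctx_bb = apr_brigade_create(f->r->pool, f->c->bucket_alloc);
        f->ctx = ctx_bb;
    }

    /* ... move/transform buckets from bb into ctx_bb ... */

    rv = ap_pass_brigade(f->next, ctx_bb);

    /* Empty the brigade, but keep the structure for the next call;
     * no apr_brigade_destroy() needed. */
    apr_brigade_cleanup(ctx_bb);
    return rv;
}
```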
Re: Win32DisableAcceptEx
On Fri, 2004-04-16 at 23:04, Sami Tikka wrote:
> Of course, the easy way out is to just increase the number of
> threads/processes, but then the question is how many threads/processes
> are enough to handle all HTTP CONNECTs and still have plenty to spare
> to handle plain HTTP traffic. I think the dedicated handler for HTTP
> CONNECTs would make more sense. Or would it be a really bad idea?

It just occurred to me that perhaps the easiest workaround would be to make the number of threads variable in mpm_winnt. The other MPMs seem to be able to create and destroy threads or processes as they go along, but the Windows MPM uses a fixed number of threads. Why is that? Is there some architectural limitation behind the design?

-- Sami
Re: Win32DisableAcceptEx
It seems I have tracked down the problem plaguing my client, and it has absolutely nothing to do with AcceptEx(). AcceptEx() is reporting errors because the previous proxy is aborting idle connections that Apache has not replied to in 150 seconds. That is what causes the "the specified network name is no longer available" errors.

The real problem is why Apache does not handle those connections, and the answer is that Apache is out of free threads. All of its threads are busy handling HTTP CONNECT requests (Apache was running as a proxy, remember). SSL connections appear to be extremely long-lived; I have seen them last as long as 300 seconds. Also, mod_proxy_connect does not have any timeout code in it: the tunnel stays open until someone else closes it. (This may be how it is supposed to work; I'm not that familiar with SSL and HTTP CONNECT.)

However, it seems to me that dedicating a thread or a process to running a tight while-loop that copies bytes back and forth between two sockets is overkill. Would it be possible to have a dedicated thread/process for that, so that mod_proxy_connect would not handle the request itself but would (perhaps after creating the backend connection) only pass the two sockets to the dedicated thread/process?

Of course, the easy way out is to just increase the number of threads/processes, but then the question is how many threads/processes are enough to handle all HTTP CONNECTs and still have plenty to spare to handle plain HTTP traffic. I think the dedicated handler for HTTP CONNECTs would make more sense. Or would it be a really bad idea?

-- 
Sami Tikka                    tel: +358 9 2520 5115
Senior Software Engineer      fax: +358 9 2520 5013
F-Secure Corporation          http://www.f-secure.com/
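To make concrete what "a tight while-loop copying bytes back and forth between 2 sockets" means, here is a hypothetical plain-POSIX sketch of such a tunnel loop. This is NOT the mod_proxy_connect source; socket setup, partial-write handling, and timeouts are deliberately elided:

```c
/* Hypothetical sketch of a CONNECT-style tunnel: relay bytes between two
 * already-connected sockets until either side closes. */
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

static void tunnel(int client, int backend)
{
    char buf[8192];

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(client, &rfds);
        FD_SET(backend, &rfds);

        int maxfd = (client > backend ? client : backend) + 1;
        if (select(maxfd, &rfds, NULL, NULL, NULL) <= 0)
            break;

        /* Copy from whichever side has data to the other side. */
        int from = FD_ISSET(client, &rfds) ? client : backend;
        int to   = (from == client) ? backend : client;

        ssize_t n = read(from, buf, sizeof(buf));
        if (n <= 0)
            break;                 /* EOF or error: tear the tunnel down */
        if (write(to, buf, n) != n)
            break;
    }
    close(client);
    close(backend);
}
```

A dedicated thread or event loop could multiplex many such socket pairs with one select()/poll() call, which is the efficiency argument for taking the tunnel out of the per-request worker.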
Re: Thread terminations?
On Thu, Dec 11, 2003 at 11:25:08AM +0200, Nasko wrote:
> 2) How can I be notified when a thread is destroyed so I can close the
> DB connection

I think the correct way to do this would be to start a database thread in child_init and register a shutdown callback with the pool that is passed as a parameter to child_init. This pool is a child pool and there is one in every worker, be they processes or threads. child_init is a hook which is called right after a worker is created. When the pool is destroyed (because the worker is shut down), the callbacks registered with the pool are called. See ap_hook_child_init() and apr_pool_cleanup_register().

(Hmm... what is the correct terminology? Should I talk about children? But that sort of implies that they are processes, whereas on Windows they are threads. I've used the word "worker", but perhaps someone could confuse that with the child processes of the worker MPM.)

-- Sami Tikka
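A minimal sketch of the approach described above: a child_init hook that opens a per-worker resource and registers a cleanup on the child pool, so the resource is released when the worker shuts down. Note that db_open()/db_close() are hypothetical stand-ins for whatever database client you use:

```c
/* Sketch: per-worker resource tied to the child pool's lifetime. */
#include "httpd.h"
#include "http_config.h"
#include "apr_pools.h"

static void *db_handle;               /* hypothetical per-worker DB handle */

static apr_status_t db_cleanup(void *data)
{
    /* db_close(data); */             /* hypothetical: release the handle */
    return APR_SUCCESS;
}

static void my_child_init(apr_pool_t *pchild, server_rec *s)
{
    /* db_handle = db_open(...); */   /* hypothetical: open the handle */

    /* Called automatically when pchild is destroyed, i.e. when this
     * worker (process or thread, depending on the MPM) shuts down. */
    apr_pool_cleanup_register(pchild, db_handle, db_cleanup,
                              apr_pool_cleanup_null);
}

static void my_register_hooks(apr_pool_t *p)
{
    ap_hook_child_init(my_child_init, NULL, NULL, APR_HOOK_MIDDLE);
}
```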
Re: what response code filters should return when they see c-aborted?
On Thu, Dec 11, 2003 at 10:38:06AM -0800, Stas Bekman wrote:
> What's confusing is that it seems that most consumers (not filters)
> (e.g. in protocol.c) that call ap_pass_brigade are completely ignoring
> its response code.

It seems to me that many output filters are also ignoring the ap_pass_brigade() return code. Also, the few sample filters I've seen do that all the time.

Lately I have been hunting a memory leak in my output filter and was wondering whether it is safe to return ap_pass_brigade(f->next, bb) if my filter is holding on to some buckets in a private brigade hanging off f->ctx. Should a filter always check the ap_pass_brigade() return code and, when it sees an error, clean up its own mess before returning? Or does it not matter? If the brigade was created with apr_brigade_create(f->r->pool, f->c->bucket_alloc), does that mean the brigade is cleaned up when the request pool is destroyed, and that all the buckets in the brigade are returned to the allocator?

-- Sami Tikka
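For concreteness, this is the shape of filter I mean: a sketch (not code from any shipped module) that checks the ap_pass_brigade() return code and empties its private brigade on the error path instead of returning the status blindly:

```c
/* Sketch: check ap_pass_brigade()'s return code and clean up held
 * buckets on error, rather than leaving them parked in f->ctx. */
#include "httpd.h"
#include "apr_buckets.h"
#include "util_filter.h"

static apr_status_t my_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    apr_bucket_brigade *held = f->ctx;  /* private brigade, if any */
    apr_status_t rv;

    rv = ap_pass_brigade(f->next, bb);
    if (rv != APR_SUCCESS) {
        /* Downstream failed: release the buckets we were holding so
         * they do not linger until the request pool is destroyed. */
        if (held != NULL)
            apr_brigade_cleanup(held);
        return rv;
    }
    return APR_SUCCESS;
}
```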
Re: consider reopening 1.3
[EMAIL PROTECTED] wrote:
> I popped off and looked at 2.0 code again just now and I can tell you
> right now it's (still) the filtering that's killing it.

I am a novice (I have written two modules for Apache 1.3 and one for 2.0), but I have examined the Apache 2.0 code quite a lot during the last year, and I believe this is what happens in 2.0 with static pages:

1. Some core handler creates a bucket brigade, creates a file bucket referring to the static file, and inserts the bucket into the brigade. (Well, it also appends an EOS bucket.) The handler passes the brigade to the first output filter.

2. In the basic setup the first filter is probably the CONTENT_LENGTH filter, which asks the file bucket how long it is and sets the Content-Length header in the r->headers_out table. The brigade is passed to the next filter...

3. ...which is probably the HTTP_HEADER filter, which reads r->headers_out and generates the response header. This probably means allocating a couple of memory buckets for it. The buckets are prepended to the brigade and it is passed on to the next filter...

4. ...which will be the CORE output filter, which pumps out the buckets containing the header using writev() and does a single sendfile() call to transmit the file bucket.

As I see it, this is as efficient as it can be. Nothing is copied needlessly. The buckets containing the header might be allocated from a heap (= allocator), even though stack would be more efficient. (Oh, they might come from the stack; I'm too tired to see what goes on under the hood.) I never had to learn the BUFF API in 1.3, but it is hard to imagine it being more efficient than this. This design is, I think, similar to the mbufs in the *BSD kernel and the skbuffs in the Linux kernel, and it seems to work well for them.

About re-opening the 1.3 tree: I'm not sure I understand what the big deal is. This is open source. You want to work on 1.3? Go do it. Your patches are not getting into the ASF repository? Create your own. There are other open source projects that have started ignoring patches, and it has caused a competing code fork to emerge. I don't see that as necessarily an evil thing. It is called evolution. (I guess if you start your own repository you can no longer call it Apache, but any other name of an American Indian tribe should be OK. :)

-- Sami Tikka
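Step 1 of the static-file path described above can be sketched like this (simplified; a real handler would also stat the file and handle errors, and send_static_file is my own name for the illustration):

```c
/* Sketch: a handler sends a static file by putting a file bucket and an
 * EOS bucket into a brigade and handing it to the output filter chain. */
#include "httpd.h"
#include "http_protocol.h"
#include "apr_buckets.h"

static int send_static_file(request_rec *r, apr_file_t *fd, apr_off_t len)
{
    apr_bucket_brigade *bb =
        apr_brigade_create(r->pool, r->connection->bucket_alloc);

    /* A file bucket only refers to the file; no data is copied here.
     * The core output filter can later turn it into a sendfile() call. */
    apr_bucket *b = apr_bucket_file_create(fd, 0, len, r->pool,
                                           r->connection->bucket_alloc);
    APR_BRIGADE_INSERT_TAIL(bb, b);

    /* Append EOS so downstream filters know the response is complete. */
    APR_BRIGADE_INSERT_TAIL(bb, apr_bucket_eos_create(
                                    r->connection->bucket_alloc));

    return ap_pass_brigade(r->output_filters, bb);
}
```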
C-L of proxy output (Re: [PATCH] Fix proxy's handling of input bodies)
Justin Erenkrantz wrote:
> Currently, mod_proxy falls down if a filter is in the input chain that
> changes the content of the original request. It will send the original
> Content-Length not the size of the data it actually sends. If the
> request was originally chunked, but the data it actually sends isn't
> chunked (it sends no C-L header in this case). Oops.

I have been wondering about a similar thing on the output side of the proxy. In proxy_http.c, around line 899, the Content-Length and Transfer-Encoding headers are deleted. Later the CONTENT_LENGTH output filter attempts to compute the content length the first time it is called, but it fails if the content is not all there in the current bucket brigade. Most of the time the proxy is streaming data in small brigades containing just one bucket, which means the CONTENT_LENGTH filter is unable to compute the content length, which means the Content-Length header will be missing and the data has to be terminated by closing the connection.

Is it really necessary to remove the Content-Length header? Why can't it be left in place? If a filter is going to change the length of the data, that filter could adjust the Content-Length header accordingly, or delete it if it cannot compute the new length before the headers are transmitted.

I can create a fix, but it would be nice to know whether the proxy was _designed_ to work this way and whether leaving the C-L header intact would create more problems elsewhere. Thanks for any insight you can provide.

-- Sami Tikka
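The convention I am proposing would look something like this in a content-changing output filter. This is a sketch of the idea, not existing httpd code; new_len and knows_final_length are hypothetical values produced by the filter's own transformation:

```c
/* Sketch: a content-changing filter either rewrites Content-Length
 * itself or removes it when the new length cannot be known before the
 * headers go out. */
#include "httpd.h"
#include "apr_strings.h"
#include "util_filter.h"

static void fixup_content_length(ap_filter_t *f, apr_off_t new_len,
                                 int knows_final_length)
{
    if (knows_final_length) {
        /* We know the final size: rewrite the header in place. */
        apr_table_setn(f->r->headers_out, "Content-Length",
                       apr_off_t_toa(f->r->pool, new_len));
    }
    else {
        /* Length unknown before headers are sent: drop the header and
         * let connection close (or chunking) delimit the body. */
        apr_table_unset(f->r->headers_out, "Content-Length");
    }
}
```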