> On Mon, Apr 01, 2002 at 02:05:04PM -0800, Ryan Bloom wrote:
> > I have two conflicting thoughts, so I'll put them both out there for
> > discussion.
> >
> > 1) I agree (mostly) RESOURCE filters are really the only ones that make
> > sense to add multiple times. We should ensure that no other filters are
> > added more than once.
> >
> > 2) It is up to the filter to protect against this case. That can be
> > done by walking the filter chain to ensure that the same filter isn't in
> > the list already. Of course, walking the chain could be slow, depending
> > on how many filters there are.
>
> How could the filter itself protect against this case? By the
> time it is called, it is already too late - the chain is created.
> Or am I missing something?
>
> The only thing I can think of is that it looks at f/f->next
> to make sure that there are no other copies left in the chain that
> haven't been called. I think it would be better to just protect
> against that when we *add* filters rather than when we execute
> them.
>
> I will commit the strcmp check now. -- Justin
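
(For reference, I assume the strcmp check amounts to something like the
untested sketch below: walk the request's output chain before linking a
new filter in, and skip the add if a filter with the same name is already
there. The helper name is made up, and the real change presumably lives
in the ap_add_output_filter() plumbing rather than a separate function.)

    #include <string.h>          /* strcmp */
    #include "httpd.h"           /* request_rec */
    #include "util_filter.h"     /* ap_filter_t, ap_filter_rec_t */

    /* Hypothetical helper: returns 1 if a filter with this name is
     * already in r's output chain, so the caller can skip the add. */
    static int output_filter_present(request_rec *r, const char *name)
    {
        ap_filter_t *f;

        for (f = r->output_filters; f != NULL; f = f->next) {
            if (strcmp(f->frec->name, name) == 0) {
                return 1;    /* duplicate found */
            }
        }
        return 0;
    }
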
All the filter needs to do is leave a message for itself in the
request_rec. That can be done in either the per-request config vector or
the r->notes table. I wouldn't want to use r->notes, because that table
could get really large quickly. The reality, though, is that the filter
needs to be the one to check this: there are certainly some RESOURCE
filters that shouldn't be added more than once; for example, the
mod_header_footer filter should only be inserted once. There are some
other methods I can think of, but none of them are really clean. I kind
of like the idea of just putting a note in the per-request config vector.

Ryan
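
P.S. Here is roughly what I mean by leaving a note in the per-request
config vector -- an untested sketch with made-up names (my_module,
my_output_filter), just to show the shape of the check:

    #include "httpd.h"          /* request_rec */
    #include "http_config.h"    /* ap_get_module_config, ap_set_module_config */
    #include "util_filter.h"    /* ap_filter_t, ap_pass_brigade */

    /* Hypothetical module; the real one would be defined elsewhere. */
    extern module AP_MODULE_DECLARE_DATA my_module;

    static apr_status_t my_output_filter(ap_filter_t *f, apr_bucket_brigade *bb)
    {
        request_rec *r = f->r;
        static int seen;        /* any non-NULL address works as a marker */

        /* If we already left ourselves a note for this request, a second
         * copy of this filter is in the chain -- drop it and move on. */
        if (ap_get_module_config(r->request_config, &my_module) != NULL) {
            ap_remove_output_filter(f);
            return ap_pass_brigade(f->next, bb);
        }

        /* First time through: leave the note in the per-request config
         * vector so any later copy becomes a no-op. */
        ap_set_module_config(r->request_config, &my_module, &seen);

        /* ... do the filter's real work on bb here ... */

        return ap_pass_brigade(f->next, bb);
    }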
