On Tue, Jun 14, 2011 at 2:29 PM, Emmanuel Lécharny <[email protected]> wrote:
> On 6/13/11 11:19 PM, Stefan Seelmann wrote:
>>>> It's actually quite simple and quite fast. Using the objectclass index
>>>> it's trivial to obtain the list of all alias entries within the
>>>> database, so from the outset you already know the maximum size of what
>>>> you're dealing with.
>>>
>>> We already have a cache that is constructed at startup, gathering all the
>>> aliases from the backend, using the OC index. This cache is of course
>>> updated on the fly, if an alias is added or removed.
>>>
>>> I don't think it should take more than one day to fix this issue.
>>
>> In that case we can also get rid of all the alias indices (aliasIdx,
>> oneAliasIdx, subAliasIdx).
>
> Yes, absolutely.
>
> There are a few steps we also have to fulfill:
> - create an Alias cache (I thought we had one, but in fact we have the
> opposite: a notAliasCache in the ExceptionInterceptor)
I wonder why we need an alias cache? For fast lookup of the search base in
case the "find" bit is set?

> - create an AliasInterceptor to manage the Add and Delete operations done
> on alias entries (and also move, rename and combined ops)

You mean that interceptor is used to update the cache, right?

> - modify the Search to handle a set of met aliases.

Yep.

> I'll proceed by creating the alias interceptor first, and I'll remove the
> part that handles aliases in the ExceptionInterceptor.
>
> The Alias index removal will be done at the end.

Ok.

Two other issues I see with the new algorithm:

- It is efficient if there are only a few aliases. But if a user adds
millions of alias entries we may get a memory problem. I just want to
mention that to make clear that such an issue may occur. I don't think it
makes sense to create so many alias entries, but I saw an example where
group membership was implemented using aliases...

- It is possible that duplicates occur, for example if an alias enlarges
the initial search scope by pointing to a parent of the initial search
base. I think duplicates can be avoided by tracking each search base and
filtering out result entries that fall within already processed search
bases.

Kind Regards,
Stefan
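[Editor's note] The duplicate-avoidance idea in the last point — track each processed search base and drop result entries that fall under one of them — can be sketched as follows. This is a hypothetical illustration, not ApacheDS code: the class, method names, and the use of plain strings for DNs are all assumptions made for brevity.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of duplicate avoidance during alias dereferencing.
// DNs are modeled as plain strings such as "ou=a,dc=example"; an entry
// "cn=x,ou=a,dc=example" counts as being "under" the base "ou=a,dc=example".
public class AliasSearchDedup {
    // Search bases whose subtrees have already been fully processed.
    private final Set<String> processedBases = new LinkedHashSet<>();

    // True if dn equals base or is a descendant of base (string suffix check).
    static boolean isUnder(String dn, String base) {
        return dn.equals(base) || dn.endsWith("," + base);
    }

    // Returns only the candidate entries not already covered by a previously
    // processed base, then records this base as processed.
    public List<String> collect(String searchBase, List<String> candidateDns) {
        List<String> results = new ArrayList<>();
        for (String dn : candidateDns) {
            boolean duplicate = false;
            for (String base : processedBases) {
                if (isUnder(dn, base)) {
                    duplicate = true;
                    break;
                }
            }
            if (!duplicate) {
                results.add(dn);
            }
        }
        processedBases.add(searchBase);
        return results;
    }
}
```

With this in place, an alias pointing at a parent of the initial search base re-enters the enlarged scope, but entries under the already-processed original base are filtered out instead of being returned twice.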
