I don't really like the automatic upgrade, since we really don't know how to differentiate use of mix:language/sling:message in compact subtrees vs. the sparse case. It might be too expensive to traverse the mix:language nodes for each bundle activation. We could use a heuristic and only look at the first level: if 100% of the nodes are sling:message nodes, assume it's a compact dictionary. In any case, you would need to make this check for each activation.
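A minimal sketch of that first-level heuristic, using a stand-in Node type instead of the real JCR/Sling API (the class and method names here are illustrative, not actual Sling code):

```java
import java.util.List;

// Stand-in for a repository node; in real code this would be a
// javax.jcr.Node or org.apache.sling.api.resource.Resource.
final class Node {
    final String primaryType;
    final List<Node> children;
    Node(String primaryType, List<Node> children) {
        this.primaryType = primaryType;
        this.children = children;
    }
}

final class DictionaryHeuristic {
    // Heuristic from the mail: a mix:language node is assumed to be a
    // compact dictionary only if 100% of its first-level children are
    // sling:message nodes. Only the first level is inspected, to avoid
    // a full subtree traversal on every bundle activation.
    static boolean isCompactDictionary(Node languageRoot) {
        if (languageRoot.children.isEmpty()) {
            return false; // no evidence either way; treat as sparse
        }
        for (Node child : languageRoot.children) {
            if (!"sling:message".equals(child.primaryType)) {
                return false;
            }
        }
        return true;
    }
}
```

Note that this check would still have to run once per activation, which is the cost being objected to above.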
I would rather have a semi-automatic fallback as described before:

0. Read the operation-mode OSGi config property (default: auto).
1. If mode == auto:
   1.1. Search (query) for all sling:Dictionary nodes.
   1.2. If none found, assume old content and set mode = legacy.
2. If mode == legacy, keep the current behavior of querying all messages below the language roots.
3. If mode == modern, query all dictionary roots (or reuse the result from 1.1 if available) and traverse the subtrees when resource bundles are loaded.

Regards,
Toby

On Wed, Dec 18, 2013 at 6:38 PM, Alexander Klimetschek <[email protected]> wrote:
> On 17.12.2013, at 23:12, Carsten Ziegeler <[email protected]> wrote:
>
>> The bundle can either set a marker in the repository
>
> That's probably something we should avoid. The question is where? And why?
>
>> or a file in the
>> bundle private data;
>
> Sounds better.
>
>> the repository is the better place as this can be used
>> in a clustered installation to avoid duplicate or concurrent migration
>
> The migration must be idempotent, and easily will be: it finds the dictionary
> nodes that only have mix:language but no sling:Dictionary yet. If these all
> have a sling:Dictionary, it simply does nothing and sets the "migrated" flag.
> Thus this will run on startup for each cluster node, is a single query as we
> have now, and doesn't cost anything. Better yet, it runs lazily on the first
> request access, so it should avoid concurrent migrations quite well.
> Anyway, these can be handled by retrying when an ItemModifiedException comes
> up (or whatever the exception is called).
>
> Cheers,
> Alex
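The mode-selection steps at the top of this mail could be sketched roughly as follows; the Mode enum, the resolver class, and the idea of passing in pre-queried dictionary roots are all illustrative assumptions, not the actual Sling i18n implementation:

```java
import java.util.List;

// Hypothetical operation modes, mirroring the config property values
// described in the mail (default: auto).
enum Mode { AUTO, LEGACY, MODERN }

final class ModeResolver {
    // Steps 0-1 of the fallback: an explicitly configured mode wins;
    // AUTO is resolved by whether any sling:Dictionary roots exist.
    // "dictionaryRoots" stands in for the result of the repository
    // query in step 1.1, so it can be reused for step 3 (MODERN).
    static Mode resolve(Mode configured, List<String> dictionaryRoots) {
        if (configured != Mode.AUTO) {
            return configured;
        }
        // 1.2: no sling:Dictionary nodes found -> old content layout
        return dictionaryRoots.isEmpty() ? Mode.LEGACY : Mode.MODERN;
    }
}
```

Resolving the mode once per startup (rather than per bundle activation) is what makes this cheaper than the heuristic discussed earlier in the thread.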
