Hi, what version of Jackrabbit are you using?
see also the following post for some pointers on how to optimize query
performance:
http://n4.nabble.com/Explanation-and-solutions-of-some-Jackrabbit-queries-regarding-performance-td516614.html

(a rough sketch of executing and consuming such a query through the
plain JCR API follows below the quoted message)

regards
marcel

On Fri, Nov 13, 2009 at 03:12, howard.zhuzhu <[email protected]> wrote:
>
> Hi,
> I have finished the migration to a new repository with a datastore. The
> old repository's size is nearly 50G.
> First I query all "nuke:file" nodes from the old repository and write them
> to a folder, together with a property file that records each filename and
> its corresponding JCR path.
> Then I write all files to the new repository.
> But the XPath query "//element(*, nuke:file)" runs too slowly (in my
> case it took 436 minutes).
>
> My CND definition looks like this:
> ************************begin******************************
> [nuke:file] > nt:hierarchyNode, mix:versionable, mix:lockable
> - nuke:fileID (string) primary mandatory
> - nuke:author (string)
> - nuke:name (string)
> - nuke:size (long)
> - nuke:contentType (string)
> - nuke:keywords (string)
> - nuke:modifyUser (string)
> - nuke:modifyTime (date)
> - nuke:notification (string)
> - nuke:content (binary)
> ************************end******************************
>
> my search configuration:
> ************************begin*****************************
> <SearchIndex
>     class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
>
> </SearchIndex>
> ***********************end********************************
> are there any ways to optimize this query?
>
> --
> View this message in context:
> http://n4.nabble.com/XQUERY-Optimization-tp620713p620713.html
> Sent from the Jackrabbit - Users mailing list archive at Nabble.com.
>
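For illustration, here is a minimal sketch (not from the original thread) of
running the same node-type query through the standard JCR API and iterating
the result lazily. The TransientRepository setup, the admin credentials and
the nuke:fileID property access are placeholder assumptions; substitute
however the real repository is obtained.

************************begin******************************
import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;
import org.apache.jackrabbit.core.TransientRepository;

public class NukeFileExport {
    public static void main(String[] args) throws Exception {
        // Placeholder: TransientRepository picks up repository.xml and the
        // repository home from system properties; use the real repository here.
        Repository repository = new TransientRepository();
        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            QueryManager qm = session.getWorkspace().getQueryManager();
            // Same node-type query as in the original post. Scoping it to a
            // subtree (e.g. "/content//element(*, nuke:file)") is usually
            // cheaper than searching the whole workspace.
            Query query = qm.createQuery("//element(*, nuke:file)", Query.XPATH);
            QueryResult result = query.execute();

            // Iterate lazily; avoid collecting all nodes up front, which
            // forces the whole result set to be resolved at once.
            NodeIterator it = result.getNodes();
            while (it.hasNext()) {
                Node file = it.nextNode();
                System.out.println(file.getPath() + " -> "
                        + file.getProperty("nuke:fileID").getString());
            }
        } finally {
            session.logout();
        }
    }
}
***********************end********************************

Scoping the XPath statement to a subtree instead of the whole workspace and
consuming the NodeIterator lazily are the usual first steps before touching
the SearchIndex configuration.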
