I am working with the 4.1 CE bundled webapp edition and have run into a serious query-performance issue. Here is some background on what we are trying to do.
We have a database of books and authors that we are bringing into a JCR so that web site editors can add information pertinent to web display on top of this data. I started out by importing all the data we need into a JackRabbit (1.6.0 now, but started with 1.5.6) Derby repository. We have only about 16K authors and 12K books to deal with at the moment, and all nodes are of type nt:unstructured.

Standard search queries against plain JackRabbit are fast: //*[jcr:contains(.,'john') and @nodeType = 'au'] executes in 0.001 seconds, and I can iterate over 25 nodes/page in about 0.1 seconds. That is more than acceptable for our web site and usage patterns.

I then created our custom Magnolia module and imported the entire dataset into a books workspace. The exact same query running within Magnolia takes 0.1 seconds (repeatable, and always 100 times slower than plain JackRabbit), while iterating over 25 nodes/page takes around 1 second (again repeatable, and 10 times slower than plain JackRabbit).

My primary question is: what is the reason behind such an extreme performance difference? I would expect more or less the same performance, given that Magnolia uses JackRabbit underneath. Are there any performance parameters I need to set in order to get plain JackRabbit performance? I could of course maintain our repository as an external repository outside of Magnolia and build my pages and dialogs on that, but that would be a last resort.

I normally run my tests on JackRabbit with 256 MB of heap, while I tried assigning 1 GB of heap to the Magnolia (author) instance.

All help and suggestions would be extremely welcome.

Thanks in advance,
Rakesh
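P.S. For reference, this is more or less the harness I time with against standalone JackRabbit, trimmed down to a sketch. The repository.xml path and the admin credentials are placeholders for my local test setup:

import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;
import org.apache.jackrabbit.core.TransientRepository;

public class QueryTiming {
    public static void main(String[] args) throws Exception {
        // Standalone JackRabbit; config path and credentials are
        // placeholders for my local setup.
        Repository repository =
                new TransientRepository("repository.xml", "repository");
        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            QueryManager qm = session.getWorkspace().getQueryManager();
            Query query = qm.createQuery(
                    "//*[jcr:contains(.,'john') and @nodeType = 'au']",
                    Query.XPATH);

            long start = System.nanoTime();
            QueryResult result = query.execute();
            System.out.println("execute: "
                    + (System.nanoTime() - start) / 1000000 + " ms");

            // Walk one page of 25 hits, touching each node so it is
            // actually loaded and not just counted.
            start = System.nanoTime();
            NodeIterator it = result.getNodes();
            for (int i = 0; i < 25 && it.hasNext(); i++) {
                Node node = it.nextNode();
                node.getPath();
            }
            System.out.println("iterate 25 nodes: "
                    + (System.nanoTime() - start) / 1000000 + " ms");
        } finally {
            session.logout();
        }
    }
}

Inside Magnolia I run what should be the equivalent test through our module, obtaining the query manager for the books workspace (via MgnlContext.getQueryManager("books"), if I am reading the 4.1 API correctly) and executing the same XPath statement; that is where the 100x difference shows up.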
