On 7/25/06, sowmi <[EMAIL PROTECTED]> wrote:
I am trying to load items into my repository in batches of 10,000. While the first loads are quick enough, the save time seems to increase with each subsequent batch. These are the times in seconds for each batch of 10,000 items: 42, 121, 186, 267, 309, 343, 372, 412, 451, 498, 524, 540. Any idea why this happens? Is the save time dependent on the number of items already in the repository? I am using the same session object for each of the batch saves. Would it make a difference if I recycled the session? Please help.
Run the test with Jackrabbit's default configuration (i.e. DerbyPersistenceManager) and compare the results. Since you're using a custom persistence manager/schema, I suspect that is the reason for the degrading performance you're seeing. Jackrabbit's scalability is mainly affected by the choice of persistence manager/schema. DerbyPersistenceManager should scale reasonably well; however, if your nodes have 'large' numbers of child nodes (e.g. > 30k), performance will be negatively affected. Cheers, Stefan
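For reference, the batch-import pattern under discussion — saving every 10,000 items and recycling the session between batches instead of reusing one long-lived session — can be sketched as below. This is a minimal sketch, not code from the thread: the `Session` class here is a simplified stand-in for `javax.jcr.Session` so the example is self-contained; a real import would use `repository.login()`, `session.getRootNode().addNode(...)`, `session.save()`, and `session.logout()`.

```java
// BatchImportSketch: saves items in fixed-size batches and opens a fresh
// session per batch, as discussed in the thread.
public class BatchImportSketch {

    // Simplified stand-in for javax.jcr.Session (illustration only).
    static class Session {
        int pendingChanges = 0;
        int saves = 0;

        void addItem(String name) { pendingChanges++; }  // ~ Node.addNode(...)
        void save() { saves++; pendingChanges = 0; }     // ~ Session.save()
        void logout() { }                                // ~ Session.logout()
    }

    static final int BATCH_SIZE = 10_000;

    // Imports totalItems items, saving after every BATCH_SIZE items and
    // recycling the session between batches. Returns the number of saves.
    static int importItems(int totalItems) {
        int saves = 0;
        Session session = new Session();                 // ~ repository.login()
        for (int i = 0; i < totalItems; i++) {
            session.addItem("item" + i);
            if (session.pendingChanges == BATCH_SIZE) {
                session.save();
                saves++;
                session.logout();                        // recycle the session
                session = new Session();                 // fresh session per batch
            }
        }
        if (session.pendingChanges > 0) {                // flush the tail
            session.save();
            saves++;
        }
        session.logout();
        return saves;
    }

    public static void main(String[] args) {
        System.out.println(importItems(25_000)); // 25,000 items -> 3 saves
    }
}
```

Whether recycling the session actually helps depends on the setup; per Stefan's reply, the persistence manager choice and the number of child nodes under a single parent are the dominant factors.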
-- View this message in context: http://www.nabble.com/Increase-in-save-time-tf1999932.html#a5491201 Sent from the Jackrabbit - Users forum at Nabble.com.
