Such a recommendation depends on exactly what the test is doing. I think you were still changing the test, so it is hard to know what it is doing now until you update the descriptions and send out the source. Having said that, I will give advice based on what I saw previously.
1) Assuming your benchmark still does deletes and updates of 5000 rows based on a WHERE clause on a single field, I would suggest:
   o bump the default lock escalation threshold from 5000 to 10000
   o make sure there is an index on the field you are using for the update/delete
2) Memory depends on cache size and number of threads. Changing the cache size depends on the expected size of the database. I would not expect the default memory size to need changing for the number of threads you had before (i.e. 1 to 10). I am not going to guess at a cache size without knowing the database table and index sizes.
3) Batching the inserts in units of 1000 may help; 100 is definitely better than 10.
4) I am not sure what application mix you are trying to model. I believe the app you had before always had all threads inserting, deleting and updating the same key value. If that is what you are looking to test, then so be it. Usually multi-user tests assume some percentage of different keys per thread, so that they measure multi-thread scalability that is not totally bound by each thread locking the others out on logical locks. Usually the same key accounts for less than 20% of the total update operations.
5) You may have to alter the lock timeout default, depending on the above changes and the total size of the database, especially if you don't add indexes on the keyed deletes and updates.

Peter Kovgan (JIRA) wrote:
> [ http://issues.apache.org/jira/browse/DERBY-465?page=comments#action_12316752 ]
>
> Peter Kovgan commented on DERBY-465:
> ------------------------------------
>
> Mike and other Derby hackers!
>
> Could you suggest me best parameters to run "multithreading access write" test on Derby?
>
> What lock, memory and other parameters are optimal for such test?
>
> I have already some results, but I must complete all tests to show whole picture.
>
> Please, advice me how to improve Derby performance in "multithreading access write" mode.
> I mean 2 and more threads try to insert/update/delete simultaneously.
>
> Many thanks!
>
>> Embedded Derby-PointBase comparison
>> -----------------------------------
>>
>>          Key: DERBY-465
>>          URL: http://issues.apache.org/jira/browse/DERBY-465
>>      Project: Derby
>>         Type: Wish
>>   Components: Test
>>     Versions: 10.0.2.1, 10.0.2.0
>>  Environment: Windows Server 2003, 4 processors, summary CPU 3.00 Ghz, RAM 1 Gb
>>     Reporter: Peter Kovgan
>>  Attachments: Benchmarks_info_independent.doc, DBOperations.java, Multithreading-access read.doc, User.java, derby-optimization.doc, derby-pb1.doc
>>
>> I have tested 4 major embedded DB.
>> I have found that major disadvantage of Derby is
>> 1) low insert speed and
>> 2) significant performance degradation in select, update, delete operation speed starting from some table size.
>> PointBase in comparison has not such degradation.
>> It will be better if you improve your product.
>> Good luck and thank you.
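For reference, the lock escalation and lock timeout suggestions in points 1 and 5 above map to Derby system properties. A minimal derby.properties sketch follows; the 10000 threshold is the value suggested above, while 120 seconds is only an example timeout you would tune to your workload, not a recommendation:

```properties
# Raise the lock escalation threshold from its default of 5000 (point 1)
derby.locks.escalationThreshold=10000
# Lengthen the lock wait timeout in seconds; the default is 60 (point 5)
derby.locks.waitTimeout=120
```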
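The insert batching suggested in point 3 can be sketched with plain JDBC. This is only an illustration, assuming derby.jar is on the classpath; the table name, column names, and embedded JDBC URL below are placeholders, not taken from the benchmark source:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsertSketch {
    // Batch size of 1000, as suggested in point 3.
    static final int BATCH_SIZE = 1000;

    public static void main(String[] args) throws SQLException {
        // Placeholder embedded-Derby URL; adjust database name as needed.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:derby:benchdb;create=true")) {
            conn.setAutoCommit(false); // commit once per batch, not per row
            try (PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO users (id, name) VALUES (?, ?)")) {
                for (int i = 1; i <= 5000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "user" + i);
                    ps.addBatch();
                    if (i % BATCH_SIZE == 0) {
                        ps.executeBatch(); // send 1000 rows in one round trip
                        conn.commit();
                    }
                }
                ps.executeBatch(); // flush any remainder below BATCH_SIZE
                conn.commit();
            }
        }
    }
}
```

Committing per batch rather than per row is usually where most of the win comes from in an embedded engine, since each commit forces a log flush.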
