On 9/20/11 1:32 PM, Kathey Marsden wrote:
On 9/20/2011 12:20 PM, Rick Hillegas wrote:
That sounds like a good idea.
Sadly, I think it is something that users are not likely to hit until they are in a heavily loaded production environment. Is there a way to achieve the prior behavior, where there is no risk of the error occurring?
Here are some options:

1) It's easy to achieve the prior behavior by backing out the fix for DERBY-4437. That would give up the concurrency boost provided by that work. I believe the user experience of the prior behavior is that access to the identity column goes down to single-threaded usage even at low contention levels. This could be done for 10.8.2.

I was thinking more in terms of whether there is some safe value a user could set derby.language.sequence.preallocator to if an application is not prepared for the new error, or some way for Derby to dynamically slow things down to single-threaded if there are more than 20 simultaneous inserts. (Is 20 the limit if the value is set to 20?)
Hi Kathey,

How much contention is too much depends on a lot of variables, including transaction workload and how the VM schedules threads. I don't think there is any straightforward mapping of thread count to an optimal value for this property. In general, higher is better. This property sets how many sequence/identity values are preallocated every time we grab a new chunk. It is also the maximum size of the gap in the sequence/identity values that will appear if the engine is not shut down in an orderly fashion.
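For example, the property can be set as a system property before the engine boots (or in derby.properties). The property name is real; the value 100 below is purely illustrative, not a tuning recommendation:

```java
public class PreallocatorDemo {
    public static void main(String[] args) {
        // Must be set before the Derby engine boots; here we preallocate
        // 100 sequence/identity values per chunk instead of the default.
        // The value 100 is an illustrative choice only.
        System.setProperty("derby.language.sequence.preallocator", "100");
        System.out.println(System.getProperty("derby.language.sequence.preallocator"));
    }
}
```

The trade-off is that a larger chunk reduces contention on the preallocation lock but widens the possible gap in values after an unclean shutdown.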

If you set derby.locks.waitTimeout to a negative number, then a session will never time out trying to get a sequence/identity value. That will eliminate the "too much contention" error. However, if Derby is configured to have no lock timeout, it is possible that some sessions may hang a long time before obtaining a sequence/identity value.
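A minimal sketch of that configuration, again assuming it is applied before the engine boots (it could equally go in derby.properties):

```java
public class NoLockTimeoutDemo {
    public static void main(String[] args) {
        // Any negative value means sessions wait forever for locks
        // instead of raising a timeout error. Set before engine boot.
        System.setProperty("derby.locks.waitTimeout", "-1");
        System.out.println(System.getProperty("derby.locks.waitTimeout"));
    }
}
```

Note that this disables lock timeouts database-wide, not just for sequence/identity access, so stuck transactions elsewhere will also wait indefinitely.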

Thanks,
-Rick
