Barnet Wagman wrote:
> Am I correct in my understanding that escalating to table locking
> speeds things up? If so, any idea what threshold level might help?
> Also, does this locking mechanism come into play with reads, or just
> with writes?

Locking can be expensive, and escalating to a table lock speeds things up depending on the application. I am not sure there is a single threshold level that would help in general, since it will depend on the application. Currently Derby uses 5000 as the default, and that seems to work OK; I am not sure how this threshold number was decided. This link in the tuning guide discusses some scenarios of how lock escalation happens:
http://incubator.apache.org/derby/manuals/tuning/perf47.html#HDRSII-OPTIMZ-42065

1) You can acquire a table-level lock explicitly with the LOCK TABLE SQL statement, e.g.:

       lock table t1 in exclusive mode

2) There is also a lock escalation mechanism in Derby. When the number of row locks held on a particular table in a transaction reaches a threshold value (default: 5000), Derby automatically escalates the row-level locks to a table-level lock. The escalation threshold can be changed by setting the derby.locks.escalationThreshold property; a sketch of both approaches follows this list. I would not recommend reducing the threshold if the tables are being accessed concurrently.
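Here is a minimal JDBC sketch of both approaches. The database URL (jdbc:derby:myDB) and table name (t1) are just placeholders, and the threshold value 10000 is only an example:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class TableLockDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; point this at your own database.
            Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                // 1) Take an explicit exclusive table lock up front; it is
                //    held until the transaction commits or rolls back.
                stmt.execute("LOCK TABLE t1 IN EXCLUSIVE MODE");
                // ... perform the bulk changes here ...
                conn.commit();

                // 2) Or raise the escalation threshold persistently via the
                //    built-in property procedure (applies database-wide).
                stmt.execute(
                    "CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY("
                    + "'derby.locks.escalationThreshold', '10000')");
                conn.commit();
            }
            conn.close();
        }
    }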
The isolation level of the transaction/statement that acquired the locks also plays an important role. For example, even for reads under REPEATABLE_READ, the locks are held until the end of the transaction, so lock escalation can happen.
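As an illustration, here is a sketch (again with the placeholder names myDB and t1) of how a read-only scan under REPEATABLE_READ accumulates shared locks that live until commit, which is what can trigger escalation:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class RepeatableReadDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; point this at your own database.
            Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
            conn.setAutoCommit(false);
            // Under REPEATABLE_READ, each row a SELECT touches keeps its
            // shared lock until the transaction ends.
            conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT * FROM t1")) {
                while (rs.next()) {
                    // A large enough scan can accumulate more than
                    // derby.locks.escalationThreshold row locks, at which
                    // point Derby may escalate to a shared table lock.
                }
            }
            conn.commit();  // releases the (possibly escalated) locks
            conn.close();
        }
    }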
Sunitha.
