[ 
http://issues.apache.org/jira/browse/DERBY-1704?page=comments#action_12449899 ] 
            
Knut Anders Hatlen commented on DERBY-1704:
-------------------------------------------


Thanks. Do you think it would be better to leave SinglePool as it is
and create a new (perhaps optional) MultiPool implementation instead?


I haven't looked very closely at the deadlock detection code yet, but
I hope it won't require too many changes. One possibility is to
obtain the synchronization locks on all partitions before the waiters
graph is built. Of course, some precautions are needed in order to
avoid Java-level deadlocks when multiple synchronization locks may be
obtained in different orders.
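
To illustrate the idea (this is only a sketch; the class and method
names below are made up and not taken from the Derby code or the
attached patch), the detector could always enter the partition
monitors in ascending index order, so that two threads running
deadlock detection at the same time can never block each other in a
cycle at the Java level:

    // Sketch only: acquire every partition monitor in a fixed
    // (ascending index) order before building the waiters graph.
    final class PartitionedLockTable {
        private final Object[] partitions;   // one monitor per partition

        PartitionedLockTable(int numPartitions) {
            partitions = new Object[numPartitions];
            for (int i = 0; i < partitions.length; i++) {
                partitions[i] = new Object();
            }
        }

        /** Run deadlock detection while holding all partition monitors. */
        void withAllPartitionsLocked(Runnable buildWaitersGraph) {
            lockFrom(0, buildWaitersGraph);
        }

        // The consistent ordering is what prevents Java-level deadlocks
        // between two concurrent detectors.
        private void lockFrom(int index, Runnable action) {
            if (index == partitions.length) {
                action.run();
                return;
            }
            synchronized (partitions[index]) {
                lockFrom(index + 1, action);
            }
        }
    }

The recursion depth equals the number of partitions, which would be a
small constant, so that should not be a problem in practice.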

-- 
Knut Anders


> Allow more concurrency in the lock manager
> ------------------------------------------
>
>                 Key: DERBY-1704
>                 URL: http://issues.apache.org/jira/browse/DERBY-1704
>             Project: Derby
>          Issue Type: Improvement
>          Components: Services, Performance
>    Affects Versions: 10.2.1.6
>            Reporter: Knut Anders Hatlen
>         Assigned To: Knut Anders Hatlen
>            Priority: Minor
>         Attachments: 1cpu.png, 2cpu.png, 8cpu.png, split-hashtables.diff, 
> split-hashtables.stat
>
>
> I have seen indications of severe monitor contention in SinglePool
> (the current lock manager) when multiple threads access a Derby
> database concurrently. When a thread wants to lock an object, it needs
> to obtain the monitor for both SinglePool and LockSet (both of them
> are global synchronization points). This leads to poor scalability.
> We should investigate how to allow more concurrency in the lock
> manager, and either extend SinglePool or implement a new manager.
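
As an illustration of the direction the attached split-hashtables.diff
points in, here is a rough sketch of a lock table split over several
independently synchronized partitions (the class and method names are
made up and do not come from the Derby code base or the patch):

    import java.util.HashMap;
    import java.util.Map;

    // Sketch only: choose a partition by the hash of the object being
    // locked, so threads locking unrelated objects synchronize on
    // different monitors instead of one global one.
    final class StripedLockTable<K, V> {
        private final Map<K, V>[] partitions;

        @SuppressWarnings("unchecked")
        StripedLockTable(int numPartitions) {
            partitions = new Map[numPartitions];
            for (int i = 0; i < numPartitions; i++) {
                partitions[i] = new HashMap<K, V>();
            }
        }

        private Map<K, V> partitionFor(K key) {
            int h = key.hashCode();
            return partitions[(h & 0x7fffffff) % partitions.length];
        }

        V get(K key) {
            Map<K, V> p = partitionFor(key);
            synchronized (p) {           // only this partition is locked
                return p.get(key);
            }
        }

        V put(K key, V value) {
            Map<K, V> p = partitionFor(key);
            synchronized (p) {
                return p.put(key, value);
            }
        }
    }

With such a split, only threads whose lock requests hash to the same
partition contend on the same monitor.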

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
