Mike Matrigali <[EMAIL PROTECTED]> writes:

> Having said that it would be interesting if someone had time to
> implement a higher performance latch implementation and plug it in
> and see how much it helps. It would decrease the total time spent
> in lock manager.
Ok, a new experiment: I removed the calls to LockFactory.latchObject()
and LockFactory.unlatch() in BasePage. Instead, I let BasePage check
manually whether it was latched and use wait/notifyAll if it was. The
patch (which is very simple) is attached.
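For readers who want the gist without reading the diff: the patch replaces the lock-manager latch with a wait/notifyAll scheme on the page's own monitor. Here is a minimal, self-contained sketch of that idea; the class and method names are mine for illustration, not the actual BasePage code:

```java
// A minimal sketch of a monitor-based exclusive latch, in the spirit
// of the attached patch. Names are illustrative, not Derby code.
public class SimpleLatch {
    private Object owner; // null when the latch is free

    // Blocking acquire: wait until the latch is free, then claim it.
    public synchronized void latch(Object requester) {
        while (owner != null) {
            try {
                wait(); // woken by notifyAll() in unlatch()
            } catch (InterruptedException ie) {
                // The patch leaves handling open ("throw StandardException?");
                // this sketch simply retries the wait.
            }
        }
        owner = requester;
    }

    // Non-blocking attempt, analogous to the NO_WAIT path.
    public synchronized boolean tryLatch(Object requester) {
        if (owner != null) {
            return false;
        }
        owner = requester;
        return true;
    }

    // Release and wake every waiter; there is no fair queue, so
    // whichever thread wins the monitor next gets the latch.
    public synchronized void unlatch() {
        owner = null;
        notifyAll();
    }
}
```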
To see the effect of this change, I tested the patch on a dual-CPU
machine with the test client from DERBY-1961 running single-record
select operations. Derby was running in embedded mode, and the entire
database was in the page cache. The results for 1 to 100 concurrent
clients compared to the code in trunk are shown in the attached graph
(latch.png).
For a single client there was little gain, but for two clients the
throughput increased by 20% compared to trunk. For three clients, the
increase was 40%, and it was 145% for 30 clients. This was a lot more
than I expected! I also ran a TPC-B like test with 20 clients and saw
a 17% increase in throughput (disk write cache was enabled).
I would guess that the improvement is mainly caused by
a) Less contention on the lock table since the latches no longer
were stored in the lock table.
b) Fewer context switches because the fair queue in the lock manager
wasn't used, allowing clients to process more transactions before
they needed to give the CPU to another thread.
I hadn't thought about b) before, but I think it sounds reasonable
that using a fair wait queue for latches would slow things down
considerably if there is a contention point like the root node of a
B-tree. It also seems reasonable for latching not to use a fair
queue, since latches are held for such a short time that starvation
is unlikely to be a problem.
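The fair-versus-unfair trade-off described above can be illustrated with java.util.concurrent's ReentrantLock, which supports both modes. This is only an analogy, not code from the patch:

```java
import java.util.concurrent.locks.ReentrantLock;

// An unfair lock lets a releasing thread barge back in immediately,
// saving a context switch; a fair lock hands off FIFO to the
// longest-waiting thread, like the lock manager's queue that the
// patch bypasses for latches.
public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();   // default: barging allowed
        ReentrantLock fair = new ReentrantLock(true); // strict FIFO hand-off
        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```

Plain synchronized blocks, as used in the patch, behave like the unfair case: the JVM makes no ordering guarantee among threads competing for a monitor.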
--
Knut Anders
Index: java/engine/org/apache/derby/impl/store/raw/data/BasePage.java
===================================================================
--- java/engine/org/apache/derby/impl/store/raw/data/BasePage.java
(revision 477535)
+++ java/engine/org/apache/derby/impl/store/raw/data/BasePage.java
(working copy)
@@ -1774,12 +1774,24 @@
}
}
// just deadlock out ...
+
+ while (owner != null) {
+ try {
+ wait();
+ } catch (InterruptedException ie) {
+ // throw StandardException?
+ }
+ }
+
+ owner = requester;
+ preLatch = true;
+ requester.addObserver(this);
}
// Latch the page, owner is set through the Lockable call backs.
- t.getLockFactory().latchObject(
- t, this, requester, C_LockFactory.WAIT_FOREVER);
+ //t.getLockFactory().latchObject(
+ // t, this, requester, C_LockFactory.WAIT_FOREVER);
// latch granted, but cleaner may "own" the page.
@@ -1836,12 +1848,20 @@
}
}
// just deadlock out ...
+
+ if (owner == null) {
+ owner = requester;
+ preLatch = true;
+ requester.addObserver(this);
+ } else {
+ return false;
+ }
}
// Latch the page, owner is set through the Lockable call backs.
- boolean gotLatch = t.getLockFactory().latchObject(t, this,
-     requester, C_LockFactory.NO_WAIT);
- if (!gotLatch)
- return false;
+ //boolean gotLatch = t.getLockFactory().latchObject(t, this,
+ //    requester, C_LockFactory.NO_WAIT);
+ //if (!gotLatch)
+ // return false;
synchronized (this)
{
@@ -1899,8 +1919,14 @@
return;
}
- RawTransaction t = owner.getTransaction();
- t.getLockFactory().unlatch(myLatch);
+ //RawTransaction t = owner.getTransaction();
+ //t.getLockFactory().unlatch(myLatch);
+ synchronized (this) {
+ owner.deleteObserver(this);
+ owner = null;
+ myLatch = null;
+ notifyAll();
+ }
}
/*
[Attachment: latch.png (throughput graph)]