Knut Anders Hatlen <[EMAIL PROTECTED]> writes:

> For one client, not much was gained, but for two clients, the
> throughput increased 20% compared to trunk. For three clients, the
> increase was 40%, and it was 145% for 30 clients. This was a lot more
> than I expected! I also ran a TPC-B like test with 20 clients and saw
> a 17% increase in throughput (disk write cache was enabled).

Wow!! :)

>
> I would guess that the improvement is mainly caused by
>
>   a) Less contention on the lock table since the latches no longer
>      were stored in the lock table.
>
>   b) Fewer context switches because the fair queue in the lock manager
>      wasn't used, allowing clients to process more transactions before
>      they needed to give the CPU to another thread.
>
> I hadn't thought about b) before, but I think it sounds reasonable
> that using a fair wait queue for latches would slow things down
> considerably if there is a contention point like the root node of a
> B-tree. I also think it sounds reasonable that the latching doesn't
> use a fair queue, since the latches are held for such a short time
> that starvation is not likely to be a problem.

From your patch it seems like moving latching out of the lock manager
makes the code _more_ readable AND faster. Seems like a win-win
situation :)
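
For anyone who wants to see the effect of b) in isolation, here is a
minimal stand-alone sketch. It is not Derby code; it just uses
java.util.concurrent's ReentrantLock, whose constructor takes a
fairness flag, as a stand-in for a latch on a hot page such as the
B-tree root. With many threads hammering a very short critical
section, the fair variant usually gets noticeably less total work
done, because every handoff forces a context switch to the
longest-waiting thread:

import java.util.concurrent.locks.ReentrantLock;

/**
 * Minimal sketch (not Derby code): compares throughput of a fair vs.
 * a non-fair lock under heavy contention, standing in for a hot
 * latch such as the one on the root node of a B-tree.
 */
public class LatchFairnessDemo {

    private static long run(boolean fair, int threads, long millis)
            throws InterruptedException {
        final ReentrantLock latch = new ReentrantLock(fair);
        final long[] counts = new long[threads];
        final long deadline = System.currentTimeMillis() + millis;

        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            final int id = i;
            workers[i] = new Thread(() -> {
                while (System.currentTimeMillis() < deadline) {
                    latch.lock();          // acquire the "latch"
                    try {
                        counts[id]++;      // very short critical section
                    } finally {
                        latch.unlock();
                    }
                }
            });
            workers[i].start();
        }
        for (Thread w : workers) {
            w.join();
        }

        long total = 0;
        for (long c : counts) {
            total += c;
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        int threads = 30;       // roughly "30 clients"
        long millis = 2000;

        long fairTotal = run(true, threads, millis);
        long unfairTotal = run(false, threads, millis);

        System.out.println("fair queue:    " + fairTotal + " acquisitions");
        System.out.println("no fair queue: " + unfairTotal + " acquisitions");
    }
}

As I understand it, the non-fair run is roughly how latches behave
after the patch, while the fair run behaves more like waiting in the
lock manager's queue. (The class and variable names above are mine,
not from the patch.)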

-- 
dt
