On Mon, Nov 7, 2011 at 2:45 PM, Robert Haas <robertmh...@gmail.com> wrote:

>> 2. Improve CLOG concurrency or performance in some way so that
>> consulting it repeatedly doesn't slow us down so much.

We should also ask what makes the clog slow. I think it shows physical
contention as well as logical contention on the lwlock. Since we have
2 bits per transaction, at least 256 transactions' status bits fit in
each cacheline of the clog (exactly 256 with 64-byte cachelines).
Consecutive transactions are currently stored next to each other, so
the "current" cacheline has to be passed around between those 256
transactions one at a time. That is a problem if they all finish at
about the same time.
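
For reference, the mapping at the top of clog.c currently looks
roughly like this (quoting from memory, so check the source); it shows
why consecutive xids land in adjacent 2-bit slots of the same bytes:

#define CLOG_BITS_PER_XACT  2
#define CLOG_XACTS_PER_BYTE 4
#define CLOG_XACTS_PER_PAGE (BLCKSZ * CLOG_XACTS_PER_BYTE)

#define TransactionIdToPgIndex(xid) ((xid) % (TransactionId) CLOG_XACTS_PER_PAGE)
#define TransactionIdToByte(xid)    (TransactionIdToPgIndex(xid) / CLOG_XACTS_PER_BYTE)
#define TransactionIdToBIndex(xid)  ((xid) % (TransactionId) CLOG_XACTS_PER_BYTE)

So xids 0..255 all map to bytes 0..63 of the first page, i.e. a
single 64-byte cacheline.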

My proposal is to stripe the clog so that consecutive xids are not
adjacent: xids would always be at least 64 bytes apart on an
8192-byte clog page. That allows 128 commits with consecutive xids to
complete concurrently, as far as physical access to memory is
concerned.

That's just a "one line" change to the defines at the top of clog.c,
so it's easy enough to play with.

#define CACHELINE_SZ   64
#define CACHELINES_PER_BLOCK (BLCKSZ / CACHELINE_SZ)
#define CLOG_XACTS_PER_CACHELINE (CLOG_XACTS_PER_BYTE * CACHELINE_SZ)

/* consecutive xids go to consecutive cachelines, 64 bytes apart */
#define TransactionIdToByte(xid)    \
    (CACHELINE_SZ * (TransactionIdToPgIndex(xid) % CACHELINES_PER_BLOCK) + \
     (TransactionIdToPgIndex(xid) / CACHELINES_PER_BLOCK) / CLOG_XACTS_PER_BYTE)

plus a few extra lines to fix the other defines.
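
To sketch the rest, the matching change for the bit offset within the
byte might be something like this (untested, just to show the
arithmetic, using the same macro names as above):

#define TransactionIdToBIndex(xid)  \
    ((TransactionIdToPgIndex(xid) / CACHELINES_PER_BLOCK) % CLOG_XACTS_PER_BYTE)

With that, the first xid on a page maps to byte 0, the next to byte
64, ..., the 128th to byte 8128; the 129th wraps back to byte 0 but
uses the next 2-bit slot, so any run of 128 consecutive xids touches
128 different cachelines.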


> 5. Make the WAL writer more responsive, maybe using latches, so that
> it doesn't take as long for the commit record to make it out to disk.

I'm already working on this as part of the update for power
reduction, group commit and replication performance.

-- 
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

