On Wed, Sep 14, 2016 at 10:25 AM, Dilip Kumar <dilipbal...@gmail.com> wrote:
> I have tested performance with approach 1 and approach 2.
>
> 1. Transaction (script.sql): I used the transaction below to run my
> benchmark. One can argue that this may not be an ideal workload, but I
> chose it to put more load on CLogControlLock at commit time: each
> savepoint starts a subtransaction, and each SELECT ... FOR UPDATE
> assigns that subtransaction its own XID, so every transaction
> generates multiple CLOG status updates at commit.
>
> -----------
> \set aid random(1, 30000000)
> \set tid random(1, 3000)
>
> BEGIN;
> SELECT abalance FROM pgbench_accounts WHERE aid = :aid FOR UPDATE;
> SAVEPOINT s1;
> SELECT tbalance FROM pgbench_tellers WHERE tid = :tid FOR UPDATE;
> SAVEPOINT s2;
> SELECT abalance FROM pgbench_accounts WHERE aid = :aid FOR UPDATE;
> END;
> -----------
>
> 2. Results
> ./pgbench -c $threads -j $threads -T 10 -M prepared -f script.sql postgres
> scale factor: 300 (a driver sketch for these runs follows the legend below)
> Clients    head (tps)    grouplock (tps)    granular (tps)
> -------    ----------    ---------------    --------------
> 128        29367         39326              37421
> 180        29777         37810              36469
> 256        28523         37418              35882
>
>
> grouplock --> approach 1: group mode to reduce CLogControlLock contention
> granular  --> approach 2: granular locking model
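>
> For reproducibility, a minimal sketch of the driver loop (hypothetical;
> assumes the transaction above is saved as script.sql and the server is
> already running):
> -----------
> #!/bin/sh
> # Run the benchmark at each client count reported in the table above.
> for threads in 128 180 256
> do
>     ./pgbench -c $threads -j $threads -T 10 -M prepared -f script.sql postgres
> done
> -----------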
>
> I will test the 3rd approach as well when I get time.
>
> 3. Summary:
> 1. Compared to head, both patches gain almost ~30% in performance at
> higher client counts (128 and beyond).
> 2. Group lock is ~5% better than granular lock.

Forgot to mention that this test was run on unlogged tables.
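
For reference, the pgbench tables can be initialized unlogged at this
scale with pgbench's --unlogged-tables option:
./pgbench -i -s 300 --unlogged-tables postgres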


-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

