On Fri, Jul 7, 2017 at 3:45 AM, Alik Khilazhev <a.khilaz...@postgrespro.ru> wrote:
> PostgreSQL shows very bad results in YCSB Workload A (50% SELECT and 50%
> UPDATE of a random row by PK) when benchmarking with a big number of clients
> using a Zipfian distribution. MySQL also shows a decline, but it is not as
> significant as PostgreSQL's. MongoDB shows no decline at all.
How is that possible? In a Zipfian distribution, no matter how big the table is, almost all of the updates will be concentrated on a handful of rows, and updates to any given row are necessarily serialized, or so I would think.

Maybe MongoDB can be fast there because there are no transactions: it can just lock the row, slam in the new value, and unlock the row, all (I suppose) without writing WAL or doing anything hard. But MySQL is going to have to hold the row lock until transaction commit just like we do, or so I would think. Is it just that their row locking is way faster than ours?

I'm more curious about why we're performing badly than I am about a general-purpose random_zipfian function. :-)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
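[Editor's note: the concentration claim above can be checked with a quick back-of-the-envelope sketch. This is not from the thread; it is a minimal pure-Python Zipfian sampler over a hypothetical million-row table, with the exponent s = 1.07 chosen as an illustrative assumption (YCSB and pgbench use similar near-1 constants). It shows that even with a million rows, a large share of updates lands on the ten hottest rows, which is exactly the serialization point being discussed.]

```python
import random
from bisect import bisect

def zipf_sampler(n_rows, s=1.07, seed=42):
    """Return a draw() function sampling row keys 1..n_rows with P(k) ~ 1/k^s."""
    # Precompute the cumulative distribution once; O(n_rows) memory.
    weights = [1.0 / k ** s for k in range(1, n_rows + 1)]
    total = sum(weights)
    cdf = []
    acc = 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    rng = random.Random(seed)
    # Inverse-CDF sampling: find the first bucket whose CDF exceeds a uniform draw.
    return lambda: bisect(cdf, rng.random()) + 1

draw = zipf_sampler(n_rows=1_000_000)          # hypothetical table size
samples = [draw() for _ in range(100_000)]      # simulated UPDATE targets
top10 = sum(1 for k in samples if k <= 10)
print(f"{top10 / len(samples):.1%} of updates hit the 10 hottest rows")
```

With these (assumed) parameters, roughly a quarter to a third of all updates target just ten rows out of a million, so those rows' locks become the bottleneck regardless of table size.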