On Wed, Nov 14, 2018 at 5:13 PM Joshua Yanovski
<joshua.yanov...@gmail.com> wrote:
>
> This is only a personal anecdote, but from my own experience with 
> serializability, this sort of blind update isn't often contended in realistic 
> workloads.  The reason is that (again, IME), most blind writes are either 
> insertions, or "read-writes in disguise" (the client read an old value in a 
> different transaction); in the latter case, the data in question are often 
> logically "owned" by the client, and will therefore rarely be contended.  I 
> think there are two major exceptions to this: transactions that perform 
> certain kinds of monotonic updates (for instance, marking a row complete in a 
> worklist irrespective of whether it was already completed), and automatic 
> bulk updates.  However, these were exactly the classes of transactions that 
> we already ran under a lower isolation level than serializability, since they 
> have tightly constrained shapes and don't benefit much from the additional 
> guarantees.
>
> So, if this only affects transactions with blind updates, I doubt it will 
> cause much pain in real workloads (even though it might look bad in 
> benchmarks that include a mix of blind writes and read-modify-write (RMW) 
> Particularly if it only happens if you explicitly opt into zheap storage.
>
Thanks, Joshua, for sharing your input on this. I don't have much
visibility into realistic workloads that run serializable transactions
myself, so this is really helpful.
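For concreteness, here is a small sketch of the two update shapes being
contrasted above: a blind "monotonic" update (marking a worklist row
complete irrespective of its current state) versus a read-modify-write
that reads the row first in the same transaction. The `worklist` table
and column names are hypothetical, and SQLite stands in for PostgreSQL
here, so this illustrates only the statement shapes, not zheap or
serializable-isolation behavior:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE worklist (id INTEGER PRIMARY KEY, done INTEGER)")
conn.execute("INSERT INTO worklist (id, done) VALUES (42, 0)")

# Blind (monotonic) update: write the row without reading it first.
# The outcome is the same whether or not it was already complete.
conn.execute("UPDATE worklist SET done = 1 WHERE id = 42")

# Read-modify-write: read the current value in the same transaction,
# then decide what to write based on it.
(done,) = conn.execute("SELECT done FROM worklist WHERE id = 42").fetchone()
if not done:
    conn.execute("UPDATE worklist SET done = 1 WHERE id = 42")
conn.commit()
```

Under serializable isolation the second shape establishes a read-write
dependency on the row, while the first does not, which is why the two
can behave differently under contention.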


-- 
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com
