[
https://issues.apache.org/jira/browse/CASSANDRA-6880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951342#comment-13951342
]
Benedict commented on CASSANDRA-6880:
-------------------------------------
If we're doing per-cell locking, the likelihood of collision across different,
ostensibly uncontended updates increases. So I wonder whether it mightn't be
sensible to have a global set of locks with a larger domain, e.g. 1024 *
concurrent writers, but shared across all tables (or even all keyspaces). A
shared but larger address space should reduce the likelihood of collision,
whilst also bounding the amount of per-table memory we use (128 * concurrent
writers per table could be very expensive in a batch CLE environment with a lot
of tables). It's probably worth capping the number of stripes as well, for the
same reason.
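
Something like the following, purely as a sketch: the class name, the
CONCURRENT_WRITERS constant and the use of Guava's Striped here are
illustrative assumptions, not a concrete patch.

    import java.util.Objects;
    import java.util.concurrent.locks.Lock;

    import com.google.common.util.concurrent.Striped;

    public final class GlobalCounterLocks
    {
        // Stand-in for the concurrent counter writer setting in cassandra.yaml.
        private static final int CONCURRENT_WRITERS = 32;

        // One pool of 1024 * concurrent writers lazily-created stripes, shared by
        // every table and keyspace, so memory stays bounded no matter how many
        // tables exist.
        private static final Striped<Lock> LOCKS = Striped.lazyWeakLock(1024 * CONCURRENT_WRITERS);

        // Per-cell granularity: the stripe is picked by hashing the cell's full
        // identity, not just its partition key.
        public static Lock lockFor(String keyspace, String table, Object partitionKey, Object cellName)
        {
            return LOCKS.get(Objects.hash(keyspace, table, partitionKey, cellName));
        }

        // For a batch, grab stripes in Striped's canonical order to avoid deadlock
        // when two batches touch overlapping cells.
        public static Iterable<Lock> locksFor(Iterable<?> cellIdentities)
        {
            return LOCKS.bulkGet(cellIdentities);
        }
    }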
> counters++ lock on cells, not partitions
> ----------------------------------------
>
> Key: CASSANDRA-6880
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6880
> Project: Cassandra
> Issue Type: Improvement
> Reporter: Aleksey Yeschenko
> Assignee: Aleksey Yeschenko
> Fix For: 2.1 beta2
>
>
> I'm starting to think that we should switch to locking by cells, not by
> partitions, when updating counters.
> With the current 2.1 counters, if nothing changes, the new recommendation
> would become "use smaller partitions, batch updates to the same partition",
> and that goes against what we usually recommend:
> 1. Prefer wide partitions to narrow partitions
> 2. Don't batch counter updates (because you risk exaggerating
> under- or over-counting in case of a timeout)
> Locking on cells would require C* to grab more locks for batch counter
> updates, but it would give us generally more predictable performance
> (independent of partition width), and it wouldn't force people to rework
> their data model if they have frequently and concurrently updated counters
> in the same few wide partitions.
> (It's a small change, code-wise)
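
For illustration, the granularity change described above amounts to widening
the lock key from the partition to the individual cell; the helper and names
below are hypothetical, not taken from the patch.

    import java.util.Objects;

    final class CounterLockKeys
    {
        // Partition-level (2.1 today): all counters in a wide partition share one key,
        // so concurrent updates to that partition serialize on the same lock.
        static int partitionKeyFor(String keyspace, String table, Object partitionKey)
        {
            return Objects.hash(keyspace, table, partitionKey);
        }

        // Cell-level (this ticket): updates to different counters in the same
        // partition map to different keys and can proceed independently.
        static int cellKeyFor(String keyspace, String table, Object partitionKey, Object cellName)
        {
            return Objects.hash(keyspace, table, partitionKey, cellName);
        }
    }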