How "hot" are your partition keys in these counters?
I would think, theoretically, that if specific partition keys are getting
thousands of counter increment mutations, then compaction won't
"compact" those together into the final value, and you'll start
experiencing the problems people typically report with hot counter
partitions.
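To make the scenario concrete, each counter increment is a separate mutation against the same partition; a sketch in CQL might look like this (table and column names are invented for illustration, not taken from the thread):

```
-- Hypothetical counter table; every increment below is an individual
-- mutation that compaction must later merge into the final value.
UPDATE event_counters
   SET hits = hits + 1
 WHERE tenant = 'acme'
   AND day = '2017-11-20';
```

If thousands of such updates target the same partition key faster than compaction can merge them, reads of that partition have to reconcile many counter shards, which is where the timeouts tend to appear.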
Hi Javier,
Glad to hear it is solved now. Cassandra 3.11.1 should be a more stable
version, and the 3.11 series a better one overall.
Excuse my misunderstanding; your table seems to be better designed than I
thought.
Welcome to the Apache Cassandra community!
C*heers ;-)
---
Alain Rodriguez -
Hi,
Thank you for your reply.
Since this problem was bothering me, I upgraded the cluster to version
3.11.1 last night, and everything is working now. As far as I can tell, the
counter table can be read now. I will be doing more testing today with this
version, but it is looking good.
To answer your
Hello,
This table has 6 partition keys, 4 primary keys and 5 counters.
I think the root issue is this ^. There may be some inefficiencies or
issues with counters, but a design like this makes Cassandra relatively
inefficient in most cases, whether you use standard columns or counters.
Hello everyone,
I get a timeout error when reading a particular row from a large counters
table.
I have a storm topology that inserts data into a Cassandra counter table.
This table has 6 partition keys, 4 primary keys and 5 counters.
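The schema itself isn't shown in the thread, but a table of that shape (reading "4 primary keys" as 4 clustering columns, which is the usual intent) might look like the following sketch; all table and column names here are invented for illustration, and CQL requires every non-key column of a counter table to be a counter:

```
-- Hypothetical sketch of the described table: a 6-column partition key,
-- 4 clustering columns, and 5 counter columns. Names are made up.
CREATE TABLE event_counters (
    tenant  text,
    region  text,
    app     text,
    host    text,
    metric  text,
    day     text,
    hour    int,
    minute  int,
    shard   int,
    source  text,
    c1 counter,
    c2 counter,
    c3 counter,
    c4 counter,
    c5 counter,
    PRIMARY KEY ((tenant, region, app, host, metric, day),
                 hour, minute, shard, source)
);
```

With six columns in the partition key, every query must supply all six values to address a partition, and wide partitions accumulating many counter mutations can become expensive to read, which matches the timeout symptom described.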
When data starts to be inserted, I can query the counters