Hi,
What is the accuracy improvement of counters in 2.1 over 2.0?
The post below mentions 2.0.x issues fixed in 2.1 and performance
improvements.
http://www.datastax.com/dev/blog/whats-new-in-cassandra-2-1-a-better-implementation-of-counters
But how accurate are counters in 2.1.x, and are there any known issues?
One thing to note is the exception you get... in this case, you'll get a
timeout, not a failure. That is, as far as Cassandra is concerned, the write is
still ongoing - it hasn't failed; but from the client's perspective, it has timed
out. In this case (i.e. a timeout), the application would usually retry the operation.
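To make the "timeout is not failure" point concrete, here is a minimal Python sketch (not from the thread; all names such as `Replica`, `increment`, and `upsert` are invented) of a write that was applied on a replica even though the ack never reached the client. It shows why blindly retrying after a timeout is dangerous for non-idempotent operations like counter increments, but safe for an idempotent upsert with a fixed client-side timestamp:

```python
# Hypothetical sketch: a timeout means "outcome unknown" - the write
# may already have been applied even though the client saw no ack.

class Replica:
    def __init__(self):
        self.counter = 0
        self.cell = None  # (value, timestamp)

    def increment(self):
        self.counter += 1  # NOT idempotent: each retry adds again

    def upsert(self, value, ts):
        # last-write-wins: keep the cell with the newest timestamp
        if self.cell is None or ts >= self.cell[1]:
            self.cell = (value, ts)

replica = Replica()

# First attempt: the replica applies the increment, but the ack is
# lost, so the client sees a timeout and retries.
replica.increment()  # applied, ack lost
replica.increment()  # retry -> double count
print(replica.counter)  # 2, although the client intended one increment

# An idempotent write with a fixed timestamp is safe to retry:
replica.upsert("state2", ts=100)  # applied, ack lost
replica.upsert("state2", ts=100)  # retry -> same end state
print(replica.cell)  # ('state2', 100)
```

This is one reason counter accuracy questions and timeout handling are linked: a timed-out counter increment cannot be safely retried without risking over-counting.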
On Wed, Jul 8, 2015 at 2:07 PM, Saladi Naidu wrote:
> Suppose I have a row of existing data with a set of values for attributes; I
> call this State1, and issue an update to some columns with QUORUM
> consistency. If the write succeeded on one node, Node1, and failed
> on the remaining nodes. As
Suppose I have a row of existing data with a set of values for attributes; I call
this State1, and I issue an update to some columns with QUORUM consistency. If
the write succeeds on one node, Node1, and fails on the remaining nodes: as
there is no rollback, Node1's row attributes will remain the new values.
If you go with month as the partition key, then you need to duplicate the data. I don't
think using name as the partition key is good data-model practice, as it will
create a hotspot. Also, I believe your queries will mostly be by employee, not by
month.
You can make employee id the partition key and month a clustering column.
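The hotspot argument above can be illustrated with a small Python sketch (all names here, such as `owner_node` and `emp-...`, are invented, and md5-mod-N is only a stand-in for Cassandra's token ring, which by default uses Murmur3): when every row shares the same partition key value, all of its data lands on one node, whereas distinct employee ids spread the load.

```python
import hashlib

def owner_node(partition_key, num_nodes=3):
    # Stand-in for the token ring: hash the partition key and map
    # it to one of num_nodes replicas.
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return h % num_nodes

# A year of rows for one employee, partitioned by name: every row
# for "alice" hashes to the same token, i.e. the same node -> hotspot.
months = [f"2015-{m:02d}" for m in range(1, 13)]
nodes_by_name = {owner_node("alice") for _ in months}
print(len(nodes_by_name))  # 1 -> all of alice's rows on one node

# Distinct employee ids as the partition key spread the load, while
# month as a clustering column still keeps one employee's rows together.
nodes_by_id = {owner_node(f"emp-{i}") for i in range(100)}
print(len(nodes_by_id) > 1)  # True -> rows spread over several nodes
```

The same employee's months stay in one partition (sorted by the clustering column), so "all data for one employee" remains a single-partition query.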
Hi there,
I'm having some trouble trying to run the DataStax agent on my
Cassandra nodes on Ubuntu 14.04.
I have a Cassandra 2.0.15 cluster running in Docker containers, so I guess
my main problem must be related to the containers' port exposure.
The cluster is fine; `nodetool status` shows...
Hi John,
The general answer: each cell in a CQL table has a corresponding timestamp,
which is taken from the clock on the Cassandra node that orchestrates the
write. When you read from a Cassandra cluster, the node that
coordinates the read compares the timestamps of the values it fetches
from each replica and returns the one with the latest timestamp.
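The reconciliation rule just described ("last write wins" by timestamp) can be sketched in a few lines of Python. This is a simplification with an invented function name (`reconcile`), not Cassandra's actual code path, but it captures the comparison the read coordinator performs per cell:

```python
# Minimal sketch of last-write-wins reconciliation: the read
# coordinator keeps, per cell, the value carrying the newest
# timestamp among the replica responses.

def reconcile(responses):
    # responses: list of (value, timestamp) pairs, one per replica
    return max(responses, key=lambda vt: vt[1])[0]

# Node1 got the update (newer timestamp); the other two replicas
# still hold the old value. A quorum read that includes Node1
# returns the new value, and read repair can then push it to the
# stale replicas.
print(reconcile([("state2", 200), ("state1", 100), ("state1", 100)]))  # state2
```

This is also why a partially-applied QUORUM write is not rolled back: once any replica holds the newer timestamp, subsequent reads that touch it will resolve to the new value.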
Hi,
After executing `nodetool cleanup` on some nodes, they are all showing lots
(123, 97, 64) of pending compaction tasks, but not a single active task.
I'm running Cassandra 2.0.14 with the Leveled Compaction Strategy on most of
our tables. Has anyone experienced this before? Also, is there any way for me...