Yes, sorry, I missed the double parentheses in the first case.
I may be a bit off here, but I don't think the coordinator pinpoints the row,
just the node it needs to go to.
It's more a case of creating smaller partitions, which makes for a more even
load across the cluster, and the node will
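The routing step described above can be sketched roughly like this. It is a toy Python illustration only: real Cassandra hashes with Murmur3 and uses vnodes and replication, and every name below is made up.

```python
import bisect
import hashlib

def token_for(partition_key: tuple) -> int:
    """Hash the full partition key tuple to a token in [0, 2**64).
    Stand-in for Cassandra's Murmur3 partitioner."""
    raw = ":".join(str(part) for part in partition_key).encode()
    return int.from_bytes(hashlib.md5(raw).digest()[:8], "big")

class Ring:
    """Minimal token ring: each node owns the range up to its token."""

    def __init__(self, node_tokens):
        # node_tokens: list of (token, node_name) pairs
        pairs = sorted(node_tokens)
        self.tokens = [t for t, _ in pairs]
        self.nodes = [n for _, n in pairs]

    def node_for(self, partition_key: tuple) -> str:
        # The coordinator only needs the token to pick the node;
        # it does not locate the row itself.
        t = token_for(partition_key)
        i = bisect.bisect_left(self.tokens, t) % len(self.tokens)
        return self.nodes[i]

ring = Ring([(2**62, "node-a"), (2**63, "node-b"), (3 * 2**62, "node-c")])
print(ring.node_for(("key1-value", "key2-value")))
```

The same partition key always hashes to the same token, so every request for it lands on the same node.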
Thank you Oskar. I think you may be missing the double parentheses in the
first example - the difference is between a partition key of (key1, key2, key3)
and one of (key1, key2). With that in mind, I believe your answer would be that
the first example is more efficient?
Is this essentially a case of the
The second one will be the more efficient of the two.
How much depends on how unique key1 is.
In the first case, everything for the same key1 will be in the same partition.
If it's not unique at all, that will be very bad.
In the second case, the combination of key1 and key2 decides the partition.
If you
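A quick way to see how the choice of partition key changes partition counts is to group the same rows under each candidate key. The rows and column values below are made up purely for illustration:

```python
from collections import defaultdict

# Hypothetical rows as (key1, key2, key3) triples.
rows = [
    ("user1", "2016-01", "a"),
    ("user1", "2016-01", "b"),
    ("user1", "2016-02", "c"),
    ("user2", "2016-01", "d"),
]

def partitions(rows, key_fn):
    """Group rows by whatever the partition key function extracts."""
    parts = defaultdict(list)
    for r in rows:
        parts[key_fn(r)].append(r)
    return parts

# PRIMARY KEY ((key1, key2, key3)): every distinct triple is its own partition.
p_triple = partitions(rows, lambda r: (r[0], r[1], r[2]))

# PRIMARY KEY ((key1, key2), key3): rows sharing (key1, key2) share a
# partition, with key3 acting as the clustering column within it.
p_pair = partitions(rows, lambda r: (r[0], r[1]))

print(len(p_triple))  # 4 partitions
print(len(p_pair))    # 3 partitions
```

More, smaller partitions spread load more evenly; fewer, wider partitions let one query read several related rows together.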
Wondering if there's a difference when querying by primary key between the
two definitions below:
primary key ((key1, key2, key3))
primary key ((key1, key2), key3)
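Spelled out as full table definitions (table names, column types, and the value column here are hypothetical), the two options would look like:

```sql
-- Option 1: all three columns form the composite partition key.
-- A single-partition read must supply key1, key2, AND key3.
CREATE TABLE t1 (
    key1 text,
    key2 text,
    key3 text,
    value text,
    PRIMARY KEY ((key1, key2, key3))
);

-- Option 2: (key1, key2) is the partition key, key3 is a clustering column.
-- A read can supply just key1 and key2 and then slice or range over key3.
CREATE TABLE t2 (
    key1 text,
    key2 text,
    key3 text,
    value text,
    PRIMARY KEY ((key1, key2), key3)
);
```

With the first definition a query that omits key3 cannot target a single partition; with the second, all rows for a given (key1, key2) live together, sorted by key3.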
In terms of read speed/efficiency... I don't have much of a reason
otherwise to prefer one setup over the other, so would prefer the
Hi Romain,
Thanks for the input!
We currently use the Kilo release of OpenStack. Are you aware of any known
bugs/issues with this release?
We definitely defined anti-affinity rules to spread C* across different
hosts. (I certainly don't want to be woken up at night due to a failed
host ;-) )
There is nothing wrong with general purpose EBS volumes if we are talking
about gp2 (the SSD-backed ones). With bigger volumes you get more IOPS; a
3.4TB volume gives you 10,000 IOPS which, in your case, is overkill (you
are probably looking at 1TB).
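The gp2 numbers above follow from the published baseline of 3 IOPS per GB, with a 100 IOPS floor and (at the time of this thread) a 10,000 IOPS cap; a quick sanity check:

```python
def gp2_baseline_iops(size_gb: int) -> int:
    """Baseline IOPS for a gp2 EBS volume: 3 IOPS per GB,
    floored at 100 IOPS and capped at 10,000 IOPS
    (the cap in effect when this thread was written)."""
    return max(100, min(3 * size_gb, 10_000))

print(gp2_baseline_iops(3400))  # ~3.4 TB volume -> 10000 (hits the cap)
print(gp2_baseline_iops(1000))  # 1 TB volume   -> 3000
```

So the 3.4TB size is exactly where the volume reaches the cap; a 1TB volume already gives a 3,000 IOPS baseline.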
Take a look at TWCS since you are inserting