Hi all,
Out of the blue, I started receiving a strange error message, apparently
related to guava (see below), when inserting a new row. I really don't
understand what is going on. I didn't change anything on the server, and
everything works fine if the query is an update (i.e. the row already exists).
> to engineer around the 900+ tables, no matter which GC you use.
>
>
>
> Sean Durity – Staff Systems Engineer, Cassandra
>
>
>
> *From:* Luca Rondanini
> *Sent:* Monday, July 19, 2021 11:34 AM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] R/W timeouts V
Thanks Yakir,
I can already see slow repairs and startups, but I'd like to stabilize the
system before jumping into refactoring (columns are not a problem, max ~10
cols per table). Do you think GC could be causing the timeouts and crashes?
I'll give it a try and update this thread.
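For what it's worth, a quick way to check whether GC really is behind the
timeouts (assuming a default package install, so logs under
/var/log/cassandra) is to look for long pauses reported by GCInspector and
compare their timestamps against the client timeouts, e.g.:

  grep -i GCInspector /var/log/cassandra/system.log | tail -50
  nodetool gcstats

Pauses of hundreds of milliseconds or more that line up with the timeouts
would point at GC/heap pressure rather than at the table count alone.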
Hi all,
I have a keyspace with almost 900 tables.
Lately I started receiving lots of read/write timeouts, e.g.
com.datastax.driver.core.exceptions.Read/WriteTimeoutException: Cassandra
timeout during write query at consistency LOCAL_ONE (1 replica were
required but only 0 acknowledged the write).
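For reference, the server-side limits that trip this error live in
cassandra.yaml; these are the stock 3.x/4.0 defaults, shown only as an
illustration, not as a recommendation to raise them:

  read_request_timeout_in_ms: 5000
  write_request_timeout_in_ms: 2000

Raising them tends to just hide whatever is making the node slow, so they are
more of a diagnostic knob than a fix.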
*I'm
; Hannu
>
> > On 15. Jun 2022, at 10.08, Luca Rondanini wrote:
> >
> > Hi all,
> >
> > I'm just trying to understand better how cassandra works.
> >
> > My understanding is that, once set, the number of vnodes does not change
> > in a cluster.
es would be changed, not all 25600 when a
> new node is joining.
>
> So you see that for each node it’s only 30mb to replicate to the new node.
> Not very expensive, right?
>
> In real life, it’s not so precise and all but the basic idea is the same.
>
> Cheers,
> Hannu
>
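To restate the arithmetic in Hannu's example with rough, purely illustrative
numbers: with N nodes and num_tokens vnodes each, the ring has
N x num_tokens token ranges in total. A joining node picks num_tokens new
tokens, each of which splits an existing range, so it ends up owning roughly
1/(N+1) of the data, streamed in small slices from the nodes whose ranges
were split, rather than a full copy from any single node. That is why the
per-node streaming cost in his example is only on the order of tens of
megabytes.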
Hi all,
I'm just trying to understand better how cassandra works.
My understanding is that, once set, the number of vnodes does not change in
a cluster. The partitioner allocates vnodes to nodes, ensuring that replicas
of the same data are not stored on the same node.
But what happens if there are more nodes
> load on the cluster, it should be somewhat
> evenly distributed among the other nodes. If you have just a single token
> per node, then scaling up or down has somewhat different effects due to
> balancing issues etc. So there is a reason why the default num_tokens is 16
> currently.
>
> Cheers,
> Hannu
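For anyone looking for the setting being discussed, it lives in
cassandra.yaml; this is roughly what a current 4.x default looks like (the
allocation option was named allocate_tokens_for_keyspace in 3.x, so adjust
for your version):

  num_tokens: 16
  allocate_tokens_for_local_replication_factor: 3

Note that num_tokens only takes effect when a node first joins the ring;
changing it for an existing node means decommissioning and re-bootstrapping
that node.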