I am not sure I fully understand the question, because nodetool repair is
one of the three ways for Cassandra to ensure consistency. If by "affect"
you mean "make your data consistent and ensure all replicas are
up-to-date", then yes, that's what I think it does.
And yes, I would expect nodetool
I saw an average of 10% CPU usage on each node when the Cassandra cluster has no
load at all.
I checked which thread was using the CPU, and found the following 2 metrics
threads, each occupying 5% CPU.
jstack output:
"metrics-meter-tick-thread-2" daemon prio=10 tid=...
The Cassandra version is 2.0.12. We have 1500 tables in the cluster of 6
nodes, with a total of 2.5 billion rows.
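A common way to confirm which thread is burning CPU is to match top's per-thread view against a jstack dump. This is a sketch, not taken from the original mails; the thread id 54321 is a placeholder you would read from top's output:

```shell
# Sketch: map a hot native thread to its jstack entry.
# The thread id 54321 below is a hypothetical value taken from top's LWP column.
PID=$(pgrep -f CassandraDaemon | head -n 1)
if [ -n "$PID" ]; then
  # Per-thread CPU usage; the PID column in -H mode is the native thread id (LWP).
  top -b -H -n 1 -p "$PID" | head -n 20
  # jstack reports native thread ids in hex as nid=0x..., so convert before grepping.
  jstack "$PID" | grep -B 1 "nid=0x$(printf '%x' 54321)"
fi
# The decimal-to-hex conversion on its own:
printf 'nid=0x%x\n' 54321
```

The hex conversion is the easy step to get wrong: top shows decimal thread ids, while jstack's `nid=` field is hexadecimal.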
On 2015-10-24 at 20:52, "Xu Zhongxing" wrote:
I saw an average of 10% CPU usage on each node when the Cassandra cluster has no
load at all.
I checked which thread was
Max hint window is only part of the equation. If it is down longer than
Max hint window, a repair will still fix up the node for you.
The max time a node can be down before it must be rebuilt is determined by
the lowest gc grace setting on your various tables. By default gc grace is
10 days,
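The 10-day figure comes from the default `gc_grace_seconds` of 864000 in the table options; a quick sanity check of the conversion:

```shell
# gc_grace_seconds defaults to 864000; that is the repair deadline in seconds.
GC_GRACE_SECONDS=864000
echo "$((GC_GRACE_SECONDS / 86400)) days"
```

A node down longer than the lowest `gc_grace_seconds` across your tables risks resurrecting deleted data (tombstones may already have been collected), which is why it must be rebuilt rather than repaired.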
Ideas please, on what I may be doing wrong?
On Sat, Oct 24, 2015 at 5:48 PM, Ajay Garg wrote:
> Hi All.
>
> I have been doing extensive testing, and replication works fine, even if
> any permutation of CAS11, CAS12, CAS21, CAS22 is downed and brought up.
> Syncing
Never mind, Vasileios, you have been a great help !!
Thanks a ton again !!!
Thanks and Regards,
Ajay
On Sat, Oct 24, 2015 at 10:17 PM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:
> I am not sure I fully understand the question, because nodetool repair is
> one of the three ways for
Thanks a ton Vasileios !!
Just one last question ::
Does running "nodetool repair" affect the functionality of cluster for
current-live data?
It's ok if the insertions/deletions of current-live data become a little
slow during the process, but data-consistency must be maintained. If that
is the
I would imagine you are running on fairly slow machines (given the CPU usage),
but 2.0.12 and 2.1 use a fairly old version of the yammer/codahale metrics
library.
It wakes up every 5 seconds and updates Meters… there are a bunch of
these Meters per table (embedded in Timers), so your
On Sat, Oct 24, 2015 at 9:47 AM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:
> I am not sure I fully understand the question, because nodetool repair is
> one of the three ways for Cassandra to ensure consistency. If by "affect"
> you mean "make your data consistent and ensure all
>
>
> All other means of repair are optimizations which require a certain amount
> of luck to happen to result in consistency.
>
Is that true regardless of the CL one uses? So, for example, if writing at
QUORUM and reading at QUORUM, wouldn't an increased read_repair_chance
probability be sufficient? If
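For reference, the reason QUORUM reads see QUORUM writes at all is the overlap arithmetic: with R + W > RF, every read set intersects the latest write set. A minimal sketch, assuming RF=3:

```shell
# With replication factor 3, QUORUM is floor(RF/2) + 1 = 2 replicas.
RF=3
Q=$((RF / 2 + 1))
echo "quorum=$Q"
# R + W > RF guarantees every QUORUM read overlaps the latest QUORUM write
# on at least one replica.
if [ $((Q + Q)) -gt "$RF" ]; then
  echo "read/write quorums overlap"
fi
```

Note that this guarantees what a live read returns; it does not by itself bring the lagging replica up to date, which is where read repair and nodetool repair come in.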
Hi All.
We have a scenario, where the Application-Server (APP), Node-1 (CAS11), and
Node-2 (CAS12) are hosted in DC1.
Node-3 (CAS21) and Node-4 (CAS22) are in DC2.
The intention is that we provide 4-way redundancy to APP, by specifying
CAS11, CAS12, CAS21 and CAS22 as the addresses via
Hello Ajay,
Here is a good link:
http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsRepairNodesManualRepair.html
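For the 2.1 line the basic invocation from that page can be sketched as follows; "my_keyspace" is a placeholder keyspace name, not something from this thread:

```shell
# Sketch for Cassandra 2.1; "my_keyspace" is a hypothetical keyspace name.
# -pr (--partitioner-range) repairs only this node's primary token ranges,
# so running the same command on every node repairs each range exactly once.
KEYSPACE=my_keyspace
CMD="nodetool repair -pr $KEYSPACE"
echo "run on each node: $CMD"
if command -v nodetool >/dev/null 2>&1; then
  $CMD
fi
```

Without -pr, each node repairs every range it replicates, so the same data gets repaired RF times across the cluster.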
Generally, I find the DataStax docs to be OK. You could consult them for
all usual operations etc. Of course there are occasions where a given concept is
not as clear, but
If a node in the cluster goes down and comes up, the data gets synced up on
this downed node.
Is there a limit on the interval for which the node can remain down? Or the
data will be synced up even if the node remains down for weeks/months/years?
--
Regards,
Ajay
Thanks Vasileios for the reply !!!
That makes sense !!!
I will be grateful if you could point me to the node-repair command for
Cassandra-2.1.10.
I don't want to get stuck in a wrong-versioned documentation (already
bitten once hard when setting up replication).
Thanks again...
Thanks and
Hello Ajay,
Have a look at *max_hint_window_in_ms*:
http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configCassandra_yaml_r.html
My understanding is that if a node remains down for more than
*max_hint_window_in_ms*, then you will need to repair that node.
Thanks,
Vasilis
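For reference, the setting lives in cassandra.yaml; this is a sketch of the default, not a value quoted from the thread:

```yaml
# cassandra.yaml: how long a coordinator stores hints for a dead replica.
# 10800000 ms (3 hours) is the default; a node down longer than this
# will miss writes and needs a repair to catch up.
max_hint_window_in_ms: 10800000
```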
Hi All.
I have been doing extensive testing, and replication works fine, even if
any permutation of CAS11, CAS12, CAS21, CAS22 is downed and brought up.
Syncing always takes place (obviously, as long as continuous-downtime-value
does not exceed *max_hint_window_in_ms*).
However, things behave