Valid suggestion. Stick to the plan and avoid keeping a node down longer than the
hinted handoff window. Or increase the window to a larger value if you know the
downtime is going to take longer than the current setting.
Regards,
Nitan
Cell: 510 449 9629
> On Apr 8, 2019, at 8:43 PM, Soumya Jena wrote:
>
> Cassandra
Cassandra tracks it, and no new hints will be created once the default 3-hour
window has passed. However, Cassandra will not automatically trigger a repair
if your node is down for more than 3 hours. The default 3-hour hint window is
defined in the cassandra.yaml file. Look for
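For reference, a sketch of the relevant cassandra.yaml fragment, assuming the pre-4.0 setting name max_hint_window_in_ms (milliseconds):

```yaml
# cassandra.yaml -- how long hints are collected for a dead node.
# 10800000 ms = 3 hours is the shipped default; raise it if planned
# maintenance windows are longer than that.
max_hint_window_in_ms: 10800000
```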
Ah, I see, it is the default for hinted handoffs. I was somehow thinking
it was a bigger figure, I do not know why :)
I would say you should run repairs continuously / periodically so you
would not even have to think about that, and it should run
in the background in a scheduled manner if
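A minimal sketch of one way to schedule that in the background, assuming cron and a hypothetical keyspace name (dedicated tools such as Reaper are a common alternative):

```shell
# Hypothetical crontab entry: weekly primary-range repair, run off-peak.
# -pr repairs only this node's primary token ranges, so running it on
# every node in turn covers the whole cluster without duplicate work.
0 2 * * 0  nodetool repair -pr my_keyspace
```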
Hi Kunal,
where did you get that "more than 3 hours" from?
Regards
On Tue, 9 Apr 2019 at 04:19, Kunal wrote:
>
> Hello everyone..
>
>
>
> I have a 6-node Cassandra cluster, 3 nodes in each datacenter. If one of
> the nodes goes down and remains down for more than 3 hours, I have to run nodetool
If it were me, I'd look at raw request rates (in terms of requests per
second as well as request latency), network throughput, and then some
flame graphs of both the server and your application:
https://github.com/jvm-profiling-tools/async-profiler.
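A sketch of capturing such a flame graph with async-profiler, assuming the profiler.sh launcher from that repository and that the Cassandra process can be found via pgrep (both are assumptions about the local setup):

```shell
# Hypothetical invocation: profile the Cassandra JVM for 60 seconds and
# write a flame graph; the .html extension selects flame-graph output.
./profiler.sh -d 60 -f /tmp/cassandra-flame.html "$(pgrep -f CassandraDaemon)"
```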
I've created an issue in tlp-stress to add
Hello everyone..
I have a 6-node Cassandra cluster, 3 nodes in each datacenter. If one of
the nodes goes down and remains down for more than 3 hours, I have to run
nodetool repair. I just wanted to ask whether Cassandra automatically tracks
the time when one of the Cassandra nodes goes down, or do I need
Hi, I'm trying to test whether adding driver compression will bring me any
benefit.
I understand that the trade-off is less bandwidth but increased CPU usage
on both the Cassandra nodes (compression) and the client nodes (decompression),
but I want to know what the key metrics are and how to monitor them to
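The size-vs-CPU trade-off itself can be sketched with a standard-library stand-in (zlib here in place of the driver's LZ4/Snappy codecs; the payload is made up):

```python
import time
import zlib

# Repetitive JSON-like rows, a stand-in for typical request/response traffic.
payload = b'{"user_id": 12345, "event": "click", "ts": 1554800000}\n' * 1000

start = time.perf_counter()
compressed = zlib.compress(payload)
elapsed_ms = (time.perf_counter() - start) * 1e3

# Fewer bytes on the wire, paid for in CPU time on both ends.
print(f"original={len(payload)}B compressed={len(compressed)}B "
      f"ratio={len(compressed) / len(payload):.3f} cpu={elapsed_ms:.2f}ms")
```

On a payload this repetitive the ratio is dramatic; real traffic compresses less, which is exactly what measuring bytes sent alongside client and server CPU would tell you.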