Hi All,
I am facing a situation in my 3-node Cassandra cluster wherein one of the
nodes goes down after around 5-10 minutes.
The messages below are seen in the debug.log of the node that is going down:
===
INFO [ScheduledTasks:1] 2019-05-30 14:39:25,179 StatusLogger.java:101 -
Have you checked system log for GC messages on the node that’s going down?
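For example, you can scan system.log for GCInspector entries, which record each GC pause. A minimal sketch -- the log lines below are invented for illustration, and the exact format varies by Cassandra version:

```python
import re

# Invented system.log excerpt in the usual GCInspector shape; a real log
# will differ in detail, but each GC pause gets a "... GC in <N>ms" line.
sample_log = """\
INFO  [Service Thread] 2019-05-30 14:39:20,101 GCInspector.java:284 - ParNew GC in 412ms.  CMS Old Gen: ...
INFO  [Service Thread] 2019-05-30 14:39:25,050 GCInspector.java:284 - ConcurrentMarkSweep GC in 8123ms.  CMS Old Gen: ...
"""

# Flag pauses longer than 1s; long stop-the-world GCs can make a node
# appear down to the rest of the cluster.
pause_re = re.compile(r"GCInspector\.java\S*\s+-\s+\S+ GC in (\d+)ms")
long_pauses_ms = [int(m.group(1))
                  for m in pause_re.finditer(sample_log)
                  if int(m.group(1)) > 1000]
print(long_pauses_ms)  # [8123]
```

An 8-second pause like the one flagged here is easily long enough for gossip to mark the node down.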
On Thu, May 30, 2019 at 1:53 PM Kunal wrote:
> Hi All,
>
> I am facing a situation in my 3 nodes cassandra wherein one of the
> cassandra nodes is going down after around 5-10mins.
>
> Below messages are seen in
Thank you Anthony and Jonathan. To add a new ring, it doesn't have to be the
same version of Cassandra, right? For example, DSE 5.12, which is 3.11.0, has
sstables with the mc name, and Apache 3.11.3 also uses sstable names with mc.
We should still be able to add it to the ring, correct?
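On the sstable-format point, the format version is visible in the data file names themselves. A quick sketch, assuming the common 3.x-era "big" filename layout (check your data directory for the actual names):

```python
import re

# 3.x-era SSTable data files are named <format>-<generation>-big-<Component>.db,
# e.g. "mc-1-big-Data.db"; the leading "mc" is the on-disk format version.
# Two installations writing the same format version are sstable-compatible.
name_re = re.compile(r"^([a-z]{2})-(\d+)-big-([A-Za-z]+)\.db$")

def sstable_format(filename):
    """Return the on-disk format version encoded in an sstable filename."""
    m = name_re.match(filename)
    return m.group(1) if m else None

print(sstable_format("mc-1-big-Data.db"))    # mc
print(sstable_format("ma-42-big-Index.db"))  # ma
```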
On Wed, May 29, 2019, 9:55 PM
It appears you have two goals you are trying to accomplish at the same time.
My recommendation is to break it into two different steps. You need to decide
if you are going to upgrade DSE or OSS.
* Upgrade DSE then migrate to OSS
* Upgrade DSE to version that matches OSS 3.11.3
Thanks for your replies guys. I really appreciate it.
@Alain, I use Graphite for backend on top of Grafana. But the goal is to
move from Graphite to Prometheus eventually.
I tried to find a direct way of getting a specific Latency metric as an
average and, as Chris pointed out, the Mean value isn't
For what it is worth, I would generally recommend just using the mean rather
than calculating it yourself. It's a lot easier, and averages are meaningless
for anything besides trending anyway (which is really what this is useful for:
finding issues at the larger scale), especially with high-volume clusters
Sorry for the duplicated emails but I just want to make sure I'm doing
it correctly:
To summarize, are both ways accurate, or is one better than the other?
>
> org.apache.cassandra.metrics.ClientRequest.Latency.Read these measure the
> latency in milliseconds
>
It's actually in microseconds, unless you call the values() operation, which
gives the histogram in nanoseconds.
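So if you graph the Mean attribute (microseconds) next to numbers from values() (nanoseconds), both need converting onto one axis, e.g. milliseconds. A minimal sketch of the conversion -- the sample readings below are invented for illustration:

```python
# ClientRequest latency metrics: scalar attributes such as Mean are reported
# in microseconds, while the raw values() histogram is in nanoseconds.
def micros_to_millis(us):
    return us / 1_000.0

def nanos_to_millis(ns):
    return ns / 1_000_000.0

mean_us = 1520.0         # e.g. the Mean attribute of ...ClientRequest.Latency.Read
bucket_ns = 1_520_000.0  # the same latency as seen via values()

print(micros_to_millis(mean_us))   # 1.52
print(nanos_to_millis(bucket_ns))  # 1.52
```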
On Wed, May 29, 2019 at 4:34 PM Paul Chandler wrote:
> There are various
Yep. I would *never* use mean when it comes to performance to make any
sort of decisions. I prefer to graph all the p99 latencies as well as the
max.
Some good reading on the topic:
https://bravenewgeek.com/everything-you-know-about-latency-is-wrong/
On Thu, May 30, 2019 at 7:35 AM Chris