Hi,
I've recently upgraded our Cassandra cluster from 2.1 to 3.9. By default(?),
3.9 creates a debug.log file containing a ton of lines (a new one every
second) like:
DEBUG [GossipTasks:1] 2016-10-12 14:43:38,761 Gossiper.java:337 -
Convicting /172.31.137.65 with status hibernate - alive false
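If the extra debug.log is unwanted, it can be switched off in conf/logback.xml. A minimal sketch, assuming the stock logback.xml that ships with 3.x (appender names may differ if the config has been customized): drop the debug appender reference from the root logger.

```xml
<!-- conf/logback.xml: in the stock 3.x config the root logger
     references an async debug appender; removing that reference
     stops debug.log from being written. -->
<root level="INFO">
  <appender-ref ref="SYSTEMLOG" />
  <appender-ref ref="STDOUT" />
  <!-- <appender-ref ref="ASYNCDEBUGLOG" /> commented out to silence debug.log -->
</root>
```

Note this only disables the extra file; the gossip chatter itself is controlled by the logger levels further down in the same file.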
> information similar to that requested on the above ticket, as well
> as what operation you were performing on the node (was it a failed attempt
> at replacing? etc.) on a JIRA ticket, someone might have a chance to look
> into this further.
>
> On Wed, Oct 12, 2016 at 9:48 AM, Kasper
> state SEVERITY: 0.5102040767669678
> TRACE [GossipStage:1] 2016-10-17 11:17:06,597 Gossiper.java:889 - local
> heartbeat version 1906 greater than 1905 for /172.31.150.151
> TRACE [GossipStage:1] 2016-10-17 11:17:06,597
> GossipDigestAckVerbHandler.java:84 - Sending a GossipDigestAck
Hi,
I have a large number (can be >100 million) of (id uuid, score int) entries
in Cassandra. At regular intervals, let's say every 30-60 minutes, I need to
find the cut-off scores needed to be in the top 0.1%, 33% and
66% of all scores.
What would a good approach to this problem be?
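One way, since Cassandra has no server-side percentile aggregation, is to scan the score column at each interval and compute the cut-offs client-side. A minimal sketch in Python; `top_fraction_cutoffs` is a hypothetical helper, and the in-memory sort assumes the bare scores fit in RAM. For >100 million rows you would likely substitute a streaming quantile estimator (e.g. t-digest) or work from a sample of the table:

```python
def top_fraction_cutoffs(scores, fractions=(0.001, 0.33, 0.66)):
    """For each fraction f, return the minimum score needed to be
    in the top f of all scores."""
    ranked = sorted(scores, reverse=True)
    cutoffs = {}
    for f in fractions:
        # rank of the last entry still inside the top f (at least 1)
        k = max(1, int(len(ranked) * f))
        cutoffs[f] = ranked[k - 1]
    return cutoffs

# Example: scores 1..1000; top 33% means rank <= 330, so the cutoff is 671.
print(top_fraction_cutoffs(range(1, 1001)))
# -> {0.001: 1000, 0.33: 671, 0.66: 341}
```

The scan itself can be parallelized across token ranges so each worker feeds its slice of scores into the estimator.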
A