We still don't fully understand why this kernel bug didn't affect *all* our
nodes (in the end we had three nodes with that kernel; only two of them
exhibited the issue), but there we go.
Thanks everyone for your help.
Cheers,
Griff
On 14 January 2016 at 15:14, James Griffin <james.grif...@idioplatform.com> wrote:
> Did you see promotion failures
> or concurrent mode failures?
>
> If you are on CMS, you need to fine tune your heap options to address full
> gc.
>
>
>
> Thanks
> Anuj
>
> Sent from Yahoo Mail on Android
> <https://overview.mail.yahoo.com/mobile/?.src=Android>
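As an aside for anyone chasing the failures Anuj mentions: they only show up if GC logging is on. A sketch of how one might enable it for a pre-Java 9 CMS setup (the log path and occupancy value are my assumptions, not from this thread; these lines typically go in cassandra-env.sh):

```shell
# Sketch only: HotSpot CMS diagnostics, appended to Cassandra's JVM options.
# The log path and the 75% threshold are assumed values, tune for your nodes.
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
# Start CMS cycles earlier and deterministically, which is the usual first
# step against "concurrent mode failure":
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
```

With these set, "promotion failed" and "concurrent mode failure" entries appear verbatim in the GC log when they happen.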
See <http://idioplatform.com/> for more information.
On 14 January 2016 at 14:22, Kai Wang <dep...@gmail.com> wrote:
> James,
>
> Can you post the result of "nodetool netstats" on the bad node?
>
> On Thu, Jan 14, 2016 at 9:09
On 14 January 2016 at 15:08, Kai Wang <dep...@gmail.com> wrote:
> James,
>
> I may have missed something. You mentioned your cluster had RF=3. Then why
> does "nodetool status" show each node owning 1/3 of the data, especially
> after a full repair?
>
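A quick aside on the arithmetic behind that question (my own sketch, not from the thread): without a keyspace argument, the "Owns" column in `nodetool status` reports each node's share of the token ring, not effective data ownership. With RF=3 on three nodes every node stores a replica of everything, so effective ownership is the ring share times RF, capped at 100%:

```shell
# Sketch of the ownership arithmetic; node count and RF taken from the thread.
rf=3
nodes=3
awk -v rf="$rf" -v n="$nodes" 'BEGIN {
  share = 100 / n              # the "Owns" column without a keyspace argument
  eff = share * rf             # each node also holds replicas of other ranges
  if (eff > 100) eff = 100     # a node cannot own more than all the data
  printf "ring share: %.1f%%, effective ownership: %.0f%%\n", share, eff
}'
```

This prints `ring share: 33.3%, effective ownership: 100%`, i.e. ~1/3 in "Owns" is expected even though every node holds a full copy.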
> On Thu, Jan 14, 2016 at 9:56 AM, James Griffin wrote:
28.8%
> faa5b073-6af4-4c80-b280-e7fdd61924d3 rack1
>
>
>
> Thanks
> Anuj
>
>
> On Wed, 13 Jan, 2016 at 10:34 pm, James Griffin
> <james.grif...@idioplatform.com> wrote:
>> UN A (Good)  252.37 GB  256  23.0%  9cd2e58c-a062-48a4-8d3f-b7bd9ee0576f  rack1
>> UN B (Good)  245.91 GB  256  24.4%  6f0cfff2-babe-4de2-a1e3-6201228dee44  rack1
>> UN C (Good)  254.79 GB  256  23.7%  f4891729-9179-4f19-ab2c-50d387d
Hi all,
We’ve spent a few days running things but are in the same position. To add
some more flavour:
- We have a 3-node ring, replication factor = 3. We’ve been running in
this configuration for a few years without any real issues
- Nodes 2 & 3 are much newer than node 1. These two