Thanks for the quick response. It's the existing node, the one where the cleanup failed.
It has a larger data volume than the other nodes.
From: Akhil Mehra
Date: 2017-06-19 14:56
Subject: Re: Cleaning up related issue
Is the node with the large volume a new node or an existing node? If it is an
existing node, is this the one where the nodetool cleanup failed?
On 19/06/2017, at 6:40 PM, wxn...@zjqunshuo.com wrote:
After adding a new node, I started a cleanup task to remove the old data on
the other 4 nodes. All went well except on one node: the cleanup took hours,
and the Cassandra daemon crashed on that third node. I checked the node and
found the crash was caused by an OOM; the Cassandra data volume had zero space
left. I removed the temporary files, which I believe were created during the
cleanup process, and restarted Cassandra.
The node rejoined the cluster successfully, but I noticed one thing: in the
"nodetool status" output, that node holds much more data than the other nodes.
Normally the load should be around 700GB, but it's actually 1000GB. Why is it
larger? Please see the output below.
UN 10.253.44.149 705.98 GB 256 40.4%
UN 10.253.106.218 691.07 GB 256 39.9%
UN 10.253.42.113 623.73 GB 256 39.3%
UN 10.253.41.165 779.38 GB 256 40.1%
UN 10.253.106.210 1022.7 GB 256 40.3%
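For reference, the imbalance above can be checked mechanically. A minimal sketch, assuming the quoted "nodetool status" lines are saved to a file (the 20% outlier threshold is my own choice, not anything Cassandra reports):

```shell
# Save the quoted status lines (fields: state, address, load, unit, tokens, owns).
cat > status.txt <<'EOF'
UN 10.253.44.149 705.98 GB 256 40.4%
UN 10.253.106.218 691.07 GB 256 39.9%
UN 10.253.42.113 623.73 GB 256 39.3%
UN 10.253.41.165 779.38 GB 256 40.1%
UN 10.253.106.210 1022.7 GB 256 40.3%
EOF

# Average the per-node loads and flag any node more than 20% above average.
awk '{ sum += $3; n++; load[$2] = $3 }
     END { avg = sum / n
           printf "average load: %.2f GB\n", avg
           for (ip in load)
             if (load[ip] > 1.2 * avg)
               printf "outlier: %s at %s GB\n", ip, load[ip] }' status.txt
# -> average load: 764.57 GB
# -> outlier: 10.253.106.210 at 1022.7 GB
```

Since ownership ("Owns") is nearly even across the five nodes, a load this far above the average usually points at leftover data on disk rather than uneven token distribution.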