Hi Laxmi,
  What's the size of the data per node? If the data is really huge, let the
decommission process continue. Otherwise, stop the Cassandra process on the
decommissioning node and, from another node in the datacenter, run
"nodetool removenode <host-id>". This might speed up the removal, since the
streaming will come from two replicas rather than just one.
Also see if unthrottling the stream throughput helps.
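The steps above might look roughly like this (a sketch; the service name and the host ID shown are assumptions for illustration, and `setstreamthroughput 0` removes the throttle entirely):

```shell
# On the decommissioning node: stop Cassandra
# (service name may differ on your install)
sudo service cassandra stop

# On another node in the same datacenter: find the stopped node's host ID
nodetool status

# Remove the dead node; streaming now comes from the remaining replicas
nodetool removenode 11111111-2222-3333-4444-555555555555   # example host ID

# Optionally unthrottle streaming (0 = unlimited)
nodetool setstreamthroughput 0
```

You can watch progress with "nodetool removenode status" from the same node.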

   Make sure there are no TCP sessions in a hung state. If you see any TCP
sessions in a hung state, alter the TCP parameters.
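One way to spot hung sessions, assuming the default inter-node storage port 7000 and that the `ss` utility is available:

```shell
# List TCP connections on the storage port with timer and internal info;
# sessions stuck in retransmission or with a large Send-Q are suspects
ss -tnoi '( sport = :7000 or dport = :7000 )'
```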

sudo sysctl -w net.core.wmem_max=16777216
sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.ipv4.tcp_window_scaling=1
sudo sysctl -w net.ipv4.tcp_keepalive_time=1800
sudo sysctl -w net.ipv4.tcp_keepalive_probes=9
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=75
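Note that `sysctl -w` changes don't survive a reboot. To persist them, something like the following (the path shown is the conventional one; your distro may prefer a drop-in under /etc/sysctl.d/):

```shell
# Append the settings so they survive reboots, then reload them
cat <<'EOF' | sudo tee -a /etc/sysctl.conf
net.core.wmem_max=16777216
net.core.rmem_max=16777216
net.ipv4.tcp_window_scaling=1
net.ipv4.tcp_keepalive_time=1800
net.ipv4.tcp_keepalive_probes=9
net.ipv4.tcp_keepalive_intvl=75
EOF
sudo sysctl -p
```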


thanks

On Thu, Sep 15, 2016 at 9:28 AM, laxmikanth sadula <laxmikanth...@gmail.com>
wrote:

> I started decommissioning a node in our cassandra cluster.
> But it's taking too long (more than 12 hrs), so I would like to
> restart (stop/kill the node & restart 'node decommission' again)..
>
> Will killing the node/stopping decommission and restarting decommission
> cause any issues to the cluster?
>
> Using c*-2.0.17 , 2 Data centers, each DC with 3 groups each , each group
> with 3 nodes with RF-3
>
> --
> Thanks...!
>
