Also, changing the compaction throughput on the fly while removing nodes is
not scalable, as we have hundreds of nodes.
I can try and test it, though.
On Thursday, July 13, 2017, Jai Bheemsen Rao Dhanwada
> Yes, I did removenode and removenode force, and encountered the same issue
Yes, I did removenode and removenode force, and encountered the same issue in
both cases.
On Thursday, July 13, 2017, Subroto Barua
> set streamthroughput higher than 200 on the source side and lower on the
> target node
> just curious, have you tried
How is your Percent Repaired when you run "nodetool info"?
Search for the "reduced num_token = improved performance ??" topic.
People were discussing that there.
How is your compaction configured?
Could you run the same process from the command line to get a measurement?
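For anyone following along, these checks might look roughly like the following (a sketch to run against a live node; exact output format varies by Cassandra version):

```shell
# Repair state: "Percent Repaired" is a line in the nodetool info output
nodetool info | grep -i "percent repaired"

# Compaction configuration and current activity
nodetool getcompactionthroughput   # configured compaction throttle
nodetool compactionstats           # pending and active compactions
```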
set streamthroughput higher than 200 on the source side and lower on the target node
just curious, have you tried removenode force?
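For reference, the removenode sequence being discussed looks something like this (a sketch against a live cluster; the host ID comes from your own nodetool status output):

```shell
# Find the Host ID of the dead node in the cluster view
nodetool status

# Start streaming its data to the remaining replicas
# (replace <host-id> with the UUID from nodetool status)
nodetool removenode <host-id>

# Check whether the removal is making progress...
nodetool removenode status

# ...or give up on streaming and drop the node immediately.
# Note: force skips streaming, so replicas may be left under-replicated
# until a repair is run.
nodetool removenode force
```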
On Thursday, July 13, 2017, 8:35:38 AM PDT, Jai Bheemsen Rao Dhanwada
Thank you Sean,
You mean setstreamthroughput to a lower value on the node where we are
doing a "nodetool removenode"?
On Thu, Jul 13, 2017 at 8:07 AM, Durity, Sean R wrote:
> Late to this party, but Jeff is talking about nodetool
> setstreamthroughput. The
I want to extend my cluster (C* 3.9) from three nodes with RF 2 to
seven nodes with RF 3.
Is there a preferable way to do this?
Setting "auto_bootstrap: true" and bootstrapping one new node at a time?
Setting "auto_bootstrap: false", starting all new nodes at once, and
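A rough sketch of the add-one-at-a-time approach combined with the RF change (the keyspace name "my_ks" and datacenter name "dc1" are placeholders I have made up; substitute your own):

```shell
# 1. With auto_bootstrap: true (the default), start ONE new node and
#    wait until it shows as UN (Up/Normal) in nodetool status before
#    starting the next; repeat for each of the four new nodes.
nodetool status

# 2. Once all seven nodes have joined, raise the replication factor
#    ("my_ks" and "dc1" are placeholder names):
cqlsh -e "ALTER KEYSPACE my_ks WITH replication =
  {'class': 'NetworkTopologyStrategy', 'dc1': 3};"

# 3. Stream the extra replica copies to their new owners
nodetool repair -full my_ks
```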
Late to this party, but Jeff is talking about nodetool setstreamthroughput. The
default in most versions is 200 Mb/s (set in yaml file as
stream_throughput_outbound_megabits_per_sec). This is an outbound throttle only.
So, if streams from multiple nodes are going to one node, it can get inundated.
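A sketch of checking and adjusting that throttle at runtime (hostnames and values here are placeholders; run against live nodes):

```shell
# View the current outbound stream throttle (default 200 megabits/s)
nodetool getstreamthroughput

# Raise it on the nodes that will stream data out...
nodetool -h source-node setstreamthroughput 400

# ...and lower it on a node that risks being inundated.
nodetool -h target-node setstreamthroughput 100

# A value of 0 disables the throttle entirely.
```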
I like Bryan’s terminology of an “antagonistic use case.” If I am reading this
correctly, you are putting 5 (or 10) million records in a partition and then
trying to delete them in the same order they are stored. This is not a good
data model for Cassandra, in fact a dangerous data model. That
I have a Cassandra 2.1 cluster running on AWS that receives high read
loads, jumping from 100k requests to 400k requests, for example. Then it
normalizes, and later another high-throughput spike comes.
To the application, it appears that Cassandra is slow. However, CPU and
disk use is OK in every
I am very new to Cassandra. Today I tried to view the result of a bootstrap
from another node with the *nodetool -h other_node_IP_address -u cassandra
-pw cassandra bootstrap resume* command and got the exception below:
*nodetool: Failed to connect to 'other_node_IP_address:7199' -
NoSuchObjectException: 'no such
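That error usually means nodetool cannot reach the remote node's JMX endpoint: by default Cassandra binds JMX to localhost only, so remote nodetool calls fail. A hedged sketch of the relevant cassandra-env.sh settings (<node_ip> is a placeholder; note that enabling remote JMX also requires authentication to be configured properly):

```shell
# In cassandra-env.sh on the node you want to query remotely
# (restart Cassandra afterwards):
LOCAL_JMX=no                                               # allow non-local JMX connections
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=<node_ip>"  # RMI callback address
```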