I also expanded on a script originally written by Matt Stump @ Datastax.
The readme has the reasoning behind requiring sub-range repairs.
https://github.com/hancockks/cassandra_range_repair
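The core idea behind a subrange-repair script like that one can be sketched as: split the node's token range into many small, contiguous subranges and repair each one with `nodetool repair -st/-et`, so no single repair session has to stream or build Merkle trees for the whole primary range. This is a minimal sketch only, assuming a Murmur3-style signed-integer token space and toy token values; the real tool also handles vnodes, other partitioners, and retries.

```python
# Minimal sketch of subrange repair (assumption: Murmur3-style integer
# tokens; toy bounds are used below instead of real cluster tokens).

def split_range(start, end, steps):
    """Split the token range (start, end] into `steps` contiguous subranges.

    The last subrange absorbs any division remainder, so the pieces
    always cover (start, end] exactly with no gaps or overlaps.
    """
    width = (end - start) // steps
    tokens = [start + i * width for i in range(steps)] + [end]
    return list(zip(tokens, tokens[1:]))

def repair_commands(start, end, steps, keyspace):
    # Each subrange becomes one small, bounded repair instead of one
    # huge repair of the node's entire primary range.
    return [
        f"nodetool repair -st {st} -et {et} {keyspace}"
        for st, et in split_range(start, end, steps)
    ]

if __name__ == "__main__":
    # Toy example: 4 subranges over the range (-100, 100].
    for cmd in repair_commands(-100, 100, 4, "my_keyspace"):
        print(cmd)
```

Running the subranges one at a time (and pausing or retrying between them) is what keeps memory pressure bounded on the repairing node.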
On Mon, Jun 30, 2014 at 10:20 PM, Phil Burress philburress...@gmail.com
wrote:
@Paulo, this is very cool! Thanks very much for the link!
Thanks! We retrieved all the ranges and started running repair on them. We
ran through all of them but found one single range which brought the ENTIRE
cluster down. All of the other ranges ran quickly and smoothly. This one
problematic range reliably brings it down every time we try to run repair.
On Tue, Jul 1, 2014 at 3:53 PM, Phil Burress philburress...@gmail.com
wrote:
Thanks! We retrieved all the ranges and started running repair on them. We
ran through all of them but found one single range which brought the ENTIRE
cluster down. All of the other ranges ran quickly and smoothly.
We are running into an issue with nodetool repair. One or more of our nodes
will die with OOM errors when running nodetool repair on a single node. Was
reading this http://www.datastax.com/dev/blog/advanced-repair-techniques
and it mentioned using the -snapshot option; however, that doesn't appear
to be available in our version.
Repair uses the snapshot option by default since 2.0.2 (see NEWS.txt),
so you don't have to specify it in your version.
Do you have a stacktrace from when it OOMed?
On Mon, Jun 30, 2014 at 4:54 PM, Phil Burress philburress...@gmail.com wrote:
We are running into an issue with nodetool repair. One or more of our
The stack won't help a ton since the memory leak will occur elsewhere… the
stack will just have the point where the memory allocation failed :-(
On Mon, Jun 30, 2014 at 3:08 PM, Yuki Morishita mor.y...@gmail.com wrote:
Repair uses snapshot option by default since 2.0.2 (see NEWS.txt).
So you
On Mon, Jun 30, 2014 at 3:08 PM, Yuki Morishita mor.y...@gmail.com wrote:
Repair uses snapshot option by default since 2.0.2 (see NEWS.txt).
As a general meta comment, the process by which operationally important
defaults change in Cassandra seems ad hoc and sub-optimal.
For the record, my
We are running repair -pr. We've tried subrange manually and that seems to
work ok. I guess we'll go with that going forward. Thanks for all the info!
On Mon, Jun 30, 2014 at 6:52 PM, Jaydeep Chovatia
chovatia.jayd...@gmail.com wrote:
Are you running a full repair or on a subset? If you are
One last question. Any tips on scripting a subrange repair?
On Mon, Jun 30, 2014 at 7:12 PM, Phil Burress philburress...@gmail.com
wrote:
We are running repair -pr. We've tried subrange manually and that seems to
work ok. I guess we'll go with that going forward. Thanks for all the info!
If you find it useful, I created a tool where you input the node IP,
keyspace, column family, and optionally the number of partitions (default:
32K), and it outputs the list of subranges for that node, CF, and partition
size: https://github.com/pauloricardomg/cassandra-list-subranges
So you can
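A small driver around the output of a subrange-listing tool like that could look like the following. This is a hypothetical sketch: it assumes each output line is a `start:end` token pair, which may not match the tool's actual format (check its README), and the `nodetool` invocation is printed rather than executed unless you opt in.

```python
# Hypothetical driver for a subrange list (assumption: one "start:end"
# token pair per line -- the real output format may differ).
import subprocess

def parse_subranges(text):
    """Parse lines like '-9223372036854775808:-4611686018427387904'."""
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        start, end = line.split(":")
        pairs.append((int(start), int(end)))
    return pairs

def repair_subranges(pairs, keyspace, cf, dry_run=True):
    """Run one small nodetool repair per subrange, sequentially."""
    for start, end in pairs:
        cmd = ["nodetool", "repair",
               "-st", str(start), "-et", str(end), keyspace, cf]
        if dry_run:
            print(" ".join(cmd))  # preview the commands first
        else:
            subprocess.check_call(cmd)  # repair one small range at a time
```

Running the subranges sequentially (with `dry_run=False` once the preview looks right) keeps each repair session small, which is exactly what avoids the OOM behaviour seen with whole-range repairs earlier in this thread.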
@Paulo, this is very cool! Thanks very much for the link!
On Mon, Jun 30, 2014 at 9:37 PM, Paulo Ricardo Motta Gomes
paulo.mo...@chaordicsystems.com wrote:
If you find it useful, I created a tool where you input the node IP,
keyspace, column family, and optionally the number of partitions