Running Cassandra 1.2.9 in AWS with a 12 host cluster, I am getting lots of
CorruptSSTableException in system.log on one of my hosts.
Is it possible to find out which SSTable(s) is/are corrupt?
I'm currently running "nodetool scrub" on the relevant host, but that
doesn't seem like an efficient way to find out.
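On 1.2.x the exception message generally includes the path of the corrupt file, so one option is to pull the paths straight out of system.log instead of scrubbing blind. A rough sketch (the log path, keyspace/table names, and sample log line below are illustrative assumptions, not output from a real cluster):

```shell
# Sketch: extract the SSTable paths named in CorruptSSTableException messages.
# The sample log line is illustrative; point LOG at your real system.log
# (commonly /var/log/cassandra/system.log).
LOG=sample_system.log
cat > "$LOG" <<'EOF'
ERROR [ReadStage:32] 2013-10-15 10:00:00,000 CassandraDaemon.java (line 192) Exception in thread Thread[ReadStage:32,5,main]
org.apache.cassandra.io.sstable.CorruptSSTableException: org.apache.cassandra.io.compress.CorruptBlockException: (/var/lib/cassandra/data/mykeyspace/mytable/mykeyspace-mytable-ic-42-Data.db): corruption detected
EOF
grep CorruptSSTableException "$LOG" | grep -o '/[^ ):]*-Data\.db' | sort -u
```

Once you know which SSTables are affected, you can limit the scrub to the relevant keyspace and table ("nodetool scrub <keyspace> <table>") rather than scrubbing the whole node.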
DataStax recently started offering free virtual training. You may want to
try that first:
http://www.datastax.com/what-we-offer/products-services/training/virtual-training
There are also many Cassandra meetups around the world:
http://cassandra.meetup.com/
DataStax also offers classroom training.
Dropping a table doesn't actually delete the data, and I end up with a
problem like this:
https://issues.apache.org/jira/browse/CASSANDRA-4857
What's a good workaround for this, assuming I don't want to change the name of
my table? Should I just truncate the table, then drop it and recreate it?
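For what it's worth, that sequence would look something like this in CQL (keyspace/table names are hypothetical; by default TRUNCATE takes a snapshot first unless auto_snapshot is disabled):

```cql
-- Hypothetical sketch: remove the data first, then drop the schema.
TRUNCATE mykeyspace.mytable;   -- deletes the SSTables (snapshotting first if auto_snapshot is on)
DROP TABLE mykeyspace.mytable;
-- Recreate later with the original schema, e.g.:
-- CREATE TABLE mykeyspace.mytable (id uuid PRIMARY KEY, value text);
```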
Hi Paulo,
Yes, that is expected. Now that you are using virtual nodes, you should
use "nodetool status" to see output similar to what "nodetool ring" showed
before you enabled virtual nodes.
-Ike Walker
On Oct 15, 2013, at 1
The restart worked.
Thanks, Rob!
After the restart I ran 'nodetool move' again, used 'nodetool netstats | grep
-v "0%"' to verify that data was actively streaming, and the move completed
successfully.
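One caveat worth noting about that filter: "0%" also matches 10%, 50%, 100% and so on as a substring, so those lines get hidden too; grep -v " 0%" (with a leading space) is stricter. A small demo of the filter on sample output (the sample lines are illustrative, not captured from a real cluster):

```shell
# Demo: filter idle streams out of netstats-style output.
# In practice you would pipe the real command:  nodetool netstats | grep -v "0%"
cat > netstats_sample.txt <<'EOF'
Mode: MOVING
Streaming to: /10.0.0.2
   /var/lib/cassandra/data/ks/cf/ks-cf-ic-12-Data.db sections=5 progress=123/332 - 37%
   /var/lib/cassandra/data/ks/cf/ks-cf-ic-13-Data.db sections=3 progress=0/300 - 0%
EOF
grep -v "0%" netstats_sample.txt   # keeps the 37% line, drops the 0% line
```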
-Ike
On Sep 10, 2013, at 11:04 AM, Ike Walker wrote:
> On Mon, Sep 9, 2013 at 7:08 PM, Ike Walker wrote:
> I've been using nodetool move to rebalance my cluster. Most of the moves take
> under an hour, or a few hours at most. The current move has taken 4+ days so
> I'm afraid it will never complete. What's the best way to cancel it and try
> again?
I've been using nodetool move to rebalance my cluster. Most of the moves take
under an hour, or a few hours at most. The current move has taken 4+ days so
I'm afraid it will never complete. What's the best way to cancel it and try
again?
I'm running a cluster of 12 nodes at AWS. Each node runs
of too many or too few
seeds.
Any advice is appreciated.
Thanks!
-Ike Walker