+1 and run nodetool compactionstats so you can see 2Is in progress.
On Mon, Nov 13, 2017 at 7:00 AM, kurt greaves wrote:
> bootstrap will wait for secondary indexes and MV's to build before
> completing. if either are still shown in compactions then it will wait for
> them
Hi Team,
I have a new requirement where I need to copy all the rows from one table
to another table in Cassandra, where the second table contains one extra
column.
I have written a python script which reads each row and inserts it. But the
problem is in the stage environment I'm observing the
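The snippet above describes copying every row from one table into a second table that has one extra column. The per-row transform can be sketched in plain Python, with the driver calls (connecting, paging, executing inserts) left out so the logic is self-contained; all names here are illustrative, not taken from the original script:

```python
# Sketch of the copy loop's transform step: source rows are represented as
# dicts, and each one gains the single extra column the destination table
# expects. In the real script these rows would come from a SELECT via the
# Cassandra driver and be written back with INSERTs.

def add_extra_column(rows, column_name, default_value):
    """Yield each source row with one additional column filled in."""
    for row in rows:
        new_row = dict(row)              # copy, don't mutate the source row
        new_row[column_name] = default_value
        yield new_row

source_rows = [
    {"id": 1, "name": "alice"},
    {"id": 2, "name": "bob"},
]
copied = list(add_extra_column(source_rows, "created_at", None))
```

Reading with a paging-aware SELECT and writing with prepared INSERTs (rather than one giant fetch) keeps memory flat regardless of table size.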
What's wrong with just detaching the EBS volume and then attaching it to
the new node? Assuming you have a separate mount for your C* data (which
you probably should).
Hi All ,
There was a node failure in one of our production clusters due to disk
failure. After h/w recovery that node is now ready to be part of the
cluster, but it doesn't have any data due to the disk crash.
I can think of the following options:
1. Replace the node with itself, using replace_address
2. Set
nodetool describecluster will show (a) if there are multiple schema
versions, and/or (b) unreachable nodes. Cheers!
On Fri, Nov 10, 2017 at 6:39 PM, Romain Hardouin <
romainh...@yahoo.fr.invalid> wrote:
> Does "nodetool describecluster" show an actual schema disagreement?
> You can try
Use the replace_address method with its own IP address. Make sure you
delete the contents of the following directories:
- data/
- commitlog/
- saved_caches/
Forget rejoining with repair -- it will just cause more problems. Cheers!
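As a sketch, the replacement procedure above might look like this on the rejoining node (default package paths and the standard replace_address system property are assumed; adjust to your install):

```
# cassandra-env.sh on the replacement node -- replacing itself, so the
# address is this node's own IP:
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=<this_node_ip>"

# Before starting Cassandra, clear the old on-disk state:
#   rm -rf /var/lib/cassandra/data/*
#   rm -rf /var/lib/cassandra/commitlog/*
#   rm -rf /var/lib/cassandra/saved_caches/*
```

Remove the JVM option again once the node has finished rejoining, so later restarts don't attempt another replacement.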
On Mon, Nov 13, 2017 at 2:54 PM, Anshu Vajpayee
Hi,
I'm trying to understand some of the details of the
batch_size_warn_threshold_in_kb/batch_size_fail_threshold_in_kb settings.
Specifically, why are the thresholds measured in kb rather than the number
of partitions affected?
We have run into the limit in a situation where there is a batch
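One reason the thresholds are measured in kb rather than partition count is that the coordinator's memory and commitlog pressure scale with the serialized size of the mutations, not with how many partitions they touch. A rough, illustrative estimator (the real server-side size accounting differs; the 5 KiB default and the helper names here are assumptions for the sketch):

```python
# Illustrative only: approximate a batch's payload size by summing the
# UTF-8 size of each statement's bound values, then compare against the
# batch_size_warn_threshold_in_kb default of 5 KiB.

WARN_THRESHOLD_KB = 5  # cassandra.yaml default

def batch_payload_bytes(statements):
    """Sum the encoded size of every bound value in the batch."""
    return sum(
        len(str(value).encode("utf-8"))
        for stmt in statements
        for value in stmt["values"]
    )

def exceeds_warn_threshold(statements, threshold_kb=WARN_THRESHOLD_KB):
    return batch_payload_bytes(statements) > threshold_kb * 1024

# A single large row can trip the warning even though only one partition
# is affected -- which is exactly why the threshold is byte-based.
big_batch = [{"values": ["pk1", "x" * 6000]}]
small_batch = [{"values": ["pk1", "v"]}]
```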
I'm trying to add a new node to a small existing cluster. During the
bootstrap one of the nodes went down. I'm not sure at what point in the
process the node went down; all files may have been sent before that
happened. Currently:
nodetool netstats says that all files are received 100%
nodetool
Cleanup, very simply, throws away data no longer owned by the instance
because of range movements.
Repair only repairs data owned by the instance (it ignores data that would
be cleared by cleanup).
I don't see any reason why you can't run cleanup before repair.
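The ordering discussed above amounts to a simple sequence (an operational sketch, not meant to be run verbatim; -pr is shown as one common repair choice):

```
nodetool cleanup      # drop data this node no longer owns after range movements
nodetool repair -pr   # then repair only the ranges this node is primary for
```

Since repair ignores data that cleanup would remove, running cleanup first does not change what repair fixes; it just reclaims disk sooner.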
On Sun, Nov 12, 2017 at 9:35 AM,
So, I have a cluster which grew too large data-wise, so that compactions no
longer worked (because of full disks). I have now added new nodes so that
data is spread more thinly. However, I know there are inconsistencies in the
cluster and I need to run a repair, but those also fail because of out of
Yeah, sounds right. What I'm worried about is the following:
I used to have only 2 nodes with RF 2, so both nodes had a copy of all data.
There were inconsistencies since I was unable to run repair, so some parts
of the data may only exist on one node. I have now added two nodes, thus
changing which
Hi,
We have a unique requirement to replace C* (3.0.x on RHEL) nodes with a new
AWS AMI image periodically. The current process (add node/decommission etc.)
is a very manual and time-consuming process. We currently use EBS and are
exploring EFS as an option to speed up the process.
Does anybody have
Any reason you think EFS would be better than EBS?
--
Jeff Jirsa
> On Nov 12, 2017, at 1:38 PM, Subroto Barua
> wrote:
>
> Hi,
>
> We have a unique requirement to replace C* (3.0.x on RHEL) nodes with a new
> AWS AMI image periodically. The current process
When we replace a node, we can dynamically reassign the IPs to the new node,
reducing downtime. But we are not sure about the performance in terms of
latency, throughput, etc. (we are heavy on reads).
On Sunday, November 12, 2017, 2:03:14 PM PST, Jeff Jirsa
wrote:
Any
bootstrap will wait for secondary indexes and MV's to build before
completing. If either is still shown in compactions then it will wait for
them to complete before finishing joining. If not, you can try nodetool
bootstrap resume if it's available on your version.
On 12 Nov. 2017 19:19, "Joel
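The advice above condenses to a short check sequence on the joining node (operational sketch; as noted, bootstrap resume is only present on some versions):

```
nodetool compactionstats    # any secondary index / view build tasks still running?
nodetool netstats           # has streaming actually finished?
nodetool bootstrap resume   # retry a stalled bootstrap, if your version supports it
```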
That is: bootstrap will maintain whatever consistency guarantees you had when
you started.
--
Jeff Jirsa
> On Nov 12, 2017, at 12:41 PM, kurt greaves wrote:
>
> By default, bootstrap will stream from the primary replica of the range it is
> taking ownership of. So
By default, bootstrap will stream from the primary replica of the range it
is taking ownership of. So Node 3 would have to stream from Node 2 if it
was taking ownership of Node 2's tokens.
On 13 Nov. 2017 05:00, "Joel Samuelsson" wrote:
> Yeah, sounds right. What I'm
Great, thanks for your replies.
2017-11-12 21:44 GMT+01:00 Jeff Jirsa :
> That is: bootstrap will maintain whatever consistency guarantees you had
> when you started.
>
> --
> Jeff Jirsa
>
>
> On Nov 12, 2017, at 12:41 PM, kurt greaves wrote:
>
> By