Hi Jonathan,
That makes sense. Thank you for the explanation.
Another quick question: since the cluster is still operational and the data
from the past 2 weeks (since the replication factor was updated) is present
in both data centres, should I run "nodetool rebuild" or "nodetool repair"?
I read that nod
Pretty cool!
Dinesh
> On Oct 30, 2018, at 6:31 PM, Jonathan Haddad wrote:
>
> Very cool Ben, thanks for sharing!
>
>> On Tue, Oct 30, 2018 at 6:14 PM Ben Slater
>> wrote:
>> For anyone who is interested, we’ve published a blog with some more
>> background on this and some more detail of our ongoing plans:
>> https://www.instaclustr.com/instaclustr-support-cassandra-lucene-index/
Very cool Ben, thanks for sharing!
On Tue, Oct 30, 2018 at 6:14 PM Ben Slater
wrote:
> For anyone who is interested, we’ve published a blog with some more
> background on this and some more detail of our ongoing plans:
> https://www.instaclustr.com/instaclustr-support-cassandra-lucene-index/
>
>
For anyone who is interested, we’ve published a blog with some more
background on this and some more detail of our ongoing plans:
https://www.instaclustr.com/instaclustr-support-cassandra-lucene-index/
Cheers
Ben
On Fri, 19 Oct 2018 at 09:42 kurt greaves wrote:
> Hi all,
>
> We've had confirmat
You need to run "nodetool rebuild -- " on each node in
the new DC to get the old data to replicate. It doesn't happen
automatically because Cassandra has no way of knowing whether you're done
adding nodes; if it migrated data automatically, it could cause a lot of
problems. Imagine streaming 100 no
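A minimal sketch of what that looks like (the datacenter name after "--"
is a placeholder for whatever your existing DC is actually called in the
topology):

    # Run on EVERY node in the NEW datacenter. "dc_singapore" is a
    # stand-in for the name of the EXISTING DC to stream data from.
    nodetool rebuild -- dc_singapore

    # Watch the streaming progress while the rebuild runs.
    nodetool netstats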
Hi Experts,
I previously had one Cassandra data centre in the AWS Singapore region with
5 nodes, with my keyspace's replication factor set to 3 using
NetworkTopologyStrategy. After this cluster had been running smoothly for 4
months (500 GB of data on each node's disk), I added a 2nd data centre in
the AWS Mumbai region w
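For context, extending a keyspace to a second DC is usually a two-step
dance: alter the replication settings, then rebuild. A sketch, assuming a
keyspace named "my_ks" and DC names "dc_singapore" / "dc_mumbai" (all three
are placeholders; use the names your snitch actually reports):

    # Step 1: tell Cassandra to replicate to both DCs.
    cqlsh -e "ALTER KEYSPACE my_ks WITH replication =
      {'class': 'NetworkTopologyStrategy', 'dc_singapore': 3, 'dc_mumbai': 3};"

    # Step 2: stream the pre-existing data (see the rebuild answer above).
    nodetool rebuild -- dc_singapore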
To add to your excellent list:
- no topology changes (no nodes joining/leaving/being decommissioned)
- no rebuild of an index/MV under way (quick checks sketched below)
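A quick way to verify both conditions from the command line (a sketch;
output formats vary by version):

    # Every node should show UN; a J/L/M state means a topology change.
    nodetool status

    # No bootstrap/decommission/rebuild streams should be in flight.
    nodetool netstats

    # Index and view builds show up here as running compactions.
    nodetool compactionstats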
On Tue, Oct 30, 2018 at 4:35 PM Carl Mueller
wrote:
> Does anyone have a pretty comprehensive list of these? Many that I don't
> currently know how to check but I'm researching...
Just to pile on:
I agree. On our upgrades, I always aim to get the binary part done on all nodes
before worrying about upgradesstables. Upgrade is one node at a time
(precautionary). Upgradesstables depends on cluster size, data size,
compaction throughput, etc. I usually start with running upgradesstables
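A bare-bones sketch of that sequence (hostnames, package manager and init
system are placeholders for your own environment):

    # Phase 1: rolling binary upgrade, strictly one node at a time.
    for host in node1 node2 node3; do
      ssh "$host" 'nodetool drain && sudo systemctl stop cassandra'
      ssh "$host" 'sudo yum -y update cassandra && sudo systemctl start cassandra'
      ssh "$host" 'nodetool status'   # confirm the node is back to UN
    done

    # Phase 2: only after ALL nodes are running the new binary.
    for host in node1 node2 node3; do
      ssh "$host" 'nodetool upgradesstables'
    done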
Pretty much any version, including the most current.
On Mon, Oct 29, 2018 at 4:29 PM, Amit Plaha wrote:
> Hi Mike,
>
> Thanks for the response. Can you let me know which version of the driver
> can build with C++98?
>
> Regards,
> Amit
>
> On Fri, Oct 26, 2018 at 8:53 AM Michael Penick <
> michae
Thank you very much. I couldn't find any definitive answer on that on the
list or Stack Overflow.
It's clear that the safest approach for a prod cluster is a rolling version
upgrade of the binary, then upgradesstables.
I will strongly consider cstar for the upgradesstables
On Tue, Oct 30, 2018 at 10:39
Yes, as the new version can read both the old and the new sstable formats.
Restrictions only apply while the cluster is running mixed versions.
On Tue, Oct 30, 2018 at 4:37 PM Carl Mueller
wrote:
> But the topology change restrictions are only in place while there are
> heterogeneous versions in the cluster?
But the topology change restrictions are only in place while there are
heterogeneous versions in the cluster? Having all nodes on the upgraded
version with "degraded" sstables does NOT preclude topology changes or node
replacement/addition?
On Tue, Oct 30, 2018 at 10:33 AM Jeff Jirsa wrote:
> Wait for 3.11.4 to be cut
It seems that "nodetool listsnapshots" is unreliable?
1. when issued, nodetool listsnapshots reports there are no snapshots.
2. when navigating through the filesystem, one can see clearly that there
are snapshots
3. when issued, nodetool clearsnapshot removes them!
Some sanitized evidence bel
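For anyone who wants to reproduce the comparison, a sketch assuming the
default data directory /var/lib/cassandra/data:

    # What Cassandra reports:
    nodetool listsnapshots

    # What is actually on disk (snapshots live under
    # <data_dir>/<keyspace>/<table>/snapshots/<tag>):
    find /var/lib/cassandra/data -type d -name snapshots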
Does anyone have a pretty comprehensive list of these? Many that I don't
currently know how to check but I'm researching...
I've seen:
- verify disk space available for snapshot + sstable rewrite
- gossip state agreement, all nodes are healthy
- schema state agreement
- ability to access all the n
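Some of those can be checked directly; a sketch (exact output varies
across 2.x/3.x):

    # Schema agreement: "Schema versions" should list exactly one version.
    nodetool describecluster

    # Gossip state for every endpoint the node knows about.
    nodetool gossipinfo

    # Headroom for snapshots + sstable rewrites (default data path assumed).
    df -h /var/lib/cassandra/data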
Wait for 3.11.4 to be cut
I also vote for doing all the binary bounces and upgradesstables after the
fact, largely because normal writes/compactions are going to naturally start
upgrading sstables anyway, and there are some hard restrictions on mixed mode
(e.g. schema changes won’t cross versions).
Hi Carl,
the safest way is indeed (as suggested by Jon) to upgrade the whole cluster
as quickly as possible, and stop all operations that could generate streaming
until all nodes are using the target version.
That includes repair, topology changes (bootstraps, decommissions) and
rebuilds.
You should
We are about to finally embark on some version upgrades for lots of
clusters, 2.1.x and 2.2.x, eventually targeting 3.11.x
I have seen recipes that do the full binary upgrade + upgrade sstables for
1 node before moving forward, while I've seen a 2016 vote by Jon Haddad (a
TLP guy) that backs doing
Thanks!
On Mon, Oct 29, 2018 at 10:03 AM Horia Mocioi
wrote:
> Hello,
>
> Instead of parsing the output from nodetool (running nodetool is quite
> intensive) maybe you could have a Java program that would monitor via JMX
> (org.apache.cassandra.net.FailureDetector).
>
> You have less burden comp
Thanks!
On Tue, Oct 30, 2018 at 1:53 AM Max C. wrote:
> Agree - avoid parsing nodetool, if you can. I’d add that if anyone out
> there is interested in JMX but doesn’t want to deal with Java, you should
> install Jolokia so you can interact with Cassandra’s JMX data via a
> language-independent
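As an illustration of that route, a sketch using Jolokia's default port
8778 (the MBean is org.apache.cassandra.net:type=FailureDetector; the
SimpleStates attribute is worth verifying against your Cassandra version):

    # Read the failure detector's view of the cluster over plain HTTP;
    # returns JSON mapping each endpoint to UP or DOWN.
    curl -s http://localhost:8778/jolokia/read/org.apache.cassandra.net:type=FailureDetector/SimpleStates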