ant. Repairs themselves can
generate substantial memory load, and you could have a node or two drop out on
you if they OOM. I’d definitely take Jeff’s advice about switching your reads
to LOCAL_QUORUM until you’re done to buffer yourself from that risk.
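To see why LOCAL_QUORUM buffers against a node dropping out mid-repair: quorum only counts a majority of the replicas in the local DC, so with RF=3 per DC one local replica can OOM without failing reads. A quick sketch of the arithmetic (purely illustrative):

```python
# Quorum arithmetic for Cassandra consistency levels (illustrative sketch).

def quorum(rf: int) -> int:
    """Replicas that must respond for a (LOCAL_)QUORUM read or write."""
    return rf // 2 + 1

# With RF=3 per DC, LOCAL_QUORUM needs 2 of the 3 local replicas,
# so one local node can drop out without failing the read.
rf_per_dc = 3
print(quorum(rf_per_dc))              # replicas required locally
print(rf_per_dc - quorum(rf_per_dc))  # local failures tolerated
```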
From: Leena Ghatpande
Sent: Friday, May 22, 2020 11:51 AM
To: cassandra cassandra
Subject: any risks with changing replication factor on live production cluster
without downtime and service interruption?
We are on Cassandra 3.7 and have a 12-node cluster, 2 DCs, with 6 nodes in each
DC. RF=3.
We have around 150M rows across tables.
We are planning to add more nodes to the cluster, and thinking of changing the
replication factor to 5 for each DC.
Our application uses the below consistency level
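The RF change itself is a single ALTER KEYSPACE statement per keyspace; a sketch of building it (keyspace and DC names below are placeholders, not from this thread):

```python
# Sketch: building the ALTER KEYSPACE statement for an RF change.
# Keyspace and DC names are hypothetical placeholders.

def build_alter_rf(keyspace: str, dc_rf: dict) -> str:
    """Return CQL that sets per-DC replication factors with NTS."""
    opts = ", ".join(f"'{dc}': {rf}" for dc, rf in dc_rf.items())
    return (f"ALTER KEYSPACE {keyspace} WITH replication = "
            f"{{'class': 'NetworkTopologyStrategy', {opts}}};")

print(build_alter_rf("my_ks", {"DC1": 5, "DC2": 5}))
```

Note that the statement only changes metadata: until a full repair completes on each node, the two new replicas per DC hold no data, which is why read consistency during the transition matters.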
OK, that could be a possibility, as this table has several static columns.
We have seen corrupt SSTable errors before related to static columns, when we
dropped and recreated the column in this table.
We have an upgrade to 3.11 planned for later this year, so hoping these issues
will be
$RunnableAdapter.call(Executors.java:511)
~[na:1.8.0_202]
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
~[na:1.8.0_202]
... 3 common frames omitted
From: Leena Ghatpande
Sent: Monday, May 18, 2020 12:54 PM
To: cassandra cassandra
Subject: TEST
Running Cassandra 3.7.
Our TEST cluster has 6 nodes, 3 in each data center,
replication factor 2 for keyspaces.
We added 1 new node in each data center for testing, making it an 8-node cluster.
We decided to remove the 2 new nodes from the cluster, but instead of decommissioning,
the admin just deleted the
trace to know what's actually
going on here. 3.7 is pretty old, I'd be inclined to upgrade to the latest 3.11
branch to hope that you either get a better stack or an outright fix, but that
stack doesn't ring any bells for me.
On Mon, Sep 9, 2019 at 10:20 AM Leena Ghatpande
We are on Cassandra 3.7 and have an 8-node cluster, 2 DCs, with 4 nodes in each
DC. RF=3.
The compaction error message below is being logged to the system.log exactly
every minute.
ERROR [CompactionExecutor:5751] 2019-06-09 03:24:50,585
CassandraDaemon.java:217 - Exception in thread
Constant Error in the log - Exception in thread
Thread[CompactionExecutor:3903,1,main] java.lang.NullPointerException: null
itself from bad queries.
From: Leena Ghatpande <lghatpa...@hotmail.com>
Sent: Tuesday, March 12, 2019 9:02 AM
To: Stefan Miklosovic <stefan.mikloso...@instaclustr.com>;
user@cassandra.apache.org
Subject: [EXTERNAL] Re: Migrate
ould just read the table and, as you read it, write to another one. That
is IMHO the fastest approach and the least error-prone. You can do that on live
production data and just make a "switch" afterwards. Not sure about
TTLs, but that should be transparent while copying.
We have a table with over 70M rows with a partition key that is unique. We
have a created datetime stamp on each record, and we need to select all
rows created in a date range. A secondary index is not an option as it's high
cardinality and could slow performance doing a full scan on 70M
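A common alternative to a secondary index here is to mirror rows into a lookup table partitioned by time bucket, so a date range becomes a handful of partition reads instead of a full scan. A sketch, with all table and column names made up for illustration:

```python
# Sketch of time-bucketing for date-range queries (table/column names
# are hypothetical). Rows would be mirrored into a lookup table
# partitioned by day, e.g.:
#
#   CREATE TABLE rows_by_day (
#       day text, created timestamp, id uuid,
#       PRIMARY KEY ((day), created, id));
#
from datetime import datetime, timedelta

def day_bucket(ts: datetime) -> str:
    """Partition key for one row: its creation day."""
    return ts.strftime("%Y-%m-%d")

def buckets_for_range(start: datetime, end: datetime):
    """Partitions to query for rows created in [start, end]."""
    days = []
    d = start.date()
    while d <= end.date():
        days.append(d.isoformat())
        d += timedelta(days=1)
    return days

print(buckets_for_range(datetime(2020, 5, 20), datetime(2020, 5, 22)))
# → ['2020-05-20', '2020-05-21', '2020-05-22']
```

The bucket size (day, week, month) is a trade-off: smaller buckets mean more partitions per query, larger ones risk wide partitions.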
We are on Cassandra 3.7.
We have an 8-node production cluster, with 4 nodes each across 2 DCs.
The RF is currently set to 3, and we have 2 large tables with up to 70 million
rows.
We just upgraded our production cluster from 4 CPU, 12 GB RAM to 8 CPU, 32 GB
memory. Accordingly, we increased our
In context with this earlier post I had
https://www.mail-archive.com/user@cassandra.apache.org/msg56122.html
We run repair on each node in the cluster with the -pr option on every table
within each keyspace individually. Repairs are run sequentially on each node
Repairs fail for all nodes for
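The schedule described above — one `-pr` repair per node, per table, run strictly sequentially — can be sketched as a command generator (host, keyspace, and table names are made up for illustration):

```python
# Sketch: one primary-range repair per node, per table, run sequentially
# (hosts, keyspace, and table names are hypothetical).

def repair_commands(hosts, keyspace, tables):
    """Yield nodetool invocations, one node at a time, never in parallel."""
    cmds = []
    for host in hosts:
        for table in tables:
            cmds.append(f"nodetool -h {host} repair -pr {keyspace} {table}")
    return cmds

cmds = repair_commands(["10.0.0.1", "10.0.0.2"], "my_ks", ["t1", "t2"])
print(len(cmds))  # → 4
```

With `-pr` each node repairs only the ranges it owns as primary, so the schedule must cover every node in every DC or some ranges are never repaired.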
e some issues in this
version.
Considering your scenario, it is highly recommended that you upgrade to
3.11.1.
Although you have mentioned that upgrading is not an option, I would like to
tell you that
On 19 April 2018 at 23:19, Leena Ghatpande
<lghatpa...@hotmail.com>
We have an 8-node prod cluster running Cassandra 3.7. Our 2 largest tables have
around 100M and 30M rows respectively, while all others are relatively smaller.
We have been running repairs on alternate days on 2 of our keyspaces.
We run repair on each node in the cluster with the -pr option on
At some point everyone using Cassandra faces the situation of having to replace
nodes. Either because the cluster needs to scale and some nodes are too small
or ...
>> Regards,
>> Kyrill
Best approach to replace 8 existing smaller nodes in a production cluster with
8 new nodes that are bigger in capacity, without downtime.
We have 4 nodes each in 2 DCs, and we want to replace these 8 nodes with 8 new
nodes that are bigger in capacity in terms of RAM, CPU, and disk space without a
range
repairs to repair their clusters. But such large deployments are on older
Cassandra versions, and these deployments generally don't use vnodes, so people
easily know which nodes hold which token range.
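Subrange repair works by handing `nodetool repair` explicit token bounds (`-st`/`-et`). The splitting itself is simple arithmetic over the Murmur3 token space; a sketch:

```python
# Sketch: splitting the Murmur3 token space into equal subranges for
# subrange repair (nodetool repair -st <start> -et <end>).

MIN_TOKEN = -(2 ** 63)       # Murmur3Partitioner minimum token
MAX_TOKEN = 2 ** 63 - 1      # Murmur3Partitioner maximum token

def subranges(n: int):
    """Return n contiguous (start, end) token ranges covering the ring."""
    span = (MAX_TOKEN - MIN_TOKEN) // n
    edges = [MIN_TOKEN + i * span for i in range(n)] + [MAX_TOKEN]
    return list(zip(edges[:-1], edges[1:]))

parts = subranges(4)
print(len(parts))                                           # → 4
print(parts[0][0] == MIN_TOKEN, parts[-1][1] == MAX_TOKEN)  # → True True
```

In practice each subrange is repaired with its own `nodetool repair -st <start> -et <end>` run; with vnodes there are many small owned ranges per node, which is what makes this bookkeeping hard without tooling.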
Thanks
Anuj
From: Leena Ghatpande <lghatpa...@ho
Please advise. Cannot find any clear documentation on the best strategy
for repairing nodes on a regular basis with multiple data centers involved.
We are running Cassandra 3.7 in multiple data centers with 4 nodes in each data
center. We are trying to run repairs every other night to keep
Has anyone seen this error on Cassandra 1.2.9? We have not done any upgrades or
changes to column families since we went live in Feb 2014.
We are getting the following error when we run nodetool cleanup or nodetool
repair on one of our production nodes.
We have 2 data centers with 2