I'm using 4.1.0-1.
I've been doing a lot of truncates lately before the drive failed
(research project). Current drives have about 100 GB of data each,
although the actual amount of data in Cassandra is much less (because of
truncates and snapshots). The cluster is not homogeneous; some
Prior to CASSANDRA-6696 you'd have to treat one missing disk as a failed
machine, wipe all the data and re-stream it, as the tombstone for a given value
may be on one disk and the data on another (effectively resurrecting deleted data).
So the answer has to be version-dependent, too - which version were you
Hi Joe,
Reading it back, I realized I misunderstood that part of your email, so
you must be using data_file_directories with 16 drives? That's a lot
of drives! I imagine this may happen from time to time given that
disks like to fail.
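For reference, a layout like that would look something like this in
cassandra.yaml (the mount points below are made up for illustration):

    data_file_directories:
        - /mnt/disk01/cassandra/data
        - /mnt/disk02/cassandra/data
        # ... one entry per physical drive, up to:
        - /mnt/disk16/cassandra/data

Cassandra spreads SSTables across all listed directories, which is why
losing one drive only loses a slice of the node's data.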
That's a bit of an interesting scenario that I would have to
Thank you, Andy.
Is there a way to just remove the drive from the cluster and replace it
later? Ordering replacement drives isn't a fast process...
What I've done so far is:
Stop node
Remove drive reference from /etc/cassandra/conf/cassandra.yaml
Restart node
Run repair
Will that work? Right
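In shell terms, those steps are roughly this (the systemd unit name and
the example mount point are assumptions, adjust for your install):

    # stop the node
    sudo systemctl stop cassandra

    # in /etc/cassandra/conf/cassandra.yaml, delete the failed drive's
    # entry from data_file_directories, e.g. remove:
    #     - /mnt/disk07/cassandra/data

    # restart, then repair so other replicas re-supply the missing data
    sudo systemctl start cassandra
    nodetool repair -full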
Hi Joe,
I'd recommend just doing a replacement, bringing up a new node with
-Dcassandra.replace_address_first_boot=ip.you.are.replacing as
described here:
https://cassandra.apache.org/doc/4.1/cassandra/operating/topo_changes.html#replacing-a-dead-node
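A minimal sketch of that replacement, assuming a package install where
extra JVM flags go in cassandra-env.sh, with 10.0.0.7 standing in for
the dead node's address:

    # on the new, empty node, before the very first start:
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=10.0.0.7"

    # then start it; it streams the dead node's data from the other
    # replicas and takes over its tokens
    sudo systemctl start cassandra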
Before you do that, you will want to make
Hi,
On Mon, Jan 16, 2023 at 3:07 PM Loïc CHANEL via user <
user@cassandra.apache.org> wrote:
> So my question here is: am I missing a Cassandra internal process that is
> triggered on a daily basis at 0:00 and 2:00?
>
I bet it's not a Cassandra issue. Do you have any other metrics about your
My general advice any time you see hints accumulating: consider that
smoke from a more pressing fire happening somewhere else. You correctly
identified the right path to consider, which is some sort of scheduled
activity. Cassandra doesn't have any scheduled internal jobs. Compactions
happen
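A few quick checks for that kind of scheduled activity (illustrative,
not exhaustive):

    # anything heavy inside Cassandra at that moment?
    nodetool compactionstats   # running compactions
    nodetool tpstats           # thread-pool backlog, incl. hint dispatch

    # anything scheduled on the box itself at 0:00 / 2:00?
    crontab -l
    ls /etc/cron.d /etc/cron.daily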
Hi all - what is the correct procedure when handling a failed disk?
Have a node in a 15-node cluster. This node has 16 drives, and Cassandra
data is split across them. One drive is failing. Can I just remove it
from the list, and will Cassandra then replicate? If not, what?
Thank you!
-Joe
Check if you see packet loss at this time.
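For example, something like this run around midnight (the hostname is a
placeholder); the summary at the end reports the loss percentage:

    ping -c 300 -i 1 node2.example.com | tail -n 2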
On Mon, Jan 16, 2023 at 4:08 PM Loïc CHANEL via user <
user@cassandra.apache.org> wrote:
> Hi team,
>
> I am currently running a 2-node Cassandra database. Although that's not
> the best setup, the cluster is doing pretty fine.
> Still, I noticed that
Hi team,
I am currently running a 2-node Cassandra database. Although that's not
the best setup, the cluster is doing pretty fine.
Still, I noticed that for (at least) 5 days now, one of my two nodes has
been writing hints during the night and then re-syncing the data with the
other node in the
Hi all,
is upgrading Cassandra 3.11.14 → 4.1 supported, or is it better to
follow the 3.11.14 → 4.0 → 4.1 path?
(I think it is okay, as I found no record of old SSTable formats being
deprecated, but I couldn't find any official documentation
regarding upgrade paths… forgive me if it
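Whichever path you choose, the per-node mechanics are the same; a rough
sketch (the service management commands are assumptions for your setup):

    nodetool drain                  # flush memtables, stop taking writes
    sudo systemctl stop cassandra
    # ... upgrade the Cassandra package ...
    sudo systemctl start cassandra

    # after the whole cluster runs the new version, rewrite any
    # old-format SSTables
    nodetool upgradesstables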