It depends on what consistency level you use for reads/writes, and whether you
do deletes.
The real danger is that there may have been a tombstone on the drive that
failed covering data on the disks that remain, where the delete happened
longer ago than gc_grace - if you simply yank the disk, that deleted data can
come back to life.
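For reference, gc_grace_seconds is a per-table setting (default 864000, i.e.
10 days). A quick way to check it - a minimal sketch, assuming cqlsh is on the
PATH; ks.tbl is a placeholder:

    # Show the table DDL and pull out its gc_grace_seconds
    cqlsh -e "DESCRIBE TABLE ks.tbl" | grep gc_grace_seconds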
Hi All,
I have a 7-node cluster (version 3.10) with 5 disks per node in JBOD.
A few hours ago I had a disk failure on a node. I am wondering if I can:
- stop Cassandra on that node
- remove the disk, physically and from cassandra.yaml
- start Cassandra on that node
- run repair
I mean, is this a safe procedure, and is repair enough afterwards?
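Concretely, the steps I have in mind, as a sketch (the service commands and
the example path are placeholders for my install):

    # 1. Stop Cassandra on the affected node
    sudo service cassandra stop
    # 2. Edit cassandra.yaml and delete the failed disk's entry from
    #    data_file_directories (e.g. /data/disk3 - placeholder path)
    # 3. Start Cassandra again
    sudo service cassandra start
    # 4. Repair so surviving replicas stream back the missing data
    nodetool repair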
I just want to add that we use vnodes=16, if that helps with my questions.
On Mon, Jul 31, 2017 at 9:41 AM, Ioannis Zafiropoulos
wrote:
> Thank you Jeff for your answer,
>
> I use RF=3 and our clients always connect with QUORUM. So I guess I will be
> alright after a repair (?)
Sigh, I've tried to reply to this three times and none are in the archives, so
I don't think they're making it through. Apologies if this is the fourth time
someone's seen it:
The problem is the JNA jar, which was upgraded recently and bumped the glibc
requirement.
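If you want to verify on your own hosts, here's a rough way to compare what
glibc the box provides against what the bundled JNA native library asks for (a
sketch; the jar location is an assumption about a packaged install):

    # glibc version on the host
    ldd --version | head -1

    # GLIBC versions required by JNA's bundled native lib
    # (jar path is a placeholder - adjust for your install)
    unzip -o /usr/share/cassandra/lib/jna-*.jar \
        'com/sun/jna/linux-x86-64/libjnidispatch.so' -d /tmp/jna
    objdump -T /tmp/jna/com/sun/jna/linux-x86-64/libjnidispatch.so \
        | grep -o 'GLIBC_[0-9.]*' | sort -Vu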
Tremendous! I already suspected it was a JNA issue but didn't know how to
solve it. I'll try this in my setup; I'm still experimenting with which
configuration to use anyway...
Thanks a lot!
On Mon, Jul 31, 2017 at 5:19 PM, Jeff Jirsa wrote:
> Sigh, I've tried to reply to this three times
Excellent! Thank you Jeff.
On Mon, Jul 31, 2017 at 10:26 AM, Jeff Jirsa wrote:
> 3.10 has 6696 in it, so my understanding is you'll probably be fine just
> running repair
>
>
> Yes, same risks if you swap drives - before 6696, you want to replace a
> whole node if any sstables are damaged or lost
Thank you Jeff for your answer,
I use RF=3 and our clients always connect with QUORUM. So I guess I will be
alright after a repair (?)
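For what it's worth, the quorum arithmetic behind that guess (my own
reasoning, not something Jeff said):

    QUORUM = floor(RF / 2) + 1 = floor(3 / 2) + 1 = 2

so with RF=3, losing one replica's disk still leaves 2 live copies of every
affected range, and QUORUM reads/writes keep succeeding while repair rebuilds
the third copy.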
Follow up questions,
- It seems that the risks you're describing would be the same as if I had
replaced the drive with a fresh new one and run repair, is that right?
3.10 has 6696 in it, so my understanding is you'll probably be fine just
running repair
Yes, same risks if you swap drives - before 6696, you want to replace a whole
node if any sstables are damaged or lost (if you do deletes, and if it hurts
you if deleted data comes back to life).
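For anyone reading this later in the archives: a pre-6696 whole-node
replacement is typically done with the replace_address flag - a sketch, with a
placeholder IP, and the env file location depends on your install:

    # On the replacement node, before its first start
    # (e.g. in cassandra-env.sh; 10.0.0.5 stands in for the dead node's IP)
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.5"

On 3.10 (which has 6696), the lighter path after dropping the bad disk from
cassandra.yaml is just:

    nodetool repair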
--
Thanks Ryan, I couldn't find that version but tried with the 3.0.14 version,
to no avail. I ended up configuring the VMs in my cloud with RHEL7, which
includes glibc 2.17...
Best regards,
Piet
On Fri, Jul 28, 2017 at 6:29 PM, ruijian.lee wrote:
> Hi Piet,
>
> I have also
On Cassandra 2.2.11, I have a table that uses LeveledCompactionStrategy and
that gets written to continuously. If I list the files in its data directory, I
see something like this
-rw-r--r-- 1 acassy agroup 161733811 Jul 31 18:46 lb-135346-big-Data.db
-rw-r--r-- 1 acassy agroup 159626222 Jul 31
How long is your TTL and how much data do you write per day (i.e., what is
the difference in disk usage over a day)? Did you always TTL?
I'd say it's likely there is live data in those older sstables, but you're
not generating enough data to push new data to the highest level before it
expires.
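A couple of commands that can help confirm that - a sketch; the keyspace/table
names and data path are placeholders, and sstablemetadata ships in Cassandra's
tools directory:

    # Per-level sstable counts for an LCS table
    nodetool cfstats ks.tbl | grep 'SSTables in each level'

    # Oldest/newest timestamps and droppable tombstones for a suspect file
    sstablemetadata /var/lib/cassandra/data/ks/tbl-*/lb-135346-big-Data.db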
On 2017-07-31 15:00 (-0700), kurt greaves wrote:
> How long is your TTL and how much data do you write per day (i.e., what is
> the difference in disk usage over a day)? Did you always TTL?
> I'd say it's likely there is live data in those older sstables, but you're
> not generating enough data to push new data to the highest level before it
> expires.
I don't want to go down the TTL path because this behaviour is also occurring
for tables without a TTL. I don't have hard numbers about the amount of
writes, but there's definitely been enough to trigger compaction in the ~year
since. We've never changed the topology of this cluster. Ranges have never
moved, either.
Yeah, it means they're effectively invalid files, and would not be loaded at
startup.
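If it helps anyone hitting the same thing: on 2.2 my understanding is that
leftover temporary sstables carry a "tmp" marker in the filename (that's an
assumption - verify before deleting anything; the data path below is a
placeholder):

    # Surface candidate leftover temporary sstables, for inspection only
    ls -l /var/lib/cassandra/data/ks/tbl-*/ | grep -i tmp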
On Mon, Jul 31, 2017 at 9:07 PM, Sotirios Delimanolis <
sotodel...@yahoo.com.invalid> wrote:
> I don't want to go down the TTL path because this behaviour is also
> occurring for tables without a TTL. I don't have hard numbers about the
> amount of writes.