companies take a week or two to replace a failed disk.)
>
> JBOD is easy to set up, but hard to manage.
>
> Thanks, James.
>
>
>
> --
> *From:* kurt greaves
> *To:* User
> *Sent:* Friday, August 17, 2018 5:42 AM
> *Subject:* Re: JBOD disk failure
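"Easy to set up" really is just a list of mount points: in Cassandra, JBOD means
one entry per physical disk under data_file_directories in cassandra.yaml. A
minimal sketch (the paths are only illustrative):

    data_file_directories:
        - /data1/cassandra/data
        - /data2/cassandra/data
        - /data3/cassandra/data
    # what the node should do when one of those disks fails:
    # stop | best_effort | ignore | die | stop_paranoid
    disk_failure_policy: stop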
kurt greaves
To: User
Sent: Friday, August 17, 2018 5:42 AM
Subject: Re: JBOD disk failure
As far as I'm aware, yes. I recall hearing someone mention tying system tables
to a particular disk but at the moment that doesn't exist.
On Fri., 17 Aug. 2018, 01:04 Eric Evans wrote:
> On Wed, Aug 15, 2018 at 3:23 AM kurt greaves wrote:
> > Yep. It might require a full node replace depending on what data is lost
> > from the system tables.
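There is no knob that pins the system keyspaces to a particular data directory,
so which disk they land on is effectively luck. A quick way to see where they
ended up on a node (data paths assumed from the sketch above):

    # lists the sstables of the system keyspaces per data directory
    find /data*/cassandra/data/system* -name '*-Data.db'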
On Wed, Aug 15, 2018 at 3:23 AM kurt greaves wrote:
> Yep. It might require a full node replace depending on what data is lost from
> the system tables. In some cases you might be able to recover from partially
> lost system info, but it's not a sure thing.
Ugh, does it really just boil down to which disk the system tables happen to
land on?
Christian
>
>
> *From:* kurt greaves
> *Reply-To:* "user@cassandra.apache.org"
> *Date:* Wednesday, 15 August 2018 at 04:53
> *To:* User
> *Subject:* Re: JBOD disk failure
>
>
>
> If that disk had important data in the system tables however you might have
> some trouble and need to replace the entire instance anyway.
kurt greaves
To: User
Sent: Wednesday, August 15, 2018 4:53 AM
Subject: Re: JBOD disk failure
If that disk had important data in the system tables however you might have
some trouble and need to replace the entire instance anyway.
On 15 August 2018 at 12:20, Jeff Jirsa wrote:
> Depends on version
>
> For versions without the fix from Cassandra-6696, the only safe option on
> single disk failure is to stop and replace the whole instance
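"Replace the entire instance" is the standard dead-node replacement: bring up a
replacement node with empty data directories and the replace_address flag, and
let it stream the dead node's ranges. A sketch (the dead node's IP is made up):

    # in cassandra-env.sh on the replacement node
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=10.0.0.12"
    # start Cassandra; when the node shows UN in `nodetool status`,
    # streaming is done and the failed instance has been replaced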
Depends on version
For versions without the fix from Cassandra-6696, the only safe option on
single disk failure is to stop and replace the whole instance - this is
important because in older versions of Cassandra, you could have data in one
sstable, a tombstone shadowing it in another disk, and losing the disk that
holds the tombstone would let the deleted data come back.
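A worked example of that hazard, with a made-up table and data directories
(CASSANDRA-6696, shipped in 3.2, partitions sstables by token range per disk
precisely so a row and its tombstone stay on the same disk):

    -- CQL timeline, illustrative only
    INSERT INTO ks.t (id, v) VALUES (1, 'x');  -- flushed into an sstable that lands on /data1
    DELETE FROM ks.t WHERE id = 1;             -- tombstone flushed into a later sstable on /data2
    -- /data2 fails before compaction merges the two sstables...
    SELECT * FROM ks.t WHERE id = 1;           -- ...and the "deleted" row is readable again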
you have to explain what you mean by "JBOD". All in one large vdisk?
Separate drives?
At the end of the day, if a device fails in a way that the data housed on
that device (or array) is no longer available, that HDFS storage is marked
down. HDFS now needs to create a third replica. Various timers control how
quickly that happens.
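Those timers are the heartbeat and recheck intervals; with stock settings the
NameNode only declares a DataNode dead, and starts re-replicating its blocks,
after roughly ten minutes (property names are the usual hdfs-site.xml ones):

    dead-node interval ≈ 2 * dfs.namenode.heartbeat.recheck-interval
                         + 10 * dfs.heartbeat.interval
                       ≈ 2 * 300 s + 10 * 3 s ≈ 10.5 minutes (defaults)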
Hi,
given a cluster with RF=3 and CL=LOCAL_ONE where the application is deleting
data, what happens if the nodes are set up with JBOD and one disk fails? Do I
get consistent results while the broken drive is replaced and a nodetool
repair is running on the node with the replaced drive?
Kind regards,
Christian
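For reference, the scenario in the question sketched in cqlsh terms (keyspace,
table and datacenter names are made up):

    CREATE KEYSPACE ks
      WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
    -- each read below is answered by a single local replica:
    CONSISTENCY LOCAL_ONE
    DELETE FROM ks.t WHERE id = 1;   -- the kind of delete the question is about

After the drive is swapped, the affected node is repaired with something like
"nodetool repair -pr ks"; until that repair completes, a LOCAL_ONE read served
by that node may not reflect deletes whose tombstones were lost with the disk.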