Stefan K posted on Tue, 11 Sep 2018 13:29:38 +0200 as excerpted:
> wow, holy shit, thanks for this extended answer!
>
>> The first thing to point out here again is that it's not
>> btrfs-specific.
> so that means that every RAID implementation (with parity) has such a bug?
> I've been looking around a bit, and it looks like ZFS doesn't have a write hole.
wow, holy shit, thanks for this extended answer!
> The first thing to point out here again is that it's not btrfs-specific.
so that means that every RAID implementation (with parity) has such a bug? I've been
looking around a bit, and it looks like ZFS doesn't have a write hole. And it _only_
happens when th
Stefan K posted on Fri, 07 Sep 2018 15:58:36 +0200 as excerpted:
> sorry for disturbing this discussion,
>
> are there any plans/dates to fix the raid5/6 issue? Is somebody working
> on this issue? Because this is for me one of the most important things for
> a fileserver; with a raid1 config I lose too much disk space.
sorry for disturbing this discussion,
are there any plans/dates to fix the raid5/6 issue? Is somebody working on this
issue? Because this is for me one of the most important things for a fileserver;
with a raid1 config I lose too much disk space.
best regards
Stefan
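(As a rough back-of-the-envelope illustration of the space argument, assuming five 8 TB drives like the array discussed elsewhere in this thread:
    raid1: usable ≈ total / 2          = 5 x 8 TB / 2   = 20 TB
    raid5: usable ≈ total x (n-1) / n  = 5 x 8 TB x 4/5 = 32 TB
so raid5 would give back roughly 12 TB of capacity on the same disks.)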
Ok, but I cannot guarantee that I won't need to cancel the scrub during the process.
As I said, this is domestic storage, and while a scrub is running the
performance hit is big enough to prevent smooth streaming of HD and 4K
movies.
On Thu, 16 Aug 2018 at 21:38, wrote:
>
> Could you show scrub status -d, then start a new scrub (all drives) and show
> scrub status -d again?
Could you show scrub status -d, then start a new scrub (all drives) and show
scrub status -d again? This may help us diagnose the problem.
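For reference, the sequence being asked for would look something like this (a sketch; the mountpoint /media/storage/das1 is the one used elsewhere in the thread, adjust to your setup):
    sudo btrfs scrub status -d /media/storage/das1   # per-device stats for the current/last scrub
    sudo btrfs scrub start /media/storage/das1       # start a fresh scrub across all devices
    sudo btrfs scrub status -d /media/storage/das1   # check per-device progress again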
On 15-Aug-2018 09:27:40 +0200, men...@gmail.com wrote:
> I needed to resume scrub two times after an unclean shutdown (I was
> cooking and using too much electricity)
I needed to resume scrub two times after an unclean shutdown (I was
cooking and using too much electricity) and two times after a manual
cancel, because I wanted to watch a 4K movie and the array
performance was not enough with scrub active.
Each time I resumed it, I also checked the status, and
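For anyone following along, the cancel/resume cycle described here maps onto these commands (a sketch, with the mountpoint assumed):
    sudo btrfs scrub cancel /media/storage/das1    # stop the running scrub before watching the movie
    sudo btrfs scrub resume /media/storage/das1    # pick up later where it left off
    sudo btrfs scrub status -d /media/storage/das1 # show per-device progress and error counters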
On Tue, Aug 14, 2018 at 09:32:51AM +0200, Menion wrote:
> Hi
> Well, I think it is worth giving more details on the array.
> The array is built with 5x8TB HDDs in an external USB3.0 to SATAIII enclosure.
> The enclosure is cheap JMicron-based Chinese stuff (from Orico).
> There is one USB3.0 link
Hi
Well, I think it is worth giving more details on the array.
The array is built with 5x8TB HDDs in an external USB3.0 to SATAIII enclosure.
The enclosure is cheap JMicron-based Chinese stuff (from Orico).
There is one USB3.0 link for all the 5 HDDs, with a SATAIII 3.0Gb
multiplexer behind it. So y
On Mon, Aug 13, 2018 at 11:56:05PM +0200, erentheti...@mail.de wrote:
> Running time of 55:06:35 indicates that the counter is right; it is
> not enough time to scrub the entire array on HDDs.
>
> 2TiB might be right if you only scrubbed one disc: "sudo btrfs scrub
> start /dev/sdx1" only scrubs
On Mon, Aug 13, 2018 at 09:20:22AM +0200, Menion wrote:
> Hi
> I have a BTRFS RAID5 array built on 5x8TB HDDs, filled with, well :).
> The, well, "several" ways to check the used space on a BTRFS RAID5 array
> give contradicting numbers, but I should have around 8TB of
> data.
> This array is r
Running time of 55:06:35 indicates that the counter is right; it is not enough
time to scrub the entire array on HDDs.
2TiB might be right if you only scrubbed one disc: "sudo btrfs scrub start
/dev/sdx1" only scrubs the selected partition,
whereas "sudo btrfs scrub start /media/storage/das1" scrubs every device of the
mounted filesystem.
Hi
I have a BTRFS RAID5 array built on 5x8TB HDDs, filled with, well :).
The, well, "several" ways to check the used space on a BTRFS RAID5 array
give contradicting numbers, but I should have around 8TB of data.
This array is running on kernel 4.17.3 and it definitely experienced
power loss whi
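(The "several" ways to check used space referred to above would typically be commands along these lines; the mountpoint is assumed, and on parity raid their numbers often don't line up, which is why the figure is only "around 8TB":
    df -h /media/storage/das1                         # generic view; raw vs. usable space on raid5 can be misleading
    sudo btrfs filesystem df /media/storage/das1      # per-profile data/metadata/system allocation
    sudo btrfs filesystem usage /media/storage/das1   # most detailed view, including unallocated space)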
On Sat, Aug 11, 2018 at 08:27:04AM +0200, erentheti...@mail.de wrote:
> I guess that covers most topics, two last questions:
>
> Will the write hole behave differently on Raid 6 compared to Raid 5 ?
Not really. It changes the probability distribution (you get an extra
chance to recover using a p
I guess that covers most topics, two last questions:
Will the write hole behave differently on Raid 6 compared to Raid 5 ?
Is there any benefit of running Raid 5 Metadata compared to Raid 1 ?
On Sat, Aug 11, 2018 at 04:18:35AM +0200, erentheti...@mail.de wrote:
> Write hole:
>
>
> > The data will be readable until one of the data blocks becomes
> > inaccessible (bad sector or failed disk). This is because it is only the
> > parity block that is corrupted (old data blocks are still not
Write hole:
> The data will be readable until one of the data blocks becomes
> inaccessible (bad sector or failed disk). This is because it is only the
> parity block that is corrupted (old data blocks are still not modified
> due to btrfs CoW), and the parity block is only required when recoveri
On Fri, Aug 10, 2018 at 06:55:58PM +0200, erentheti...@mail.de wrote:
> Did i get you right?
> Please correct me if i am wrong:
>
> Scrubbing seems to have been fixed, you only have to run it once.
Yes.
There is one minor bug remaining here: when scrub detects an error
on any disk in a raid5/6
Did I get you right?
Please correct me if I am wrong:
Scrubbing seems to have been fixed, you only have to run it once.
Hotplugging (temporary connection loss) is affected by the write hole bug, and
will create undetectable errors every 16 TB (crc32 limitation).
The write hole bug can affect bo
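(For what it's worth, the 16 TB figure looks like it comes from arithmetic along these lines, assuming 4 KiB blocks and a roughly 2^-32 chance that a corrupted block still passes its crc32 check:
    2^32 possible crc32 values, and 2^32 blocks x 4 KiB/block = 16 TiB
so if every block in 16 TiB of data were silently damaged, you would expect about one of them to slip past the checksum undetected; with only occasional corruption the undetected-error rate is correspondingly much lower.)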
On Fri, Aug 10, 2018 at 03:40:23AM +0200, erentheti...@mail.de wrote:
> I am searching for more information regarding possible bugs related to
> BTRFS Raid 5/6. All sites I could find are incomplete and the information
> contradicts itself:
>
> The Wiki Raid 5/6 page (https://btrfs.wiki.kernel.org/inde