Re: BTRFS RAID filesystem unmountable

2018-12-06 Thread Qu Wenruo
On 2018/12/7 7:15 AM, Michael Wade wrote: > Hi Qu, > > Me again! Having formatted the drives and rebuilt the RAID array I > seem to be having the same problem as before (no power cut this > time [I bought a UPS]). But strangely, your super block shows it has a log tree, which means either
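
Whether the superblock references a log tree can be checked from a superblock dump; a rough sketch with current btrfs-progs (the device path is a placeholder):

    # a non-zero log_root means an unreplayed log tree is present
    btrfs inspect-internal dump-super /dev/sdX | grep -E 'generation|log_root'
    # clearing the log (losing the last moments of writes) is the usual follow-up,
    # but only if advised for the specific situation:
    btrfs rescue zero-log /dev/sdX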

Re: List of known BTRFS Raid 5/6 Bugs?

2018-09-11 Thread Duncan
Stefan K posted on Tue, 11 Sep 2018 13:29:38 +0200 as excerpted: > wow, holy shit, thanks for this extended answer! > >> The first thing to point out here again is that it's not >> btrfs-specific. > so that means that every RAID implementation (with parity) has such a bug? > I'm looking a bit, it

Re: List of known BTRFS Raid 5/6 Bugs?

2018-09-11 Thread Stefan K
wow, holy shit, thanks for this extended answer! > The first thing to point out here again is that it's not btrfs-specific. so that means that every RAID implementation (with parity) has such a bug? I'm looking a bit, and it looks like ZFS doesn't have a write hole. And it _only_ happens when

Re: List of known BTRFS Raid 5/6 Bugs?

2018-09-08 Thread Duncan
Stefan K posted on Fri, 07 Sep 2018 15:58:36 +0200 as excerpted: > sorry to disturb this discussion, > > are there any plans/dates to fix the raid5/6 issue? Is somebody working > on this issue? Because this is for me one of the most important things for > a fileserver; with a raid1 config I lose

Re: List of known BTRFS Raid 5/6 Bugs?

2018-09-07 Thread Stefan K
Sorry to disturb this discussion, but are there any plans/dates to fix the raid5/6 issue? Is somebody working on this issue? Because this is for me one of the most important things for a fileserver; with a raid1 config I lose too much disk space. Best regards, Stefan

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-17 Thread Menion
Ok, but I cannot guarantee that I won't need to cancel scrub during the process. As said, this is domestic storage, and when scrub is running the performance hit is big enough to prevent smooth streaming of HD and 4k movies. On Thu, 16 Aug 2018 at 21:38, wrote: > > Could you show

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-16 Thread erenthetitan
Could you show scrub status -d, then start a new scrub (all drives) and show scrub status -d again? This may help us diagnose the problem. On 15-Aug-2018 09:27:40 +0200, men...@gmail.com wrote: > I needed to resume scrub two times after an unclean shutdown (I was > cooking and using too much

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-15 Thread Menion
I needed to resume scrub two times after an unclean shutdown (I was cooking and using too much electricity) and two times after a manual cancel, because I wanted to watch a 4k movie and the array performance was not enough with scrub active. Each time I resumed it, I also checked the status, and

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-14 Thread Zygo Blaxell
On Tue, Aug 14, 2018 at 09:32:51AM +0200, Menion wrote: > Hi > Well, I think it is worth giving more details on the array. > The array is built with 5x8TB HDDs in an external USB3.0 to SATAIII enclosure. > The enclosure is cheap JMicron-based Chinese stuff (from Orico). > There is one USB3.0 link

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-14 Thread Menion
Hi. Well, I think it is worth giving more details on the array. The array is built with 5x8TB HDDs in an external USB3.0 to SATAIII enclosure. The enclosure is cheap JMicron-based Chinese stuff (from Orico). There is one USB3.0 link for all 5 HDDs, with a SATAIII 3.0Gb multiplexer behind it. So

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-13 Thread Zygo Blaxell
blocks, so they won't read the parity block and won't detect wrong parity. I did a couple of order-of-magnitude estimations of how likely a power failure is to trash a btrfs RAID system and got a probability between 3% and 30% per power failure if there were writes active at the time, and a dis

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-13 Thread Zygo Blaxell
On Mon, Aug 13, 2018 at 09:20:22AM +0200, Menion wrote: > Hi > I have a BTRFS RAID5 array built on 5x8TB HDDs filled with, well :), > there are contradicting opinions from the, well, "several" ways to check > the used space on a BTRFS RAID5 array, but I should be around 8TB of > data. > This array is

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-13 Thread erenthetitan
A running time of 55:06:35 indicates that the counter is right; it is not enough time to scrub the entire array on HDDs. 2TiB might be right if you only scrubbed one disk: "sudo btrfs scrub start /dev/sdx1" only scrubs the selected partition, whereas "sudo btrfs scrub start
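
For reference, a minimal sketch of the distinction being made here (mount point and device names are placeholders):

    sudo btrfs scrub start /mnt/array      # scrubs every device of the mounted filesystem
    sudo btrfs scrub start /dev/sdx1       # scrubs only this one member device
    sudo btrfs scrub status -d /mnt/array  # per-device progress and error counters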

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-13 Thread Menion
Hi, I have a BTRFS RAID5 array built on 5x8TB HDDs filled with, well :), there are contradicting opinions from the, well, "several" ways to check the used space on a BTRFS RAID5 array, but I should be around 8TB of data. This array is running on kernel 4.17.3 and it definitely experienced power loss

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-11 Thread Zygo Blaxell
On Sat, Aug 11, 2018 at 08:27:04AM +0200, erentheti...@mail.de wrote: > I guess that covers most topics, two last questions: > > Will the write hole behave differently on Raid 6 compared to Raid 5 ? Not really. It changes the probability distribution (you get an extra chance to recover using a

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-11 Thread erenthetitan
I guess that covers most topics, two last questions: Will the write hole behave differently on Raid 6 compared to Raid 5? Is there any benefit of running Raid 5 metadata compared to Raid 1? -

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-10 Thread Zygo Blaxell
On Sat, Aug 11, 2018 at 04:18:35AM +0200, erentheti...@mail.de wrote: > Write hole: > > > > The data will be readable until one of the data blocks becomes > > inaccessible (bad sector or failed disk). This is because it is only the > > parity block that is corrupted (old data blocks are still

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-10 Thread erenthetitan
Write hole: > The data will be readable until one of the data blocks becomes > inaccessible (bad sector or failed disk). This is because it is only the > parity block that is corrupted (old data blocks are still not modified > due to btrfs CoW), and the parity block is only required when

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-10 Thread Zygo Blaxell
e, all metadata pages have transid stamps and checksums to detect errors in the disk layer, and btrfs verifies metadata and refuses to process data that it does not deem to be entirely correct. > On 10-Aug-2018 09:12:21 +0200, ce3g8...@umail.furryterror.org wrote: > > On Fri, Aug 10, 2018

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-10 Thread erenthetitan
...@umail.furryterror.org: > > On Fri, Aug 10, 2018 at 03:40:23AM +0200, erentheti...@mail.de wrote: > > > I am searching for more information regarding possible bugs related to > > > BTRFS Raid 5/6. All sites i could find are incomplete and information > > > contradicts itself: > >

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-10 Thread Zygo Blaxell
On Fri, Aug 10, 2018 at 03:40:23AM +0200, erentheti...@mail.de wrote: > I am searching for more information regarding possible bugs related to > BTRFS Raid 5/6. All sites i could find are incomplete and information > contradicts itself: > > The Wiki Raid 5/6 Page (https://btrfs.

List of known BTRFS Raid 5/6 Bugs?

2018-08-09 Thread erenthetitan
I am searching for more information regarding possible bugs related to BTRFS Raid 5/6. All sites I could find are incomplete and the information contradicts itself: The Wiki Raid 5/6 Page (https://btrfs.wiki.kernel.org/index.php/RAID56) warns of the write hole bug, stating that your data remains

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-07-02 Thread Austin S. Hemmelgarn
On 2018-06-30 02:33, Duncan wrote: Austin S. Hemmelgarn posted on Fri, 29 Jun 2018 14:31:04 -0400 as excerpted: On 2018-06-29 13:58, james harvey wrote: On Fri, Jun 29, 2018 at 1:09 PM, Austin S. Hemmelgarn wrote: On 2018-06-29 11:15, james harvey wrote: On Thu, Jun 28, 2018 at 6:27 PM,

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-30 Thread Duncan
Austin S. Hemmelgarn posted on Fri, 29 Jun 2018 14:31:04 -0400 as excerpted: > On 2018-06-29 13:58, james harvey wrote: >> On Fri, Jun 29, 2018 at 1:09 PM, Austin S. Hemmelgarn >> wrote: >>> On 2018-06-29 11:15, james harvey wrote: On Thu, Jun 28, 2018 at 6:27 PM, Chris Murphy

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-29 Thread Chris Murphy
On Fri, Jun 29, 2018 at 9:15 AM, james harvey wrote: > On Thu, Jun 28, 2018 at 6:27 PM, Chris Murphy wrote: >> And an open question I have about scrub is whether it only ever is >> checking csums, meaning nodatacow files are never scrubbed, or if the >> copies are at least compared to each

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-29 Thread Austin S. Hemmelgarn
On 2018-06-29 13:58, james harvey wrote: On Fri, Jun 29, 2018 at 1:09 PM, Austin S. Hemmelgarn wrote: On 2018-06-29 11:15, james harvey wrote: On Thu, Jun 28, 2018 at 6:27 PM, Chris Murphy wrote: And an open question I have about scrub is whether it only ever is checking csums, meaning

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-29 Thread james harvey
On Fri, Jun 29, 2018 at 1:09 PM, Austin S. Hemmelgarn wrote: > On 2018-06-29 11:15, james harvey wrote: >> >> On Thu, Jun 28, 2018 at 6:27 PM, Chris Murphy >> wrote: >>> >>> And an open question I have about scrub is whether it only ever is >>> checking csums, meaning nodatacow files are never

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-29 Thread Austin S. Hemmelgarn
On 2018-06-29 11:15, james harvey wrote: On Thu, Jun 28, 2018 at 6:27 PM, Chris Murphy wrote: And an open question I have about scrub is whether it only ever is checking csums, meaning nodatacow files are never scrubbed, or if the copies are at least compared to each other? Scrub never looks

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-29 Thread james harvey
On Thu, Jun 28, 2018 at 6:27 PM, Chris Murphy wrote: > And an open question I have about scrub is whether it only ever is > checking csums, meaning nodatacow files are never scrubbed, or if the > copies are at least compared to each other? Scrub never looks at nodatacow files. It does not

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-28 Thread Qu Wenruo
On 2018-06-29 01:10, Andrei Borzenkov wrote: > 28.06.2018 12:15, Qu Wenruo wrote: >> >> >> On 2018-06-28 16:16, Andrei Borzenkov wrote: >>> On Thu, Jun 28, 2018 at 8:39 AM, Qu Wenruo wrote: On 2018-06-28 11:14, r...@georgianit.com wrote: > > > On Wed, Jun 27, 2018,

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-28 Thread Chris Murphy
On Thu, Jun 28, 2018 at 11:37 AM, Goffredo Baroncelli wrote: > Regarding your point 3), it must be pointed out that in the case of NOCOW files, > even having the same transid is not enough. It is still possible that a > copy is updated before a power failure, preventing the super-block update. > I

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-28 Thread Chris Murphy
On Thu, Jun 28, 2018 at 9:37 AM, Remi Gauvin wrote: > On 2018-06-28 10:17 AM, Chris Murphy wrote: > >> 2. The new data goes in a single chunk; even if the user does a manual >> balance (resync) their data isn't replicated. They must know to do a >> -dconvert balance to replicate the new data.

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-28 Thread Remi Gauvin
of any resynchronization (no matter how the drives got out of sync). I think NoDataCow should just be ignored in the case of RAID, just like the data blocks would get copied if there was a snapshot. In the current implementation of RAID on btrfs, RAID and nodatacow are effectively mu

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-28 Thread Goffredo Baroncelli
On 06/28/2018 04:17 PM, Chris Murphy wrote: > Btrfs does two, maybe three, bad things: > 1. No automatic resync. This is a net worse behavior than mdadm and > lvm, putting data at risk. > 2. The new data goes in a single chunk; even if the user does a manual > balance (resync) their data isn't

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-28 Thread Andrei Borzenkov
28.06.2018 12:15, Qu Wenruo wrote: > > > On 2018-06-28 16:16, Andrei Borzenkov wrote: >> On Thu, Jun 28, 2018 at 8:39 AM, Qu Wenruo wrote: >>> >>> >>> On 2018-06-28 11:14, r...@georgianit.com wrote: On Wed, Jun 27, 2018, at 10:55 PM, Qu Wenruo wrote: > > Please get

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-28 Thread Remi Gauvin
On 2018-06-28 10:17 AM, Chris Murphy wrote: > 2. The new data goes in a single chunk; even if the user does a manual > balance (resync) their data isn't replicated. They must know to do a > -dconvert balance to replicate the new data. Again this is a net worse > behavior than mdadm out of the
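
The "-dconvert balance" being referred to would look roughly like this (mount point is a placeholder; the "soft" filter restricts the rewrite to chunks not already in the target profile):

    sudo btrfs balance start -dconvert=raid1,soft /mnt/array
    sudo btrfs balance status /mnt/array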

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-28 Thread Chris Murphy
The problems are known with Btrfs raid1, but I think they bear repeating because they are really not OK. In the exact same described scenario: a simple clear cut drop off of a member device, which then later clearly reappears (no transient failure). Both mdadm and LVM based raid1 would have

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-28 Thread Anand Jain
On 06/28/2018 09:42 AM, Remi Gauvin wrote: There seems to be a major design flaw with BTRFS that needs to be better documented, to avoid massive data loss. Tested with Raid 1 on Ubuntu Kernel 4.15 The use case being tested was a Virtualbox VDI file created with NODATACOW attribute, (as is

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-28 Thread Austin S. Hemmelgarn
On 2018-06-28 07:46, Qu Wenruo wrote: On 2018-06-28 19:12, Austin S. Hemmelgarn wrote: On 2018-06-28 05:15, Qu Wenruo wrote: On 2018-06-28 16:16, Andrei Borzenkov wrote: On Thu, Jun 28, 2018 at 8:39 AM, Qu Wenruo wrote: On 2018-06-28 11:14, r...@georgianit.com wrote: On Wed, Jun

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-28 Thread Qu Wenruo
On 2018-06-28 19:12, Austin S. Hemmelgarn wrote: > On 2018-06-28 05:15, Qu Wenruo wrote: >> >> >> On 2018-06-28 16:16, Andrei Borzenkov wrote: >>> On Thu, Jun 28, 2018 at 8:39 AM, Qu Wenruo >>> wrote: On 2018-06-28 11:14, r...@georgianit.com wrote: > > > On Wed,

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-28 Thread Austin S. Hemmelgarn
On 2018-06-28 05:15, Qu Wenruo wrote: On 2018-06-28 16:16, Andrei Borzenkov wrote: On Thu, Jun 28, 2018 at 8:39 AM, Qu Wenruo wrote: On 2018-06-28 11:14, r...@georgianit.com wrote: On Wed, Jun 27, 2018, at 10:55 PM, Qu Wenruo wrote: Please get yourself clear of what other raid1 is

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-28 Thread Qu Wenruo
On 2018-06-28 16:16, Andrei Borzenkov wrote: > On Thu, Jun 28, 2018 at 8:39 AM, Qu Wenruo wrote: >> >> >> On 2018-06-28 11:14, r...@georgianit.com wrote: >>> >>> >>> On Wed, Jun 27, 2018, at 10:55 PM, Qu Wenruo wrote: >>> Please get yourself clear of what other raid1 is doing. >>>

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-28 Thread Andrei Borzenkov
On Thu, Jun 28, 2018 at 11:16 AM, Andrei Borzenkov wrote: > On Thu, Jun 28, 2018 at 8:39 AM, Qu Wenruo wrote: >> >> >> On 2018-06-28 11:14, r...@georgianit.com wrote: >>> >>> >>> On Wed, Jun 27, 2018, at 10:55 PM, Qu Wenruo wrote: >>> Please get yourself clear of what other raid1 is

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-28 Thread Andrei Borzenkov
On Thu, Jun 28, 2018 at 8:39 AM, Qu Wenruo wrote: > > > On 2018-06-28 11:14, r...@georgianit.com wrote: >> >> >> On Wed, Jun 27, 2018, at 10:55 PM, Qu Wenruo wrote: >> >>> >>> Please get yourself clear of what other raid1 is doing. >> >> A drive failure, where the drive is still there when the

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-27 Thread Qu Wenruo
On 2018-06-28 11:14, r...@georgianit.com wrote: > > > On Wed, Jun 27, 2018, at 10:55 PM, Qu Wenruo wrote: > >> >> Please get yourself clear of what other raid1 is doing. > > A drive failure, where the drive is still there when the computer reboots, is > a situation that *any* raid 1, (or

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-27 Thread remi
On Wed, Jun 27, 2018, at 10:55 PM, Qu Wenruo wrote: > > Please get yourself clear of what other raid1 is doing. A drive failure, where the drive is still there when the computer reboots, is a situation that *any* raid 1, (or for that matter, raid 5, raid 6, anything but raid 0) will

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-27 Thread Qu Wenruo
>> NODATACOW implies NODATASUM. >> > > Yes, yes; none of which changes the simple fact that if you use this > option, which is often touted as outright necessary for some types of > files, BTRFS raid is worse than useless: not only will it not protect > your data at all f

Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-27 Thread Remi Gauvin
act that if you use this option, which is often touted as outright necessary for some types of files, BTRFS raid is worse than useless: not only will it not protect your data at all from bitrot (as expected), it will actively go out of its way to corrupt it! This is not expected behaviour from 'Raid

Re: Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-27 Thread Qu Wenruo
On 2018-06-28 09:42, Remi Gauvin wrote: > There seems to be a major design flaw with BTRFS that needs to be better > documented, to avoid massive data loss. > > Tested with Raid 1 on Ubuntu Kernel 4.15 > > The use case being tested was a Virtualbox VDI file created with > NODATACOW attribute,

Major design flaw with BTRFS Raid, temporary device drop will corrupt nodatacow files

2018-06-27 Thread Remi Gauvin
There seems to be a major design flaw with BTRFS that needs to be better documented, to avoid massive data loss. Tested with Raid 1 on Ubuntu Kernel 4.15 The use case being tested was a Virtualbox VDI file created with NODATACOW attribute, (as is often suggested, due to the painful performance
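
For context, the NODATACOW attribute is normally set on a directory before the file is created, since it only takes effect on new or empty files; the paths here are purely illustrative:

    mkdir -p /srv/vm-images
    chattr +C /srv/vm-images        # new files created inside inherit nodatacow (and lose checksums)
    lsattr /srv/vm-images/disk.vdi  # after creating the VDI there, the 'C' attribute should be listed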

Re: BTRFS RAID filesystem unmountable

2018-05-19 Thread Michael Wade
I have let the find root command run for 14+ days; it's produced a pretty huge log file (1.6 GB) but still hasn't completed. I think I will start the process of reformatting my drives and starting over. Thanks for your help anyway. Kind regards, Michael. On 5 May 2018 at 01:43, Qu Wenruo
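
The "find root command" here is the btrfs-find-root tool from btrfs-progs; it was presumably invoked along these lines (device path as in the thread, recovery directory a placeholder):

    btrfs-find-root /dev/md127 > find-root.log 2>&1
    # a candidate root bytenr from the log can then be tried read-only with:
    btrfs restore -t <bytenr> /dev/md127 /mnt/recovery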

Re: BTRFS RAID filesystem unmountable

2018-05-04 Thread Qu Wenruo
On 2018-05-05 00:18, Michael Wade wrote: > Hi Qu, > > The tool is still running and the log file is now ~300MB. I guess it > shouldn't normally take this long... Is there anything else worth > trying? I'm afraid not much, although there is a possibility to modify btrfs-find-root to do much

Re: BTRFS RAID filesystem unmountable

2018-05-04 Thread Michael Wade
Hi Qu, The tool is still running and the log file is now ~300mb. I guess it shouldn't normally take this long.. Is there anything else worth trying? Kind regards Michael On 2 May 2018 at 06:29, Michael Wade wrote: > Thanks Qu, > > I actually aborted the run with the old

Re: BTRFS RAID filesystem unmountable

2018-05-01 Thread Michael Wade
Thanks Qu, I actually aborted the run with the old btrfs tools once I saw its output. The new btrfs tools is still running and has produced a log file of ~85mb filled with that content so far. Kind regards Michael On 2 May 2018 at 02:31, Qu Wenruo wrote: > > > On

Re: BTRFS RAID filesystem unmountable

2018-05-01 Thread Qu Wenruo
On 2018-05-01 23:50, Michael Wade wrote: > Hi Qu, > > Oh dear, that is not good news! > > I have been running the find root command since yesterday but it > only seems to be outputting the following message: > > ERROR: tree block bytenr 0 is not aligned to sectorsize 4096 It's mostly

Re: BTRFS RAID filesystem unmountable

2018-05-01 Thread Michael Wade
Hi Qu, Oh dear, that is not good news! I have been running the find root command since yesterday but it only seems to be outputting the following message: ERROR: tree block bytenr 0 is not aligned to sectorsize 4096 ERROR: tree block bytenr 0 is not aligned to sectorsize 4096 ERROR: tree

Re: BTRFS RAID filesystem unmountable

2018-04-29 Thread Qu Wenruo
On 2018-04-29 22:08, Michael Wade wrote: > Hi Qu, > > Got this error message: > > ./btrfs inspect dump-tree -b 20800943685632 /dev/md127 > btrfs-progs v4.16.1 > bytenr mismatch, want=20800943685632, have=3118598835113619663 > ERROR: cannot read chunk root > ERROR: unable to open /dev/md127 >

Re: BTRFS RAID filesystem unmountable

2018-04-29 Thread Qu Wenruo
On 2018-04-29 16:59, Michael Wade wrote: > Ok, will it be possible for me to install the new version of the tools > on my current kernel without overriding the existing install? Hesitant > to update kernel/btrfs as it might break the ReadyNAS interface / > future firmware upgrades. > > Perhaps

Re: BTRFS RAID filesystem unmountable

2018-04-29 Thread Michael Wade
Ok, will it be possible for me to install the new version of the tools on my current kernel without overriding the existing install? I'm hesitant to update kernel/btrfs as it might break the ReadyNAS interface / future firmware upgrades. Perhaps I could grab this:
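
One way to do that is to build btrfs-progs from source and run the binaries from the build directory without installing them; a rough sketch, assuming the build dependencies are already present:

    git clone https://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git
    cd btrfs-progs
    ./autogen.sh && ./configure && make
    ./btrfs version    # run in place; the system-installed tools stay untouched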

Re: BTRFS RAID filesystem unmountable

2018-04-29 Thread Qu Wenruo
On 2018-04-29 16:11, Michael Wade wrote: > Thanks Qu, > > Please find attached the log file for the chunk recover command. Strangely, btrfs chunk recovery found no extra chunk beyond the current system chunk range. Which means the chunk tree is corrupted. Please dump the chunk tree with the latest
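
The chunk-tree dump being requested would be along these lines (run with the freshly built tools, device as in the thread):

    ./btrfs inspect-internal dump-tree -t chunk /dev/md127
    ./btrfs inspect-internal dump-super -f /dev/md127   # also prints the superblock's system chunk array for comparison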

Re: BTRFS RAID filesystem unmountable

2018-04-28 Thread Qu Wenruo
On 2018-04-28 17:37, Michael Wade wrote: > Hi Qu, > > Thanks for your reply. I will investigate upgrading the kernel, > however I worry that future ReadyNAS firmware upgrades would fail on a > newer kernel version (I don't have much linux experience so maybe my > concerns are unfounded!?). >

Re: BTRFS RAID filesystem unmountable

2018-04-28 Thread Michael Wade
Hi Qu, Thanks for your reply. I will investigate upgrading the kernel, however I worry that future ReadyNAS firmware upgrades would fail on a newer kernel version (I don't have much linux experience so maybe my concerns are unfounded!?). I have attached the output of the dump super command. I

Re: BTRFS RAID filesystem unmountable

2018-04-28 Thread Qu Wenruo
On 2018-04-28 16:30, Michael Wade wrote: > Hi all, > > I was hoping that someone would be able to help me resolve the issues > I am having with my ReadyNAS BTRFS volume. Basically my trouble > started after a power cut; subsequently the volume would not mount. > Here are the details of my

BTRFS RAID filesystem unmountable

2018-04-28 Thread Michael Wade
Hi all, I was hoping that someone would be able to help me resolve the issues I am having with my ReadyNAS BTRFS volume. Basically my trouble started after a power cut; subsequently the volume would not mount. Here are the details of my setup as it is at the moment: uname -a Linux QAI

Re: How to replace a failed drive in btrfs RAID 1 filesystem

2018-03-10 Thread Duncan
Andrei Borzenkov posted on Sat, 10 Mar 2018 13:27:03 +0300 as excerpted: > And "missing" is not the answer because I obviously may have more than > one missing device. "missing" is indeed the answer when using btrfs device remove. See the btrfs-device manpage, which explains that if there's
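
A sketch of that device-remove path, with placeholder paths (a surviving member is mounted degraded, a replacement is added, then the literal keyword "missing" names the absent device):

    mount -o degraded /dev/sda /mnt/array
    btrfs device add /dev/sdnew /mnt/array
    btrfs device remove missing /mnt/array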

Re: How to replace a failed drive in btrfs RAID 1 filesystem

2018-03-10 Thread Andrei Borzenkov
09.03.2018 19:43, Austin S. Hemmelgarn wrote: > > If the answer to either one or two is no but the answer to three is yes, > pull out the failed disk, put in a new one, mount the volume degraded, > and use `btrfs replace` as well (you will need to specify the device ID > for the now missing
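
The degraded-replace variant being described would look roughly like this (device ID and paths are placeholders):

    mount -o degraded /dev/sda /mnt/array
    btrfs filesystem show /mnt/array                  # note the devid reported as missing
    btrfs replace start -B 2 /dev/sdnew /mnt/array    # 2 = devid of the missing disk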

Re: How to replace a failed drive in btrfs RAID 1 filesystem

2018-03-10 Thread waxhead
Austin S. Hemmelgarn wrote: On 2018-03-09 11:02, Paul Richards wrote: Hello there, I have a 3 disk btrfs RAID 1 filesystem, with a single failed drive. Before I attempt any recovery I’d like to ask what is the recommended approach?  (The wiki docs suggest consulting here before attempting

Re: How to replace a failed drive in btrfs RAID 1 filesystem

2018-03-09 Thread Austin S. Hemmelgarn
com> wrote: On 2018-03-09 11:02, Paul Richards wrote: > Hello there, > > I have a 3 disk btrfs RAID 1 filesystem, with a single failed drive. > Before I attempt any recovery I’d like to ask what is the recommended > approa

Re: How to replace a failed drive in btrfs RAID 1 filesystem

2018-03-09 Thread Austin S. Hemmelgarn
On 2018-03-09 11:02, Paul Richards wrote: Hello there, I have a 3 disk btrfs RAID 1 filesystem, with a single failed drive. Before I attempt any recovery I’d like to ask what is the recommended approach? (The wiki docs suggest consulting here before attempting recovery[1].) The system

How to replace a failed drive in btrfs RAID 1 filesystem

2018-03-09 Thread Paul Richards
Hello there, I have a 3 disk btrfs RAID 1 filesystem, with a single failed drive. Before I attempt any recovery I’d like to ask what is the recommended approach? (The wiki docs suggest consulting here before attempting recovery[1].) The system is powered down currently and a replacement drive

Re: Fatal failure, btrfs raid with duplicated metadata

2017-10-11 Thread Ian Kumlien
On Wed, Oct 11, 2017 at 2:42 PM, Jeff Mahoney wrote: > On 10/11/17 2:20 PM, Ian Kumlien wrote: >> >> >> On Wed, Oct 11, 2017 at 2:10 PM Jeff Mahoney > > wrote: >> >> On 10/11/17 12:41 PM, Ian Kumlien wrote: >> >> [--8<--] >> >> >

Re: Fatal failure, btrfs raid with duplicated metadata

2017-10-11 Thread Jeff Mahoney
On 10/11/17 2:20 PM, Ian Kumlien wrote: > > > On Wed, Oct 11, 2017 at 2:10 PM Jeff Mahoney > wrote: > > On 10/11/17 12:41 PM, Ian Kumlien wrote: > > [--8<--]  > > > Eventually the filesystem becomes read-only and everything is odd... > >

Re: Fatal failure, btrfs raid with duplicated metadata

2017-10-11 Thread Ian Kumlien
Resent since google inbox is still not doing clear-text emails... On Wed, Oct 11, 2017 at 2:09 PM, Jeff Mahoney wrote: > On 10/11/17 12:41 PM, Ian Kumlien wrote: [--8<---] >> Eventually the filesystem becomes read-only and everything is odd... > > Are you still able to mount

Re: Fatal failure, btrfs raid with duplicated metadata

2017-10-11 Thread Jeff Mahoney
On 10/11/17 12:41 PM, Ian Kumlien wrote: > Hi, > > I was running a btrfs raid with 6 disks, metadata: dup and data: raid 6 > > Two of the disks started behaving oddly: > [436823.570296] sd 3:1:0:4: [sdf] Unaligned partial completion > (resid=244, sector_sz=512) > [436823.

Fatal failure, btrfs raid with duplicated metadata

2017-10-11 Thread Ian Kumlien
Hi, I was running a btrfs raid with 6 disks, metadata: dup and data: raid 6 Two of the disks started behaving oddly: [436823.570296] sd 3:1:0:4: [sdf] Unaligned partial completion (resid=244, sector_sz=512) [436823.578604] sd 3:1:0:4: [sdf] Unaligned partial completion (resid=52, sector_sz=512

Re: BTRFS RAID 1 not mountable: open_ctree failed, super_num_devices 3 mismatch with num_devices 2 found here

2017-08-24 Thread Dmitrii Tcvetkov
> I rebooted with HWE K4.11 > > and took a pic of the error message (see attachment). > > It seems btrfs still sees the removed NVMe. > There is a mismatch from super_num_devices (3) to num_devices (2), > which indicates something strange is going on here, imho. > > Then I returned and booted
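
The mismatch can be seen by comparing the superblock against the devices actually found; a quick check with a placeholder device path:

    btrfs inspect-internal dump-super /dev/sdX | grep num_devices   # count recorded in the superblock
    btrfs filesystem show                                           # devices actually present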

Re: degraded BTRFS RAID 1 not mountable: open_ctree failed, unable to find block group for 0

2017-08-23 Thread g6094199
> -Original Message- > From: Dmitrii Tcvetkov > Sent: Tue 22.08.2017 12:28 > To: g6094...@freenet.de > Cc: linux-btrfs@vger.kernel.org > Subject: Re: degraded BTRFS RAID 1 not mountable: open_ctree failed, unable > to find block group for 0 > > O

Re: degraded BTRFS RAID 1 not mountable: open_ctree failed, unable to find block group for 0

2017-08-22 Thread Dmitrii Tcvetkov
On Tue, 22 Aug 2017 11:31:23 +0200 g6094...@freenet.de wrote: > So the 1st step should be investigating why the disk did not get removed > correctly. Btrfs dev del should remove the device correctly, right? Is > there a bug? It should, and probably did. To check that we need to see the output of btrfs

Re: degraded BTRFS RAID 1 not mountable: open_ctree failed, unable to find block group for 0

2017-08-22 Thread g6094199
add) and removed the missing/dead device (btrfs dev del). Everything worked well. BUT as I rebooted I ran into the "BTRFS RAID 1 not mountable: open_ctree failed, unable to find block group for 0" because of a MISSING disk?! I checked the btrfs list and found that there was a patch th

Re: btrfs raid assurance

2017-07-26 Thread Hugo Mills
On Wed, Jul 26, 2017 at 08:36:54AM -0400, Austin S. Hemmelgarn wrote: > On 2017-07-26 08:27, Hugo Mills wrote: > >On Wed, Jul 26, 2017 at 08:12:19AM -0400, Austin S. Hemmelgarn wrote: > >>On 2017-07-25 17:45, Hugo Mills wrote: > >>>On Tue, Jul 25, 2017 at 11:29:13PM +0200, waxhead wrote: > >

Re: btrfs raid assurance

2017-07-26 Thread Austin S. Hemmelgarn
On 2017-07-26 08:27, Hugo Mills wrote: On Wed, Jul 26, 2017 at 08:12:19AM -0400, Austin S. Hemmelgarn wrote: On 2017-07-25 17:45, Hugo Mills wrote: On Tue, Jul 25, 2017 at 11:29:13PM +0200, waxhead wrote: Hugo Mills wrote: You can see about the disk usage in different scenarios with

Re: btrfs raid assurance

2017-07-26 Thread Hugo Mills
On Wed, Jul 26, 2017 at 12:27:20PM +, Hugo Mills wrote: > On Wed, Jul 26, 2017 at 08:12:19AM -0400, Austin S. Hemmelgarn wrote: > > On 2017-07-25 17:45, Hugo Mills wrote: > > >On Tue, Jul 25, 2017 at 11:29:13PM +0200, waxhead wrote: > > >> > > >> > > >>Hugo Mills wrote: > > >>> > > >

Re: btrfs raid assurance

2017-07-26 Thread Hugo Mills
On Wed, Jul 26, 2017 at 08:12:19AM -0400, Austin S. Hemmelgarn wrote: > On 2017-07-25 17:45, Hugo Mills wrote: > >On Tue, Jul 25, 2017 at 11:29:13PM +0200, waxhead wrote: > >> > >> > >>Hugo Mills wrote: > >>> > >You can see about the disk usage in different scenarios with the > >online

Re: btrfs raid assurance

2017-07-26 Thread Austin S. Hemmelgarn
On 2017-07-25 17:45, Hugo Mills wrote: On Tue, Jul 25, 2017 at 11:29:13PM +0200, waxhead wrote: Hugo Mills wrote: You can see about the disk usage in different scenarios with the online tool at: http://carfax.org.uk/btrfs-usage/ Hugo. As a side note, have you ever considered

Re: btrfs raid assurance

2017-07-25 Thread Hugo Mills
On Tue, Jul 25, 2017 at 11:29:13PM +0200, waxhead wrote: > > > Hugo Mills wrote: > > > >>>You can see about the disk usage in different scenarios with the > >>>online tool at: > >>> > >>>http://carfax.org.uk/btrfs-usage/ > >>> > >>>Hugo. > >>> > As a side note, have you ever considered

Re: btrfs raid assurance

2017-07-25 Thread waxhead
Hugo Mills wrote: You can see about the disk usage in different scenarios with the online tool at: http://carfax.org.uk/btrfs-usage/ Hugo. As a side note, have you ever considered making this online tool (that should never go away just for the record) part of btrfs-progs e.g. a

Re: btrfs raid assurance

2017-07-25 Thread Hugo Mills
On Tue, Jul 25, 2017 at 10:55:18AM -0300, Hérikz Nawarro wrote: > And btw, my current disk conf is a 1x 500GB, 2x3TB and a 5TB. OK, so by my mental arithmetic(*), you'd get: - 9.5 TB usable in RAID-0 - 11.5 TB usable in single mode - 5.75 TB usable in RAID-1 Hugo. (*) Which may be
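
Those figures follow directly from the disk sizes given in the thread (0.5 + 3 + 3 + 5 = 11.5 TB raw): single mode can use all of it, roughly 11.5 TB; RAID-1 stores every block twice and is capped by pairing against the largest disk, min(11.5 / 2, 11.5 - 5) = 5.75 TB; RAID-0 needs at least two disks with free space, so all four contribute until the 0.5 TB disk fills (4 x 0.5 = 2 TB), then three until the 3 TB disks fill (3 x 2.5 = 7.5 TB), leaving the last 2 TB of the 5 TB disk unusable, for 9.5 TB in total.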

Re: btrfs raid assurance

2017-07-25 Thread Hérikz Nawarro
And btw, my current disk conf is a 1x 500GB, 2x3TB and a 5TB. 2017-07-25 10:51 GMT-03:00 Hugo Mills : > On Tue, Jul 25, 2017 at 01:46:56PM +, Hugo Mills wrote: >> On Tue, Jul 25, 2017 at 09:55:37AM -0300, Hérikz Nawarro wrote: >> > Hello everyone, >> > >> > I'm migrating

Re: btrfs raid assurance

2017-07-25 Thread Hugo Mills
On Tue, Jul 25, 2017 at 01:46:56PM +, Hugo Mills wrote: > On Tue, Jul 25, 2017 at 09:55:37AM -0300, Hérikz Nawarro wrote: > > Hello everyone, > > > > I'm migrating to btrfs and i would like to know, in a btrfs filesystem > > with 4 disks (multiple sizes) with -d raid0 & -m raid1, how many > >

Re: btrfs raid assurance

2017-07-25 Thread Hérikz Nawarro
Thanks everyone, I'll stick with raid 1.

Re: btrfs raid assurance

2017-07-25 Thread Hugo Mills
On Tue, Jul 25, 2017 at 09:55:37AM -0300, Hérikz Nawarro wrote: > Hello everyone, > > I'm migrating to btrfs and I would like to know, in a btrfs filesystem > with 4 disks (multiple sizes) with -d raid0 & -m raid1, how many > drives can I lose without losing the entire array? You can lose one

Re: btrfs raid assurance

2017-07-25 Thread Austin S. Hemmelgarn
On 2017-07-25 08:55, Hérikz Nawarro wrote: Hello everyone, I'm migrating to btrfs and I would like to know, in a btrfs filesystem with 4 disks (multiple sizes) with -d raid0 & -m raid1, how many drives can I lose without losing the entire array? Exactly one, but you will lose data if you lose

btrfs raid assurance

2017-07-25 Thread Hérikz Nawarro
Hello everyone, I'm migrating to btrfs and I would like to know, in a btrfs filesystem with 4 disks (multiple sizes) with -d raid0 & -m raid1, how many drives can I lose without losing the entire array? Cheers.

Re: Creating btrfs RAID on LUKS devs makes devices disappear

2017-05-13 Thread Andrei Borzenkov
13.05.2017 18:28, Ochi wrote: > Hello, > > okay, I think I now have a repro that is stupidly simple; I'm not even > sure if I'm overlooking something here. No multi-device btrfs involved, but > notably it does happen with btrfs, but not with e.g. ext4. > I could not reproduce it with a single device

Re: Creating btrfs RAID on LUKS devs makes devices disappear

2017-05-13 Thread Ochi
Hello, okay, I think I now have a repro that is stupidly simple; I'm not even sure if I'm overlooking something here. No multi-device btrfs involved, but notably it does happen with btrfs, but not with e.g. ext4. [Sidenote: At first I thought it had to do with systemd-cryptsetup opening multiple

Re: Creating btrfs RAID on LUKS devs makes devices disappear

2017-05-13 Thread Andrei Borzenkov
12.05.2017 20:07, Chris Murphy wrote: > On Thu, May 11, 2017 at 5:24 PM, Ochi wrote: >> Hello, >> >> here is the journal.log (I hope). It's quite interesting. I rebooted the >> machine, performed a mkfs.btrfs on dm-{2,3,4} and dm-3 was missing >> afterwards (around timestamp 66.*).

Re: Creating btrfs RAID on LUKS devs makes devices disappear

2017-05-12 Thread Chris Murphy
On Thu, May 11, 2017 at 5:24 PM, Ochi wrote: > Hello, > > here is the journal.log (I hope). It's quite interesting. I rebooted the > machine, performed a mkfs.btrfs on dm-{2,3,4} and dm-3 was missing > afterwards (around timestamp 66.*). However, I then logged into the machine >
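
For reproduction purposes, the step in question amounts to something like the following (device-mapper names as in the report; the profile flags are an assumption, since the exact mkfs invocation isn't quoted):

    mkfs.btrfs -d raid1 -m raid1 /dev/dm-2 /dev/dm-3 /dev/dm-4
    ls -l /dev/dm-*    # check whether one of the nodes has disappeared afterwards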

Re: Creating btrfs RAID on LUKS devs makes devices disappear

2017-05-12 Thread Austin S. Hemmelgarn
On 2017-05-12 09:54, Ochi wrote: On 12.05.2017 13:25, Austin S. Hemmelgarn wrote: On 2017-05-11 19:24, Ochi wrote: Hello, here is the journal.log (I hope). It's quite interesting. I rebooted the machine, performed a mkfs.btrfs on dm-{2,3,4} and dm-3 was missing afterwards (around timestamp

Re: Creating btrfs RAID on LUKS devs makes devices disappear

2017-05-12 Thread Ochi
On 12.05.2017 13:25, Austin S. Hemmelgarn wrote: On 2017-05-11 19:24, Ochi wrote: Hello, here is the journal.log (I hope). It's quite interesting. I rebooted the machine, performed a mkfs.btrfs on dm-{2,3,4} and dm-3 was missing afterwards (around timestamp 66.*). However, I then logged into
