Re: [PATCH V7] Btrfs: enhance raid1/10 balance heuristic

2018-11-13 Thread Anand Jain
of the files are compressible. IMO heuristics are good only for certain types of workload; giving an option to move away from them for manual tuning is desirable. Thanks, Anand On 11/12/2018 07:58 PM, Timofey Titovets wrote: From: Timofey Titovets Currently the btrfs raid1/10 balancer balances

[PATCH V8] Btrfs: enhance raid1/10 balance heuristic

2018-11-13 Thread Timofey Titovets
From: Timofey Titovets Currently the btrfs raid1/10 balancer balances requests to mirrors based on pid % number of mirrors. Make the logic aware of: - whether one of the underlying devices is non-rotational - the queue length of the underlying devices. By default, keep the pid % num_mirrors guess, but: - If one
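Both signals the heuristic relies on are exposed by the block layer in sysfs, so they can be inspected by hand; a quick sketch, with hypothetical device names sda/sdb:

  # 1 = rotational (HDD), 0 = non-rotational (SSD/NVMe)
  cat /sys/block/sda/queue/rotational
  cat /sys/block/sdb/queue/rotational

  # in-flight read/write request counts, a rough per-device queue length
  cat /sys/block/sda/inflight
  cat /sys/block/sdb/inflight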

Re: [PATCH V7] Btrfs: enhance raid1/10 balance heuristic

2018-11-13 Thread Goffredo Baroncelli
On 12/11/2018 12.58, Timofey Titovets wrote: > From: Timofey Titovets > > Currently the btrfs raid1/10 balancer balances requests to mirrors, > based on pid % number of mirrors. [...] > v6 -> v7: > - Fixes based on Nikolay Borisov's review: > * Assume num == 2 >

[PATCH V7] Btrfs: enhance raid1/10 balance heuristic

2018-11-12 Thread Timofey Titovets
From: Timofey Titovets Currently the btrfs raid1/10 balancer balances requests to mirrors based on pid % number of mirrors. Make the logic aware of: - whether one of the underlying devices is non-rotational - the queue length of the underlying devices. By default, keep the pid % num_mirrors guess, but: - If one

Re: [PATCH V6] Btrfs: enhance raid1/10 balance heuristic

2018-11-12 Thread Timofey Titovets
On Mon, 12 Nov 2018 at 10:28, Nikolay Borisov wrote: > > > > On 25.09.18 at 21:38, Timofey Titovets wrote: > > Currently the btrfs raid1/10 balancer balances requests to mirrors, > > based on pid % number of mirrors. > > > > Make the logic aware of: > > - if o

Re: [PATCH V6] Btrfs: enhance raid1/10 balance heuristic

2018-11-11 Thread Nikolay Borisov
On 25.09.18 at 21:38, Timofey Titovets wrote: > Currently the btrfs raid1/10 balancer balances requests to mirrors, > based on pid % number of mirrors. > > Make the logic aware of: > - whether one of the underlying devices is non-rotational > - the queue length of the underlying devices > >

Re: [PATCH V6] Btrfs: enhance raid1/10 balance heuristic

2018-11-11 Thread Timofey Titovets
Gentle ping. On Tue, 25 Sep 2018 at 21:38, Timofey Titovets wrote: > > Currently the btrfs raid1/10 balancer balances requests to mirrors, > based on pid % number of mirrors. > > Make the logic aware of: > - whether one of the underlying devices is non-rotational > - the queue length of the underlying devices

Re: Conversion to btrfs raid1 profile on added ext device renders some systems unable to boot into converted rootfs

2018-10-23 Thread Tony Prokott
The grub2 boot loader has had no trouble accessing all along -- presumably it's not able to leverage raid1 redundancy in btrfs but does have access to the ext mirror device and takes notice in passing of matching UUIDs. > By default, btrfs must see *all* devices to mount RAID1/10/5/6

Re: Conversion to btrfs raid1 profile on added ext device renders some systems unable to boot into converted rootfs

2018-10-18 Thread Qu Wenruo
...(external-raid) had to rely on the USB channel; By default, btrfs must see *all* devices to mount RAID1/10/5/6/0, unless you're using the "degraded" mount option. You could argue it's a bad decision, but still you have the choice. > in busybox, ext drives/partitions are all missing
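A minimal sketch of the "degraded" escape hatch mentioned above, assuming the surviving raid1 member is /dev/sda2 (device name hypothetical):

  # mount a btrfs raid1 with one member absent
  mount -o degraded /dev/sda2 /mnt

Note that on older kernels a read-write degraded mount of a two-device raid1 allocates single-profile chunks, which can prevent a second degraded mount later.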

Re: Conversion to btrfs raid1 profile on added ext device renders some systems unable to boot into converted rootfs

2018-10-18 Thread Tony Prokott
...following things/abilities to boot: > > 1) USB and SATA drivers > Means you could see both devices in the busybox environment under /dev > > 2) "btrfs" command > Mostly for scan > Then you could try the following commands under the busybox environment:
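The commands themselves are cut off in this preview; a typical sequence in such a busybox rescue shell looks roughly like the following (paths hypothetical, and not necessarily the exact commands suggested in the thread):

  # register all btrfs member devices with the kernel, then mount
  btrfs device scan
  mount /dev/sda2 /mnt

  # if a member really is absent, fall back to a degraded mount
  mount -o degraded /dev/sda2 /mnt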

Re: Conversion to btrfs raid1 profile on added ext device renders some systems unable to boot into converted rootfs

2018-10-17 Thread Qu Wenruo
> > Total devices 2 FS bytes used 24.07GiB
> > devid 1 size 401.59GiB used 26.03GiB path /dev/sda2
> > devid 2 size 401.76GiB used 26.03GiB path /dev/sdc1
> >
> > / # btrfs fi df /
> > Data, RAID1: total=24.00GiB, used=23.27GiB
> > System,

Conversion to btrfs raid1 profile on added ext device renders some systems unable to boot into converted rootfs

2018-10-17 Thread Tony Prokott
...401.76GiB used 26.03GiB path /dev/sdc1
>
> / # btrfs fi df /
> Data, RAID1: total=24.00GiB, used=23.27GiB
> System, RAID1: total=32.00MiB, used=16.00KiB
> Metadata, RAID1: total=2.00GiB, used=820.00MiB
> GlobalReserve, single: total=69.17MiB, used=0.00B

[PATCH V6] Btrfs: enhance raid1/10 balance heuristic

2018-09-25 Thread Timofey Titovets
Currently the btrfs raid1/10 balancer balances requests to mirrors based on pid % number of mirrors. Make the logic aware of: - whether one of the underlying devices is non-rotational - the queue length of the underlying devices. By default, keep the pid % num_mirrors guess, but: - If one of the mirrors is non-rotational

Re: [PATCH V5 RESEND] Btrfs: enhance raid1/10 balance heuristic

2018-09-20 Thread Timofey Titovets
mean of 3 runs):
>          Mainline      Patch
> ----------------------------------
> RAID1  | 18.9 MiB/s | 26.5 MiB/s
> RAID10 | 30.7 MiB/s | 30.7 MiB/s
> fio configuration:
> [global]
> ioengine=libaio
> buffered=0
> direct=1
> bssplit=32k/100
> size=
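The job file is truncated above; a roughly equivalent command-line run, with the elided size, the access pattern and the target path filled in as placeholders rather than the original values:

  fio --name=raid1-read --ioengine=libaio --direct=1 --buffered=0 \
      --bssplit=32k/100 --rw=randread --size=1G \
      --filename=/mnt/btrfs-raid1/testfile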

Re: [PATCH V5 RESEND] Btrfs: enhance raid1/10 balance heuristic

2018-09-20 Thread Peter Becker
I like the idea. Do you have any benchmarks for this change? The general logic looks good to me.

[PATCH V5 RESEND] Btrfs: enhance raid1/10 balance heuristic

2018-09-18 Thread Timofey Titovets
From: Timofey Titovets Currently the btrfs raid1/10 balancer balances requests to mirrors based on pid % number of mirrors. Make the logic aware of: - whether one of the underlying devices is non-rotational - the queue length of the underlying devices. By default, keep the pid % num_mirrors guess, but: - If one

Re: [PATCH V5] Btrfs: enhance raid1/10 balance heuristic

2018-09-13 Thread Timofey Titovets
On Sat, 7 Jul 2018 at 18:24, Timofey Titovets wrote: > > From: Timofey Titovets > > Currently the btrfs raid1/10 balancer balances requests to mirrors, > based on pid % number of mirrors. > > Make the logic aware of: > - whether one of the underlying devices is non-rotational > - the queue

Re: RAID1 & BTRFS critical (device sda2): corrupt leaf, bad key order

2018-09-04 Thread Qu Wenruo
On 2018/9/5 at 4:37 AM, Chris Murphy wrote: > On Tue, Sep 4, 2018 at 10:22 AM, Etienne Champetier > wrote: > >> Do you have a procedure to copy all subvolumes & skip errors? (I have >> ~200 snapshots) > > If they're already read-only snapshots, then script an iteration of > btrfs send/receive to

Re: RAID1 & BTRFS critical (device sda2): corrupt leaf, bad key order

2018-09-04 Thread Chris Murphy
On Tue, Sep 4, 2018 at 10:22 AM, Etienne Champetier wrote: > Do you have a procedure to copy all subvolumes & skip errors? (I have > ~200 snapshots) If they're already read-only snapshots, then script an iteration of btrfs send/receive to a new volume. Btrfs seed-sprout would be ideal, however
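A minimal sketch of that send/receive iteration, assuming the read-only snapshots live under /mnt/old/snapshots and the new filesystem is mounted at /mnt/new (paths hypothetical):

  # full, non-incremental send of each read-only snapshot;
  # a failing snapshot aborts only its own send, so the loop
  # effectively skips damaged snapshots and continues
  for snap in /mnt/old/snapshots/*; do
      btrfs send "$snap" | btrfs receive /mnt/new/
  done

Passing -p <parent> to btrfs send would make subsequent sends incremental and much faster, at the cost of a slightly more involved loop.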

Re: RAID1 & BTRFS critical (device sda2): corrupt leaf, bad key order

2018-09-04 Thread Etienne Champetier
Champetier wrote: > >>> Hello btrfs hackers, > >>> > >>> I have a computer acting as backup server with BTRFS RAID1, and I > >>> would like to know the different options to rebuild this RAID > >>> (I saw this thread > >>> https:/

Re: RAID1 & BTRFS critical (device sda2): corrupt leaf, bad key order

2018-09-04 Thread Qu Wenruo
On 2018/9/4 at 7:53 PM, Etienne Champetier wrote: > Hi Qu, > > On Mon, 3 Sep 2018 at 20:27, Qu Wenruo wrote: >> >> On 2018/9/3 at 10:18 PM, Etienne Champetier wrote: >>> Hello btrfs hackers, >>> >>> I have a computer acting as backup server

Re: RAID1 & BTRFS critical (device sda2): corrupt leaf, bad key order

2018-09-03 Thread Qu Wenruo
On 2018/9/3 at 10:18 PM, Etienne Champetier wrote: > Hello btrfs hackers, > > I have a computer acting as backup server with BTRFS RAID1, and I > would like to know the different options to rebuild this RAID > (I saw this thread > https://www.spinics.net/lists/linux-b

Re: IO errors when building RAID1.... ?

2018-09-03 Thread Chris Murphy
On Mon, Sep 3, 2018 at 4:23 AM, Adam Borowski wrote: > On Sun, Sep 02, 2018 at 09:15:25PM -0600, Chris Murphy wrote: >> For more than 10 years, drive firmware has handled bad sector remapping >> internally. It remaps the sector logical address to a reserve physical >> sector. >> >> NTFS and ext[234] have a means

Re: RAID1 & BTRFS critical (device sda2): corrupt leaf, bad key order

2018-09-03 Thread Chris Murphy
On Mon, Sep 3, 2018 at 7:52 AM, Etienne Champetier wrote: > Hello linux-btrfs, > > I have a computer acting as backup server with BTRFS RAID1, and I > would like to know the different options to rebuild this RAID > (I saw this thread > https://www.spinics.net/lists/linux-b

RAID1 & BTRFS critical (device sda2): corrupt leaf, bad key order

2018-09-03 Thread Etienne Champetier
Hello btrfs hackers, I have a computer acting as backup server with BTRFS RAID1, and I would like to know the different options to rebuild this RAID (I saw this thread https://www.spinics.net/lists/linux-btrfs/msg68679.html but there was no raid1) # uname -a Linux servmaison 4.4.0-134-generic
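The replies listing the rebuild options are truncated in this index; independent of the thread, the standard read-only salvage path for a filesystem in this state is btrfs restore (paths hypothetical):

  # copy data out of the damaged filesystem without mounting it read-write
  btrfs restore -v /dev/sda2 /mnt/recovery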

RAID1 & BTRFS critical (device sda2): corrupt leaf, bad key order

2018-09-03 Thread Etienne Champetier
Hello linux-btrfs, I have a computer acting as backup server with BTRFS RAID1, and I would like to know the different options to rebuild this RAID (I saw this thread https://www.spinics.net/lists/linux-btrfs/msg68679.html but there was no raid1) # uname -a Linux servmaison 4.4.0-134-generic

Re: IO errors when building RAID1.... ?

2018-09-03 Thread Adam Borowski
On Sun, Sep 02, 2018 at 09:15:25PM -0600, Chris Murphy wrote: > For more than 10 years, drive firmware has handled bad sector remapping > internally. It remaps the sector logical address to a reserve physical > sector. > > NTFS and ext[234] have a means of accepting a list of bad sectors, and > will avoid using

Re: IO errors when building RAID1.... ?

2018-09-03 Thread Pierre Couderc
On 09/03/2018 05:15 AM, Chris Murphy wrote: On Sat, Sep 1, 2018 at 1:03 AM, Pierre Couderc wrote: On 08/31/2018 08:52 PM, Chris Murphy wrote: Bad sector which is failing write. This is fatal, there isn't anything the block layer or Btrfs (or ext4 or XFS) can do about it. Well, ext234 do

Re: IO errors when building RAID1.... ?

2018-09-02 Thread Chris Murphy
On Sat, Sep 1, 2018 at 1:03 AM, Pierre Couderc wrote: > > > On 08/31/2018 08:52 PM, Chris Murphy wrote: >> >> >> Bad sector which is failing write. This is fatal, there isn't anything >> the block layer or Btrfs (or ext4 or XFS) can do about it. Well, >> ext234 do have an option to scan for bad

Re: IO errors when building RAID1.... ?

2018-09-01 Thread Pierre Couderc
On 09/01/2018 03:35 AM, Duncan wrote: Chris Murphy posted on Fri, 31 Aug 2018 13:02:16 -0600 as excerpted: If you want you can post the output from 'sudo smartctl -x /dev/sda' which will contain more information... but this is in some sense superfluous. The problem is very clearly a bad

Re: IO errors when building RAID1.... ?

2018-09-01 Thread Pierre Couderc
On 08/31/2018 08:52 PM, Chris Murphy wrote: Bad sector which is failing write. This is fatal, there isn't anything the block layer or Btrfs (or ext4 or XFS) can do about it. Well, ext2/3/4 do have an option to scan for bad sectors and create a bad sector map which can then be used at mkfs
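For reference, the ext2/3/4 mechanism being described (device name hypothetical):

  # scan for bad sectors, then hand the list to mkfs
  badblocks -o /tmp/badblocks.txt /dev/sdb1
  mkfs.ext4 -l /tmp/badblocks.txt /dev/sdb1

  # or let mkfs run the scan itself
  mkfs.ext4 -c /dev/sdb1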

Re: IO errors when building RAID1.... ?

2018-08-31 Thread Duncan
Chris Murphy posted on Fri, 31 Aug 2018 13:02:16 -0600 as excerpted: > If you want you can post the output from 'sudo smartctl -x /dev/sda' > which will contain more information... but this is in some sense > superfluous. The problem is very clearly a bad drive; the drive > explicitly reported to

Re: IO errors when building RAID1.... ?

2018-08-31 Thread Chris Murphy
If you want you can post the output from 'sudo smartctl -x /dev/sda' which will contain more information... but this is in some sense superfluous. The problem is very clearly a bad drive: the drive explicitly reported a write error to libata, including the affected sector LBA, and only the drive

Re: IO errors when building RAID1.... ?

2018-08-31 Thread Chris Murphy
On Fri, Aug 31, 2018 at 10:35 AM, Pierre Couderc wrote:
>
> Aug 31 17:34:55 server su[559]: Successful su for root by nous
> Aug 31 17:34:55 server su[559]: + /dev/pts/1 nous:root
> Aug 31 17:34:55 server su[559]: pam_unix(su:session): session opened for user root by nous(uid=1000)
> Aug 31

Re: How to erase a RAID1 (+++)?

2018-08-31 Thread Duncan
backup the last step is changing the symlink to point to the appropriate fstab for that backup, so it's correct if I end up booting from it. Meanwhile, each root, working and two backups, is its own set of two device partitions in btrfs raid1 mode. (One set of backups is on separate physical de

IO errors when building RAID1.... ?

2018-08-31 Thread Pierre Couderc
When trying to build a RAID1 on the main fs, after a normal debian stretch install:
root@server:/home/nous# btrfs device add /dev/sdb1 /
root@server:/home/nous# btrfs fi show
Label: none  uuid: ef0b9dad-c0eb-4a3b-9b41-e5e249363abc
    Total devices 2 FS bytes used 824.60MiB
    devid    1
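Worth adding, though it is not part of the quoted report: 'btrfs device add' does not convert chunks that already exist, so after adding the second device a balance with convert filters is still needed to actually reach raid1 (mount point assumed to be /):

  # convert existing data and metadata chunks to raid1
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /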

Re: How to erase a RAID1 (+++)?

2018-08-31 Thread Pierre Couderc
So, I shall mount my RAID1 very standard, and I shall expect the disaster, hoping it does not occur. Now, I shall try to absorb all that... Thank you very much! I just keep around a USB drive with a full Linux system on it, to act as "recovery". If the btrfs raid fails I boot in

Re: How to erase a RAID1 (+++)?

2018-08-31 Thread Alberto Bursi
On 8/31/2018 8:53 AM, Pierre Couderc wrote: > > OK, I have understood the message... I was planning that as you said > "semi-routinely", and I understand btrfs is not ready soon enough, and > I am very far from being a specialist as you are. > So, I shall mount my R

Re: How to erase a RAID1 (+++)?

2018-08-31 Thread Pierre Couderc
On 08/31/2018 04:29 AM, Duncan wrote: Chris Murphy posted on Thu, 30 Aug 2018 11:08:28 -0600 as excerpted: My purpose is a simple RAID1 main fs, with the bootable flag on the 2 disks in order to start in degraded mode. Good luck with this. The Btrfs archives are full of various limitations

Re: How to erase a RAID1 (+++)?

2018-08-31 Thread Pierre Couderc
On 08/30/2018 07:08 PM, Chris Murphy wrote: On Thu, Aug 30, 2018 at 3:13 AM, Pierre Couderc wrote: Trying to install a RAID1 on a debian stretch, I made a mistake and got this, after installing on disk1 and trying to add the second disk: root@server:~# fdisk -l Disk /dev/sda: 1.8 TiB

Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Duncan
Chris Murphy posted on Thu, 30 Aug 2018 11:08:28 -0600 as excerpted: >> My purpose is a simple RAID1 main fs, with the bootable flag on the 2 disks >> in order to start in degraded mode > > Good luck with this. The Btrfs archives are full of various limitations

Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Chris Murphy
And also, I'll argue this might have been a btrfs-progs bug as well, depending on what version was used and the command. Neither mkfs nor dev add should be able to add type code 0x05 (an extended partition). At least libblkid correctly shows that it's 1KiB in size, so really Btrfs should not succeed at adding this

Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Chris Murphy
On Thu, Aug 30, 2018 at 9:21 AM, Alberto Bursi wrote: > > On 8/30/2018 11:13 AM, Pierre Couderc wrote: >> Trying to install a RAID1 on a debian stretch, I made a mistake and >> got this, after installing on disk1 and trying to add the second disk: >> >> >> root

Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Chris Murphy
On Thu, Aug 30, 2018 at 3:13 AM, Pierre Couderc wrote: > Trying to install a RAID1 on a debian stretch, I made a mistake and got > this, after installing on disk1 and trying to add the second disk: > > > root@server:~# fdisk -l > Disk /dev/sda: 1.8 TiB, 2000398934016 bytes,

Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Alberto Bursi
On 8/30/2018 11:13 AM, Pierre Couderc wrote: > Trying to install a RAID1 on a debian stretch, I made a mistake and > got this, after installing on disk1 and trying to add the second disk: > > > root@server:~# fdisk -l > Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 39070291

Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Kai Stian Olstad
On Thursday, 30 August 2018 12:01:55 CEST Pierre Couderc wrote: > > On 08/30/2018 11:35 AM, Qu Wenruo wrote: > > > > On 2018/8/30 at 5:13 PM, Pierre Couderc wrote: > >> Trying to install a RAID1 on a debian stretch, I made a mistake and > >> got this, after i

Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Qu Wenruo
On 2018/8/30 at 6:01 PM, Pierre Couderc wrote: > > > On 08/30/2018 11:35 AM, Qu Wenruo wrote: >> >> On 2018/8/30 at 5:13 PM, Pierre Couderc wrote: >>> Trying to install a RAID1 on a debian stretch, I made a mistake and >>> got this, after installing

Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Pierre Couderc
On 08/30/2018 11:35 AM, Qu Wenruo wrote: On 2018/8/30 at 5:13 PM, Pierre Couderc wrote: Trying to install a RAID1 on a debian stretch, I made a mistake and got this, after installing on disk1 and trying to add the second disk: root@server:~# fdisk -l Disk /dev/sda: 1.8 TiB, 2000398934016

Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Qu Wenruo
On 2018/8/30 at 5:13 PM, Pierre Couderc wrote: > Trying to install a RAID1 on a debian stretch, I made a mistake and > got this, after installing on disk1 and trying to add the second disk: > > > root@server:~# fdisk -l > Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 39070291

Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Pierre Couderc
Trying to install a RAID1 on a debian stretch, I made a mistake and got this, after installing on disk1 and trying to add the second disk:
root@server:~# fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical

Re: How to erase a RAID1 ?

2018-08-29 Thread Qu Wenruo
[Forgot to Cc the list] On 2018/8/29 at 10:04 PM, Pierre Couderc wrote: > > On 08/29/2018 02:52 PM, Qu Wenruo wrote: >> >> On 2018/8/29 at 8:49 PM, Pierre Couderc wrote: >>> I want to reinstall a RAID1 btrfs system (which is now under debian >>> stretch, and will b

Re: How to erase a RAID1 ?

2018-08-29 Thread Pierre Couderc
On 08/29/2018 02:52 PM, Qu Wenruo wrote: On 2018/8/29 at 8:49 PM, Pierre Couderc wrote: I want to reinstall a RAID1 btrfs system (which is now under debian stretch, and will be reinstalled in stretch). If you still want to use btrfs, just umount the original fs, and # mkfs.btrfs -f

Re: How to erase a RAID1 ?

2018-08-29 Thread Qu Wenruo
On 2018/8/29 at 8:49 PM, Pierre Couderc wrote: > I want to reinstall a RAID1 btrfs system (which is now under debian > stretch, and will be reinstalled in stretch). If you still want to use btrfs, just umount the original fs, and # mkfs.btrfs -f Then a completely new btrfs.
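A sketch of that recreate step for a two-device raid1, with hypothetical device names; Qu's reply only shows 'mkfs.btrfs -f', the profile options are added here for completeness:

  umount /mnt
  mkfs.btrfs -f -d raid1 -m raid1 /dev/sda2 /dev/sdb1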

How to erase a RAID1 ?

2018-08-29 Thread Pierre Couderc
I want to reinstall a RAID1 btrfs system (which is now under debian stretch, and will be reinstalled in stretch). How to correctly "erase" it? Not truly hard-erase it, but so that old data does not appear... It is not clear in the wiki. Thanks PC

Re: csum failed on raid1 even after clean scrub?

2018-08-01 Thread Duncan
Sterling Windmill posted on Mon, 30 Jul 2018 21:06:54 -0400 as excerpted: > Both drives are identical, Seagate 8TB external drives Are those the "shingled" SMR drives, normally sold as archive drives and first commonly available in the 8TB size, and often bought for their generally better

Re: csum failed on raid1 even after clean scrub?

2018-07-30 Thread Sterling Windmill
Anything else I can collect that might be helpful in understanding what's happening here? On Mon, Jul 30, 2018 at 8:56 PM Qu Wenruo wrote: > > > On 31 Jul 2018 at 08:43, Sterling Windmill wrote: > > I am using a two-disk raid1 btrfs filesystem spanning two external hard > >

Re: csum failed on raid1 even after clean scrub?

2018-07-30 Thread Qu Wenruo
On 31 Jul 2018 at 08:43, Sterling Windmill wrote: > I am using a two-disk raid1 btrfs filesystem spanning two external hard > drives connected via USB 3.0. Is there any speed difference between the two devices? And are these 2 devices under the same USB 3.0 root hub or different root hubs?

csum failed on raid1 even after clean scrub?

2018-07-30 Thread Sterling Windmill
I am using a two-disk raid1 btrfs filesystem spanning two external hard drives connected via USB 3.0. While copying ~6TB of data from this filesystem to a local disk via rsync, I am seeing messages like the following in dmesg output: [ 2213.406267] BTRFS warning (device sdj1): csum failed root 5
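The usual first checks in this situation, assuming the filesystem is mounted at /mnt (mount point hypothetical):

  # re-verify both mirrors in the foreground, with per-device statistics
  btrfs scrub start -Bd /mnt

  # cumulative per-device I/O and corruption error counters
  btrfs device stats /mnt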

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-20 Thread Austin S. Hemmelgarn
On 2018-07-20 14:41, Hugo Mills wrote: On Fri, Jul 20, 2018 at 09:38:14PM +0300, Andrei Borzenkov wrote: On 20.07.2018 20:16, Goffredo Baroncelli wrote: [snip] Limiting the number of disks per raid in BTRFS would be quite simple to implement in the "chunk allocator". You mean that currently

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-20 Thread Hugo Mills
On Fri, Jul 20, 2018 at 09:38:14PM +0300, Andrei Borzenkov wrote: > On 20.07.2018 20:16, Goffredo Baroncelli wrote: [snip] > > Limiting the number of disks per raid in BTRFS would be quite simple to > > implement in the "chunk allocator" > > > > You mean that currently RAID5 stripe size is equal

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-20 Thread Andrei Borzenkov
>>>> orthogonal today, Hugo's whole point was that btrfs is theoretically >>>>>> flexible enough to allow both together and the feature may at some >>>>>> point be added, so it makes sense to have a layout notation format >>>>>> flexi

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-20 Thread Austin S. Hemmelgarn
talking about striping plus parity. What I'm referring to is different though. Just like RAID10 used to be implemented as RAID0 on top of RAID1, RAID05 is RAID0 on top of RAID5. That is, you're striping your data across multiple RAID5 arrays instead of using one big RAID5 array to store it all

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-20 Thread Goffredo Baroncelli
enough to allow both together and the feature may at some >>>>> point be added, so it makes sense to have a layout notation format >>>>> flexible enough to allow it as well. >>>> >>>> When I say orthogonal, it means that these can be combined: i.e. you can

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-20 Thread Goffredo Baroncelli
> RAID15 and RAID16 are a similar case to RAID51 and RAID61, except they >>> might actually make sense in BTRFS to provide a backup means of rebuilding >>> blocks that fail checksum validation if both copies fail. >> If you need further redundancy, it is easy to implement a p

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-20 Thread David Sterba
On Thu, Jul 19, 2018 at 07:47:23AM -0400, Austin S. Hemmelgarn wrote: > > So this special level will be used for RAID56 for now? > > Or it will also be possible for metadata usage just like current RAID1? > > > > If the latter, the metadata scrub problem will ne

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-20 Thread David Sterba
ion, > > I've added a 4-copy replication, that would allow triple copy raid (that > > does not have a standardized name). > > So this special level will be used for RAID56 for now? > Or it will also be possible for metadata usage just like current RAID1? It's a new profile usable i

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-19 Thread Andrei Borzenkov
While striping and mirroring/pairing are >>>> orthogonal today, Hugo's whole point was that btrfs is theoretically >>>> flexible enough to allow both together and the feature may at some >>>> point be added, so it makes sense to have a layout notation format

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-19 Thread waxhead
Hugo Mills wrote: On Wed, Jul 18, 2018 at 08:39:48AM +0000, Duncan wrote: Duncan posted on Wed, 18 Jul 2018 07:20:09 +0000 as excerpted: Perhaps it's a case of coder's view (no code doing it that way, it's just a coincidental oddity conditional on equal sizes), vs. sysadmin's view (code or

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-19 Thread Austin S. Hemmelgarn
Duncan wrote: Goffredo Baroncelli posted on Mon, 16 Jul 2018 20:29:46 +0200 as excerpted: [...] When I say orthogonal, it means that these can be combined, i.e. you can have:
- striping (RAID0)
- parity (?)
- striping + parity (e.g. RAID5/6)
- mirroring (RAID1)
- mirroring + striping (RAID10

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-19 Thread Austin S. Hemmelgarn
be possible for metadata usage just like current RAID1? If the latter, the metadata scrub problem will need to be considered more. For more-copies RAID1, there will be a higher possibility of one or two devices going missing and then being scrubbed. For metadata scrub, inlined csum can't ensure it's

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-19 Thread Austin S. Hemmelgarn
say orthogonal, it means that these can be combined, i.e. you can have:
- striping (RAID0)
- parity (?)
- striping + parity (e.g. RAID5/6)
- mirroring (RAID1)
- mirroring + striping (RAID10)
However you can't have mirroring + parity; this means that a notation where both 'C' (= number of copies) and 'P' (= number of parities) appear is too verbose. Yes

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-19 Thread Qu Wenruo
level will be used for RAID56 for now? Or will it also be possible for metadata usage just like current RAID1? If the latter, the metadata scrub problem will need to be considered more. For more-copies RAID1, there will be a higher possibility of one or two devices going missing and then being scrubbed. For met

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-18 Thread Goffredo Baroncelli
>>> flexible enough to allow both together and the feature may at some >>> point be added, so it makes sense to have a layout notation format >>> flexible enough to allow it as well. >> >> When I say orthogonal, it means that these can be combined: i.e. you can

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-18 Thread Hugo Mills
On Wed, Jul 18, 2018 at 08:39:48AM +0000, Duncan wrote: > Duncan posted on Wed, 18 Jul 2018 07:20:09 +0000 as excerpted: > > >> As implemented in BTRFS, raid1 doesn't have striping. > > > > The argument is that because there's only two copies, on multi-device >

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-18 Thread Austin S. Hemmelgarn
i.e. you can have:
- striping (RAID0)
- parity (?)
- striping + parity (e.g. RAID5/6)
- mirroring (RAID1)
- mirroring + striping (RAID10)
However you can't have mirroring + parity; this means that a notation where both 'C' (= number of copies) and 'P' (= number of parities) appear is too verbose. Yes

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-18 Thread Austin S. Hemmelgarn
On 2018-07-18 04:39, Duncan wrote: Duncan posted on Wed, 18 Jul 2018 07:20:09 +0000 as excerpted: As implemented in BTRFS, raid1 doesn't have striping. The argument is that because there's only two copies, on multi-device btrfs raid1 with 4+ devices of equal size, chunk allocations tend

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-18 Thread Duncan
Duncan posted on Wed, 18 Jul 2018 07:20:09 +0000 as excerpted: >> As implemented in BTRFS, raid1 doesn't have striping. > > The argument is that because there's only two copies, on multi-device > btrfs raid1 with 4+ devices of equal size, chunk allocations tend to > alte

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-18 Thread Duncan
sense to have a layout notation format >> flexible enough to allow it as well. > > When I say orthogonal, it means that these can be combined, i.e. you can have:
> - striping (RAID0)
> - parity (?)
> - striping + parity (e.g. RAID5/6)
> - mirroring (RAID1)
> - mirroring + stripi

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-18 Thread Goffredo Baroncelli
have:
- striping (RAID0)
- parity (?)
- striping + parity (e.g. RAID5/6)
- mirroring (RAID1)
- mirroring + striping (RAID10)
However you can't have mirroring + parity; this means that a notation where both 'C' (= number of copies) and 'P' (= number of parities) appear is too verbose. [...]

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-17 Thread Duncan
some point be added, so it makes sense to have a layout notation format flexible enough to allow it as well. In the global context, just to complete things and mostly for others reading, as I feel a bit like a simpleton explaining to the expert here: just as raid10 is shorthand for raid1+0, aka raid

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-16 Thread waxhead
some aliases) would be much better for the commoners (such as myself). ...snip... > Which would make the above table look like so:
Old format / My Format / My suggested alias
SINGLE  / R0.S0.P0 / SINGLE
DUP     / R1.S1.P0 / DUP (or even MIRRORLOCAL1)
RAID0   / R0.Sm.P0 / STRIPE
RAID1   / R1.S0

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-16 Thread Austin S. Hemmelgarn
RAID1   / 2C   / MIRROR1
RAID1c3 / 3C   / MIRROR2
RAID1c4 / 4C   / MIRROR3
RAID10  / 2CmS / STRIPE.MIRROR1
Striping and mirroring/pairing are orthogonal properties; mirror and parity are mutually exclusive. What about:
RAID1   -> MIRROR1
RAID10  -> MIRROR1S
RAID1c3 -> MIRROR

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-16 Thread Goffredo Baroncelli
> SINGLE  / 1C   / SINGLE
> DUP     / 2CD  / DUP (or even MIRRORLOCAL1)
> RAID0   / 1CmS / STRIPE
> RAID1   / 2C   / MIRROR1
> RAID1c3 / 3C   / MIRROR2
> RAID1c4 / 4C   / MIRROR3
> RAID10  / 2CmS / STRIPE.MIRROR1
Striping and mirroring/pairing are orthogonal properties; mirror and pari

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-15 Thread Hugo Mills
On Fri, Jul 13, 2018 at 08:46:28PM +0200, David Sterba wrote: [snip] > An interesting question is the naming of the extended profiles. I picked > something that can be easily understood but it's not a final proposal. > Years ago, Hugo proposed a naming scheme that described the > non-standard raid

Re: [PATCH 0/4] 3- and 4- copy RAID1

2018-07-15 Thread waxhead
be much better for the commoners (such as myself). For example:
Old format / New Format / My suggested alias
SINGLE  / 1C   / SINGLE
DUP     / 2CD  / DUP (or even MIRRORLOCAL1)
RAID0   / 1CmS / STRIPE
RAID1   / 2C   / MIRROR1
RAID1c3 / 3C   / MIRROR2
RAID1c4 / 4C   / MIRROR3
RAID10

[PATCH 0/4] 3- and 4- copy RAID1

2018-07-13 Thread David Sterba
replication for a small bribe. The new raid profiles are covered by an incompatibility bit, called extended_raid; the (idealistic) plan is to stuff in as many new raid-related features as possible. Patch 4/4 mentions the 3- and 4-copy raid1, configurable stripe length, write hole log and triple

[PATCH V5] Btrfs: enhance raid1/10 balance heuristic

2018-07-07 Thread Timofey Titovets
From: Timofey Titovets Currently the btrfs raid1/10 balancer balances requests to mirrors based on pid % number of mirrors. Make the logic aware of: - whether one of the underlying devices is non-rotational - the queue length of the underlying devices. By default, keep the pid % num_mirrors guess, but: - If one

[PATCH RESEND V4] Btrfs: enhance raid1/10 balance heuristic

2018-07-07 Thread Timofey Titovets
From: Timofey Titovets Currently the btrfs raid1/10 balancer balances requests to mirrors based on pid % number of mirrors. Make the logic aware of: - whether one of the underlying devices is non-rotational - the queue length of the underlying devices. By default, keep the pid % num_mirrors guess, but: - If one

[PATCH] fstests: btrfs/161: test raid1 missing writes

2018-05-16 Thread Anand Jain
Test to make sure that a raid1 device with a missed write reads good data when reassembled. Signed-off-by: Anand Jain <anand.j...@oracle.com> --- This test case fails as of now. I am sending this to the btrfs ML only, as it depends on the read_mirror_policy kernel patches which are in the ML.

Re: [PATCH V4] Btrfs: enhance raid1/10 balance heuristic

2018-04-25 Thread Misono Tomohiro
On 2018/04/25 17:15, Timofey Titovets wrote: > 2018-04-25 10:54 GMT+03:00 Misono Tomohiro <misono.tomoh...@jp.fujitsu.com>: >> On 2018/04/25 9:20, Timofey Titovets wrote: >>> Currently the btrfs raid1/10 balancer balances requests to mirrors, >>> based on pid % n

Re: [PATCH V4] Btrfs: enhance raid1/10 balance heuristic

2018-04-25 Thread Timofey Titovets
2018-04-25 10:54 GMT+03:00 Misono Tomohiro <misono.tomoh...@jp.fujitsu.com>: > On 2018/04/25 9:20, Timofey Titovets wrote: >> Currently the btrfs raid1/10 balancer balances requests to mirrors, >> based on pid % number of mirrors. >> >> Make the logic aware of: >>

Re: [PATCH V4] Btrfs: enhance raid1/10 balance heuristic

2018-04-25 Thread Misono Tomohiro
On 2018/04/25 9:20, Timofey Titovets wrote: > Currently the btrfs raid1/10 balancer balances requests to mirrors, > based on pid % number of mirrors. > > Make the logic aware of: > - whether one of the underlying devices is non-rotational > - the queue length of the underlying devices > >

[PATCH V4] Btrfs: enhance raid1/10 balance heuristic

2018-04-24 Thread Timofey Titovets
Currently the btrfs raid1/10 balancer balances requests to mirrors based on pid % number of mirrors. Make the logic aware of: - whether one of the underlying devices is non-rotational - the queue length of the underlying devices. By default, keep the pid % num_mirrors guess, but: - If one of the mirrors is non-rotational

Re: [PATCH 3/3] btrfs-progs: tests/misc: Test if btrfs-image can handle RAID1 missing device

2018-03-30 Thread David Sterba
tests/misc-tests/030-missing-device-image/test.sh
> @@ -0,0 +1,57 @@
> +#!/bin/bash
> +# Test that btrfs-image can dump image correctly for missing device (RAID1)
> +#
> +# At least for RAID1, btrfs-image should be able to handle one missing device
> +# without any problem

[PATCH 3/3] btrfs-progs: tests/misc: Test if btrfs-image can handle RAID1 missing device

2018-03-30 Thread Qu Wenruo
diff --git a/tests/misc-tests/030-missing-device-image/test.sh b/tests/misc-tests/030-missing-device-image/test.sh
new file mode 100755
index ..b8ae3a950cc9
--- /dev/null
+++ b/tests/misc-tests/030-missing-device-image/test.sh
@@ -0,0 +1,57 @@
+#!/bin/bash
+# Test that btrfs-image can dump image correctly for missing device (RAID1)
+#
+# At

Re: grub_probe/grub-mkimage does not find all drives in BTRFS RAID1

2018-03-22 Thread Duncan
Matthew Hawn posted on Thu, 22 Mar 2018 00:13:38 +0000 as excerpted: > This is almost definitely a bug in GRUB, but I wanted to get the btrfs > mailing list's opinion first. > > Symptoms: > I have a btrfs raid1 /boot and root filesystem. Ever since I replaced a > drive, w

grub_probe/grub-mkimage does not find all drives in BTRFS RAID1

2018-03-21 Thread Matthew Hawn
This is almost definitely a bug in GRUB, but I wanted to get the btrfs mailing list's opinion first. Symptoms: I have a btrfs raid1 /boot and root filesystem. Ever since I replaced a drive, when I run the grub utilities to create my grub.cfg and install to the boot sector, they only recognize one

Re: Raid1 volume stuck as read-only: How to dump, recreate and restore its content?

2018-03-15 Thread Duncan
risk your data on bugs >> that were after all discovered and fixed over a year ago? > > It is also missing newly introduced bugs. Right now I'm dealing with a > btrfs raid1 server that had the fs getting stuck and kernel oopses due > to a regression: > > https://bugzilla.kernel.or

Re: Raid1 volume stuck as read-only: How to dump, recreate and restore its content?

2018-03-13 Thread Piotr Pawłow
It is also missing newly introduced bugs. Right now I'm dealing with a btrfs raid1 server that had the fs getting stuck and kernel oopses due to a regression: https://bugzilla.kernel.org/show_bug.cgi?id=198861 I had to cherry-pick commit 3be8828fc507cdafe7040a3dcf361a2bcd8e305b and recompile
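A sketch of the backport step described, run inside a local kernel source tree (tree path hypothetical; the commit id is the one quoted above):

  cd ~/src/linux
  git cherry-pick 3be8828fc507cdafe7040a3dcf361a2bcd8e305b
  # then rebuild and install the kernel as usual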

Re: Raid1 volume stuck as read-only: How to dump, recreate and restore its content?

2018-03-12 Thread Duncan
Adam Borowski posted on Sun, 11 Mar 2018 18:47:13 +0100 as excerpted: > On Sun, Mar 11, 2018 at 11:28:08PM +0700, Andreas Hild wrote: >> Following a physical disk failure of a RAID1 array, I tried to mount >> the remaining volume of a root partition with "-o degraded". Fo
