of the files are compressible.
IMO heuristics are good only for certain types of workload. Giving
an option to move away from them for manual tuning is desirable.
Thanks, Anand
On 11/12/2018 07:58 PM, Timofey Titovets wrote:
From: Timofey Titovets
Currently the btrfs raid1/10 balancer balances requests to mirrors,
based on pid % number of mirrors.
Make the logic take into account:
- if one of the underlying devices is non-rotational
- queue length of the underlying devices
By default, try the pid % num_mirrors guess, but:
- If one
On 12/11/2018 12.58, Timofey Titovets wrote:
> From: Timofey Titovets
>
> Currently the btrfs raid1/10 balancer balances requests to mirrors,
> based on pid % number of mirrors.
[...]
> v6 -> v7:
> - Fixes based on Nikolay Borisov review:
> * Assume num == 2
>
From: Timofey Titovets
Currently the btrfs raid1/10 balancer balances requests to mirrors,
based on pid % number of mirrors.
Make the logic take into account:
- if one of the underlying devices is non-rotational
- queue length of the underlying devices
By default, try the pid % num_mirrors guess, but:
- If one
Mon, 12 Nov 2018 at 10:28, Nikolay Borisov:
>
>
>
> On 25.09.18 21:38, Timofey Titovets wrote:
> > Currently the btrfs raid1/10 balancer balances requests to mirrors,
> > based on pid % number of mirrors.
> >
> > Make the logic take into account:
> > - if o
On 25.09.18 21:38, Timofey Titovets wrote:
> Currently the btrfs raid1/10 balancer balances requests to mirrors,
> based on pid % number of mirrors.
>
> Make the logic take into account:
> - if one of the underlying devices is non-rotational
> - queue length of the underlying devices
>
>
Gentle ping.
Tue, 25 Sep 2018 at 21:38, Timofey Titovets:
>
> Currently the btrfs raid1/10 balancer balances requests to mirrors,
> based on pid % number of mirrors.
>
> Make the logic take into account:
> - if one of the underlying devices is non-rotational
> - queue length of the underlying devic
Boot loader grub2
all along still has no trouble accessing it -- presumably it's not able to
leverage raid1 redundancy in btrfs, but it does have access to the ext mirror
device and takes notice in passing of the matching UUIDs.
> By default, btrfs must see *all* devices to mount RAID1/10/5/6
s (external-raid) had to rely on the usb channel;
By default, btrfs must see *all* devices to mount RAID1/10/5/6/0.
Unless you're using "degraded" mount option.
You could argue it's a bad decision, but still you have the choice.
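For illustration, mounting a two-device RAID1 with one device missing would look like this (device name and mountpoint are hypothetical):
# mount -o degraded /dev/sda2 /mnt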
> in busybox, ext drives/partitions are all missing
You need the following things/abilities to boot:
>
> 1) usb and sata drivers
> Means you could see both devices in the busybox environment under /dev
>
> 2) "Btrfs" command
> Mostly for scan
> Then you could try the following commands under busybox environment:
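The quoted command list is cut off in this snippet; a plausible minimal sequence for this situation (illustrative, not the author's exact commands) would be:
# btrfs device scan
# mount /dev/sda2 /root    (add -o degraded if a device is still missing)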
> > Total devices 2 FS bytes used 24.07GiB
> > devid 1 size 401.59GiB used 26.03GiB path /dev/sda2
> > devid 2 size 401.76GiB used 26.03GiB path /dev/sdc1
> >
> > / # btrfs fi df /
> > Data, RAID1: total=24.00GiB, used=23.27GiB
> > System,
iB used 26.03GiB path /dev/sdc1
>
> / # btrfs fi df /
> Data, RAID1: total=24.00GiB, used=23.27GiB
> System, RAID1: total=32.00MiB, used=16.00KiB
> Metadata, RAID1: total=2.00GiB, used=820.00MiB
> GlobalReserve, single: total=69.17MiB, used=0.00B
Currently the btrfs raid1/10 balancer balances requests to mirrors,
based on pid % number of mirrors.
Make the logic take into account:
- if one of the underlying devices is non-rotational
- queue length of the underlying devices
By default, try the pid % num_mirrors guess, but:
- If one of the mirrors is non-rotational
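For reference, the rotational flag this logic keys off is the same one visible from userspace (device name illustrative):
# cat /sys/block/sda/queue/rotational
0    (0 = non-rotational/SSD, 1 = rotational/HDD)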
mean of 3 runs):
>        | Mainline    | Patch
> -------+-------------+------------
> RAID1  | 18.9 MiB/s  | 26.5 MiB/s
> RAID10 | 30.7 MiB/s  | 30.7 MiB/s
> fio configuration:
> [global]
> ioengine=libaio
> buffered=0
> direct=1
> bssplit=32k/100
> size=
I like the idea.
Do you have any benchmarks for this change?
The general logic looks good to me.
From: Timofey Titovets
Currently the btrfs raid1/10 balancer balances requests to mirrors,
based on pid % number of mirrors.
Make the logic take into account:
- if one of the underlying devices is non-rotational
- queue length of the underlying devices
By default, try the pid % num_mirrors guess, but:
- If one
Sat, 7 Jul 2018 at 18:24, Timofey Titovets:
>
> From: Timofey Titovets
>
> Currently the btrfs raid1/10 balancer balances requests to mirrors,
> based on pid % number of mirrors.
>
> Make the logic take into account:
> - if one of the underlying devices is non-rotational
> - Queue
On 2018/9/5 4:37 AM, Chris Murphy wrote:
> On Tue, Sep 4, 2018 at 10:22 AM, Etienne Champetier
> wrote:
>
>> Do you have a procedure to copy all subvolumes & skip error ? (I have
>> ~200 snapshots)
>
> If they're already read-only snapshots, then script an iteration of
> btrfs send receive to
On Tue, Sep 4, 2018 at 10:22 AM, Etienne Champetier
wrote:
> Do you have a procedure to copy all subvolumes & skip error ? (I have
> ~200 snapshots)
If they're already read-only snapshots, then script an iteration of
btrfs send receive to a new volume.
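A minimal sketch of such a loop, assuming the read-only snapshots live under /mnt/snap and the new filesystem is mounted at /mnt2 (both paths hypothetical); failures are reported and skipped:
for s in /mnt/snap/*; do
    # full (non-incremental) send of each read-only snapshot
    btrfs send "$s" | btrfs receive /mnt2 || echo "skipped $s"
done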
Btrfs seed-sprout would be ideal, however
Etienne Champetier wrote:
> >>> Hello btrfs hackers,
> >>>
> >>> I have a computer acting as backup server with BTRFS RAID1, and I
> >>> would like to know the different options to rebuild this RAID
> >>> (I saw this thread
> >>> https:/
On 2018/9/4 7:53 PM, Etienne Champetier wrote:
> Hi Qu,
>
> On Mon, Sep 3, 2018 at 20:27, Qu Wenruo wrote:
>>
>> On 2018/9/3 10:18 PM, Etienne Champetier wrote:
>>> Hello btrfs hackers,
>>>
>>> I have a computer acting as backup serve
On 2018/9/3 10:18 PM, Etienne Champetier wrote:
> Hello btrfs hackers,
>
> I have a computer acting as backup server with BTRFS RAID1, and I
> would like to know the different options to rebuild this RAID
> (I saw this thread
> https://www.spinics.net/lists/linux-b
On Mon, Sep 3, 2018 at 4:23 AM, Adam Borowski wrote:
> On Sun, Sep 02, 2018 at 09:15:25PM -0600, Chris Murphy wrote:
>> For more than 10 years, drive firmware has handled bad sector remapping internally.
>> It remaps the sector's logical address to a reserve physical sector.
>>
>> NTFS and ext[234] have a means
On Mon, Sep 3, 2018 at 7:52 AM, Etienne Champetier
wrote:
> Hello linux-btrfs,
>
> I have a computer acting as backup server with BTRFS RAID1, and I
> would like to know the different options to rebuild this RAID
> (I saw this thread
> https://www.spinics.net/lists/linux-b
Hello btrfs hackers,
I have a computer acting as backup server with BTRFS RAID1, and I
would like to know the different options to rebuild this RAID
(I saw this thread
https://www.spinics.net/lists/linux-btrfs/msg68679.html but there was
no raid 1)
# uname -a
Linux servmaison 4.4.0-134-generic
Hello linux-btrfs,
I have a computer acting as backup server with BTRFS RAID1, and I
would like to know the different options to rebuild this RAID
(I saw this thread
https://www.spinics.net/lists/linux-btrfs/msg68679.html but there was
no raid 1)
# uname -a
Linux servmaison 4.4.0-134-generic
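(Not quoted from the replies, just the usual paths: rebuilding a btrfs RAID1 generally means either replacing the failed device in place, or adding a new device and deleting the missing one. Illustrative commands, assuming the failed disk was devid 2, the new disk is /dev/sdc, and the fs is mounted at /mnt:)
# btrfs replace start 2 /dev/sdc /mnt
or
# btrfs device add /dev/sdc /mnt
# btrfs device delete missing /mnt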
On Sun, Sep 02, 2018 at 09:15:25PM -0600, Chris Murphy wrote:
> For more than 10 years, drive firmware has handled bad sector remapping internally.
> It remaps the sector's logical address to a reserve physical sector.
>
> NTFS and ext[234] have a means of accepting a list of bad sectors, and
> will avoid using
On 09/03/2018 05:15 AM, Chris Murphy wrote:
On Sat, Sep 1, 2018 at 1:03 AM, Pierre Couderc wrote:
On 08/31/2018 08:52 PM, Chris Murphy wrote:
Bad sector which is failing write. This is fatal, there isn't anything
the block layer or Btrfs (or ext4 or XFS) can do about it. Well,
ext234 do
On Sat, Sep 1, 2018 at 1:03 AM, Pierre Couderc wrote:
>
>
> On 08/31/2018 08:52 PM, Chris Murphy wrote:
>>
>>
>> Bad sector which is failing write. This is fatal, there isn't anything
>> the block layer or Btrfs (or ext4 or XFS) can do about it. Well,
>> ext234 do have an option to scan for bad
On 09/01/2018 03:35 AM, Duncan wrote:
Chris Murphy posted on Fri, 31 Aug 2018 13:02:16 -0600 as excerpted:
If you want you can post the output from 'sudo smartctl -x /dev/sda'
which will contain more information... but this is in some sense
superfluous. The problem is very clearly a bad
On 08/31/2018 08:52 PM, Chris Murphy wrote:
Bad sector which is failing write. This is fatal, there isn't anything
the block layer or Btrfs (or ext4 or XFS) can do about it. Well,
ext234 do have an option to scan for bad sectors and create a bad
sector map which then can be used at mkfs
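As a concrete sketch of that ext[234] mechanism (device name illustrative): scan for bad blocks, then feed the list to mkfs.
# badblocks -o bad.txt /dev/sdb1
# mkfs.ext4 -l bad.txt /dev/sdb1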
Chris Murphy posted on Fri, 31 Aug 2018 13:02:16 -0600 as excerpted:
> If you want you can post the output from 'sudo smartctl -x /dev/sda'
> which will contain more information... but this is in some sense
> superfluous. The problem is very clearly a bad drive, the drive
> explicitly reported to
If you want you can post the output from 'sudo smartctl -x /dev/sda'
which will contain more information... but this is in some sense
superfluous. The problem is very clearly a bad drive: the drive
explicitly reported a write error to libata, and included the affected
sector LBA, and only the drive
On Fri, Aug 31, 2018 at 10:35 AM, Pierre Couderc wrote:
>
> Aug 31 17:34:55 server su[559]: Successful su for root by nous
> Aug 31 17:34:55 server su[559]: + /dev/pts/1 nous:root
> Aug 31 17:34:55 server su[559]: pam_unix(su:session): session opened for
> user root by nous(uid=1000)
> Aug 31
backup, the last step is changing
the symlink to point to the appropriate fstab for that backup, so it's
correct if I end up booting from it.
Meanwhile, each root, working and two backups, is its own set of two
device partitions in btrfs raid1 mode. (One set of backups is on
separate physical de
When trying to build a RAID1 on the main fs, after a normal debian stretch
install:
root@server:/home/nous# btrfs device add /dev/sdb1 /
root@server:/home/nous# btrfs fi show
Label: none uuid: ef0b9dad-c0eb-4a3b-9b41-e5e249363abc
Total devices 2 FS bytes used 824.60MiB
devid 1
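(The usual next step after the device add, not shown in this truncated snippet, is converting existing data and metadata to raid1; a sketch:)
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /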
So, I shall mount my RAID1 the standard way, and I shall expect the
disaster, hoping it does not occur.
Now, I shall try to absorb all that...
Thank you very much !
I just keep around a USB drive with a full Linux system on it, to act as
"recovery". If the btrfs raid fails I boot in
On 8/31/2018 8:53 AM, Pierre Couderc wrote:
>
> OK, I have understood the message... I was planning that, as you said,
> "semi-routinely", and I understand btrfs is not yet ready enough, and
> I am very far from being a specialist like you.
> So, I shall mount my R
On 08/31/2018 04:29 AM, Duncan wrote:
Chris Murphy posted on Thu, 30 Aug 2018 11:08:28 -0600 as excerpted:
My purpose is a simple RAID1 main fs, with bootable flag on the 2 disks
in order to start in degraded mode
Good luck with this. The Btrfs archives are full of various limitations
On 08/30/2018 07:08 PM, Chris Murphy wrote:
On Thu, Aug 30, 2018 at 3:13 AM, Pierre Couderc wrote:
Trying to install a RAID1 on a debian stretch, I made some mistake and got
this, after installing on disk1 and trying to add second disk :
root@server:~# fdisk -l
Disk /dev/sda: 1.8 TiB
Chris Murphy posted on Thu, 30 Aug 2018 11:08:28 -0600 as excerpted:
>> My purpose is a simple RAID1 main fs, with bootable flag on the 2 disks
>> in order to start in degraded mode
>
> Good luck with this. The Btrfs archives are full of various limitation
And also, I'll argue this might have been a btrfs-progs bug as well,
depending on what version was used and the command. Neither mkfs nor dev
add should be able to add type code 0x05. At least libblkid
correctly shows that it's 1KiB in size, so really Btrfs should not
succeed at adding this
On Thu, Aug 30, 2018 at 9:21 AM, Alberto Bursi wrote:
>
> On 8/30/2018 11:13 AM, Pierre Couderc wrote:
>> Trying to install a RAID1 on a debian stretch, I made some mistake and
>> got this, after installing on disk1 and trying to add second disk :
>>
>>
>> root
On Thu, Aug 30, 2018 at 3:13 AM, Pierre Couderc wrote:
> Trying to install a RAID1 on a debian stretch, I made some mistake and got
> this, after installing on disk1 and trying to add second disk :
>
>
> root@server:~# fdisk -l
> Disk /dev/sda: 1.8 TiB, 2000398934016 bytes,
On 8/30/2018 11:13 AM, Pierre Couderc wrote:
> Trying to install a RAID1 on a debian stretch, I made some mistake and
> got this, after installing on disk1 and trying to add second disk :
>
>
> root@server:~# fdisk -l
> Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 39070291
On Thursday, 30 August 2018 12:01:55 CEST Pierre Couderc wrote:
>
> On 08/30/2018 11:35 AM, Qu Wenruo wrote:
> >
> > On 2018/8/30 5:13 PM, Pierre Couderc wrote:
> >> Trying to install a RAID1 on a debian stretch, I made some mistake and
> >> got this, after i
On 2018/8/30 6:01 PM, Pierre Couderc wrote:
>
>
> On 08/30/2018 11:35 AM, Qu Wenruo wrote:
>>
>> On 2018/8/30 5:13 PM, Pierre Couderc wrote:
>>> Trying to install a RAID1 on a debian stretch, I made some mistake and
>>> got this, after installing
On 08/30/2018 11:35 AM, Qu Wenruo wrote:
On 2018/8/30 5:13 PM, Pierre Couderc wrote:
Trying to install a RAID1 on a debian stretch, I made some mistake and
got this, after installing on disk1 and trying to add second disk :
root@server:~# fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016
On 2018/8/30 5:13 PM, Pierre Couderc wrote:
> Trying to install a RAID1 on a debian stretch, I made some mistake and
> got this, after installing on disk1 and trying to add second disk :
>
>
> root@server:~# fdisk -l
> Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 39070291
Trying to install a RAID1 on a debian stretch, I made some mistake and
got this, after installing on disk1 and trying to add second disk :
root@server:~# fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical
[Forgot to Cc the list]
On 2018/8/29 10:04 PM, Pierre Couderc wrote:
>
> On 08/29/2018 02:52 PM, Qu Wenruo wrote:
>>
>> On 2018/8/29 8:49 PM, Pierre Couderc wrote:
>>> I want to reinstall a RAID1 btrfs system (which is now under debian
>>> stretch, and will b
On 08/29/2018 02:52 PM, Qu Wenruo wrote:
On 2018/8/29 8:49 PM, Pierre Couderc wrote:
I want to reinstall a RAID1 btrfs system (which is now under debian
stretch, and will be reinstalled in stretch).
If you still want to use btrfs, just umount the original fs, and
# mkfs.btrfs -f
On 2018/8/29 8:49 PM, Pierre Couderc wrote:
> I want to reinstall a RAID1 btrfs system (which is now under debian
> stretch, and will be reinstalled in stretch).
If you still want to use btrfs, just umount the original fs, and
# mkfs.btrfs -f
Then a completely new btrfs.
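(Spelled out as a sketch for a two-device raid1; device names are illustrative, and this destroys the old filesystem:)
# umount /mnt
# mkfs.btrfs -f -d raid1 -m raid1 /dev/sda2 /dev/sdb2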
I want to reinstall a RAID1 btrfs system (which is now under debian
stretch, and will be reinstalled in stretch).
How do I correctly "erase" it? Not truly hard-erase it, but so that the old
data does not reappear...
It is not clear in the wiki.
Thanks
PC
Sterling Windmill posted on Mon, 30 Jul 2018 21:06:54 -0400 as excerpted:
> Both drives are identical, Seagate 8TB external drives
Are those the "shingled" SMR drives, normally sold as archive drives and
first commonly available in the 8TB size, and often bought for their
generally better
. Anything else I can collect that
might be helpful in understanding what's happening here?
On Mon, Jul 30, 2018 at 8:56 PM Qu Wenruo wrote:
>
>
> On 2018-07-31 08:43, Sterling Windmill wrote:
> > I am using a two disk raid1 btrfs filesystem spanning two external hard
> >
On 2018-07-31 08:43, Sterling Windmill wrote:
> I am using a two disk raid1 btrfs filesystem spanning two external hard
> drives connected via USB 3.0.
Is there any speed difference between the two devices?
And are these 2 devices under the same USB3.0 root hub or different root
hubs?
I am using a two disk raid1 btrfs filesystem spanning two external hard
drives connected via USB 3.0.
While copying ~6TB of data from this filesystem to local disk via rsync I
am seeing messages like the following in dmesg output:
[ 2213.406267] BTRFS warning (device sdj1): csum failed root 5
On 2018-07-20 14:41, Hugo Mills wrote:
On Fri, Jul 20, 2018 at 09:38:14PM +0300, Andrei Borzenkov wrote:
On 20.07.2018 20:16, Goffredo Baroncelli wrote:
[snip]
Limiting the number of disks per raid, in BTRFS, would be quite simple to implement in the
"chunk allocator"
You mean that currently
On Fri, Jul 20, 2018 at 09:38:14PM +0300, Andrei Borzenkov wrote:
> On 20.07.2018 20:16, Goffredo Baroncelli wrote:
[snip]
> > Limiting the number of disks per raid, in BTRFS, would be quite simple to
> > implement in the "chunk allocator"
> >
>
> You mean that currently RAID5 stripe size is equal
>>>> orthogonal today, Hugo's whole point was that btrfs is theoretically
>>>>>> flexible enough to allow both together and the feature may at some
>>>>>> point be added, so it makes sense to have a layout notation format
>>>>>> flexi
talking about striping plus parity.
What I'm referring to is different though. Just like RAID10 used to be
implemented as RAID0 on top of RAID1, RAID05 is RAID0 on top of RAID5.
That is, you're striping your data across multiple RAID5 arrays instead
of using one big RAID5 array to store it all
enough to allow both together and the feature may at some
>>>>> point be added, so it makes sense to have a layout notation format
>>>>> flexible enough to allow it as well.
>>>>
>>>> When I say orthogonal, it means that these can be combined: i.e. you can
>>
>>> RAID15 and RAID16 are a similar case to RAID51 and RAID61, except they
>>> might actually make sense in BTRFS to provide a backup means of rebuilding
>>> blocks that fail checksum validation if both copies fail.
>> If you need further redundancy, it is easy to implement a p
On Thu, Jul 19, 2018 at 07:47:23AM -0400, Austin S. Hemmelgarn wrote:
> > So this special level will be used for RAID56 for now?
> > Or it will also be possible for metadata usage just like current RAID1?
> >
> > If the latter, the metadata scrub problem will ne
ion,
> > I've added a 4-copy replication, that would allow triple copy raid (that
> > does not have a standardized name).
>
> So this special level will be used for RAID56 for now?
> Or it will also be possible for metadata usage just like current RAID1?
It's a new profile usable i
le striping and mirroring/pairing are
>>>> orthogonal today, Hugo's whole point was that btrfs is theoretically
>>>> flexible enough to allow both together and the feature may at some
>>>> point be added, so it makes sense to have a layout notation format
Hugo Mills wrote:
On Wed, Jul 18, 2018 at 08:39:48AM +, Duncan wrote:
Duncan posted on Wed, 18 Jul 2018 07:20:09 + as excerpted:
Perhaps it's a case of coder's view (no code doing it that way, it's just
a coincidental oddity conditional on equal sizes), vs. sysadmin's view
(code or
, Duncan wrote:
Goffredo Baroncelli posted on Mon, 16 Jul 2018 20:29:46 +0200 as
excerpted:
[...]
When I say orthogonal, it means that these can be combined: i.e. you can have
- striping (RAID0)
- parity (?)
- striping + parity (e.g. RAID5/6)
- mirroring (RAID1)
- mirroring + striping (RAID10
be possible for metadata usage just like current RAID1?
If the latter, the metadata scrub problem will need to be considered more.
For RAID1 with more copies, there is a higher possibility of one or two
devices going missing and then being scrubbed.
For metadata scrub, inlined csum can't ensure it's
say orthogonal, it means that these can be combined: i.e. you can have
- striping (RAID0)
- parity (?)
- striping + parity (e.g. RAID5/6)
- mirroring (RAID1)
- mirroring + striping (RAID10)
However, you can't have mirroring+parity; this means that a notation
where both 'C' ( = number of copies
level will be used for RAID56 for now?
Or it will also be possible for metadata usage just like current RAID1?
If the latter, the metadata scrub problem will need to be considered more.
For RAID1 with more copies, there is a higher possibility of one or two
devices going missing and then being scrubbed.
For met
>>> flexible enough to allow both together and the feature may at some
>>> point be added, so it makes sense to have a layout notation format
>>> flexible enough to allow it as well.
>>
>> When I say orthogonal, it means that these can be combined: i.e. you can
On Wed, Jul 18, 2018 at 08:39:48AM +, Duncan wrote:
> Duncan posted on Wed, 18 Jul 2018 07:20:09 + as excerpted:
>
> >> As implemented in BTRFS, raid1 doesn't have striping.
> >
> > The argument is that because there's only two copies, on multi-device
>
: i.e. you can have
- striping (RAID0)
- parity (?)
- striping + parity (e.g. RAID5/6)
- mirroring (RAID1)
- mirroring + striping (RAID10)
However, you can't have mirroring+parity; this means that a notation
carrying both 'C' ( = number of copies) and 'P' ( = number of parities) is
too verbose.
Yes
On 2018-07-18 04:39, Duncan wrote:
Duncan posted on Wed, 18 Jul 2018 07:20:09 + as excerpted:
As implemented in BTRFS, raid1 doesn't have striping.
The argument is that because there's only two copies, on multi-device
btrfs raid1 with 4+ devices of equal size so chunk allocations tend
Duncan posted on Wed, 18 Jul 2018 07:20:09 + as excerpted:
>> As implemented in BTRFS, raid1 doesn't have striping.
>
> The argument is that because there's only two copies, on multi-device
> btrfs raid1 with 4+ devices of equal size so chunk allocations tend to
> alte
sense to have a layout notation format
>> flexible enough to allow it as well.
>
> When I say orthogonal, it means that these can be combined: i.e. you can have
> - striping (RAID0)
> - parity (?)
> - striping + parity (e.g. RAID5/6)
> - mirroring (RAID1)
> - mirroring + stripi
you can have
- striping (RAID0)
- parity (?)
- striping + parity (e.g. RAID5/6)
- mirroring (RAID1)
- mirroring + striping (RAID10)
However, you can't have mirroring+parity; this means that a notation carrying both
'C' ( = number of copies) and 'P' ( = number of parities) is too verbose.
[...]
>
some point
be added, so it makes sense to have a layout notation format flexible
enough to allow it as well.
In the global context, just to complete things and mostly for others
reading as I feel a bit like a simpleton explaining to the expert here,
just as raid10 is shorthand for raid1+0, aka raid
some aliases) would be much
better for the commoners (such as myself).
...snip... > Which would make the above table look like so:
Old format / My Format / My suggested alias
SINGLE / R0.S0.P0 / SINGLE
DUP / R1.S1.P0 / DUP (or even MIRRORLOCAL1)
RAID0 / R0.Sm.P0 / STRIPE
RAID1 / R1.S0
RAID1 / 2C / MIRROR1
RAID1c3 / 3C / MIRROR2
RAID1c4 / 4C / MIRROR3
RAID10 / 2CmS / STRIPE.MIRROR1
Striping and mirroring/pairing are orthogonal properties; mirror and parity are
mutually exclusive. What about
RAID1 -> MIRROR1
RAID10 -> MIRROR1S
RAID1c3 -> MIRROR
/ SINGLE
> DUP / 2CD / DUP (or even MIRRORLOCAL1)
> RAID0 / 1CmS / STRIPE
> RAID1 / 2C / MIRROR1
> RAID1c3 / 3C / MIRROR2
> RAID1c4 / 4C / MIRROR3
> RAID10 / 2CmS / STRIPE.MIRROR1
Striping and mirroring/pairing are orthogonal properties; mirror and pari
On Fri, Jul 13, 2018 at 08:46:28PM +0200, David Sterba wrote:
[snip]
> An interesting question is the naming of the extended profiles. I picked
> something that can be easily understood but it's not a final proposal.
> Years ago, Hugo proposed a naming scheme that described the
> non-standard raid
be much
better for the commoners (such as myself).
For example:
Old format / New Format / My suggested alias
SINGLE / 1C / SINGLE
DUP / 2CD / DUP (or even MIRRORLOCAL1)
RAID0 / 1CmS / STRIPE
RAID1 / 2C / MIRROR1
RAID1c3 / 3C / MIRROR2
RAID1c4 / 4C / MIRROR3
RAID10
replication for a small bribe.
The new raid profiles are covered by an incompatibility bit, called
extended_raid; the (idealistic) plan is to stuff in as many new
raid-related features as possible. Patch 4/4 mentions the 3- and 4-copy
raid1, configurable stripe length, write hole log and triple
From: Timofey Titovets
Currently the btrfs raid1/10 balancer balances requests to mirrors,
based on pid % number of mirrors.
Make the logic take into account:
- if one of the underlying devices is non-rotational
- queue length of the underlying devices
By default, try the pid % num_mirrors guess, but:
- If one
From: Timofey Titovets
Currently the btrfs raid1/10 balancer balances requests to mirrors,
based on pid % number of mirrors.
Make the logic take into account:
- if one of the underlying devices is non-rotational
- queue length of the underlying devices
By default, try the pid % num_mirrors guess, but:
- If one
Test to make sure that a raid1 device with a missed write reads
good data when reassembled.
Signed-off-by: Anand Jain <anand.j...@oracle.com>
---
This test case fails as of now.
I am sending this to the btrfs ML only, as it depends on the
read_mirror_policy kernel patches which are in the ML.
On 2018/04/25 17:15, Timofey Titovets wrote:
> 2018-04-25 10:54 GMT+03:00 Misono Tomohiro <misono.tomoh...@jp.fujitsu.com>:
>> On 2018/04/25 9:20, Timofey Titovets wrote:
>>> Currently the btrfs raid1/10 balancer balances requests to mirrors,
>>> based on pid % n
2018-04-25 10:54 GMT+03:00 Misono Tomohiro <misono.tomoh...@jp.fujitsu.com>:
> On 2018/04/25 9:20, Timofey Titovets wrote:
>> Currently the btrfs raid1/10 balancer balances requests to mirrors,
>> based on pid % number of mirrors.
>>
>> Make the logic take into account:
>>
On 2018/04/25 9:20, Timofey Titovets wrote:
> Currently the btrfs raid1/10 balancer balances requests to mirrors,
> based on pid % number of mirrors.
>
> Make the logic take into account:
> - if one of the underlying devices is non-rotational
> - queue length of the underlying devices
>
>
Currently the btrfs raid1/10 balancer balances requests to mirrors,
based on pid % number of mirrors.
Make the logic take into account:
- if one of the underlying devices is non-rotational
- queue length of the underlying devices
By default, try the pid % num_mirrors guess, but:
- If one of the mirrors is non-rotational
s/misc-tests/030-missing-device-image/test.sh
> @@ -0,0 +1,57 @@
> +#!/bin/bash
> +# Test that btrfs-image can dump image correctly for missing device (RAID1)
> +#
> +# At least for RAID1, btrfs-image should be able to handle one missing device
> +# without any problem
.sh
b/tests/misc-tests/030-missing-device-image/test.sh
new file mode 100755
index ..b8ae3a950cc9
--- /dev/null
+++ b/tests/misc-tests/030-missing-device-image/test.sh
@@ -0,0 +1,57 @@
+#!/bin/bash
+# Test that btrfs-image can dump image correctly for missing device (RAID1)
+#
+# At
Matthew Hawn posted on Thu, 22 Mar 2018 00:13:38 + as excerpted:
> This is almost definitely a bug in GRUB, but I wanted to get the btrfs
> mailing list opinion first.
>
> Symptoms:
> I have a btrfs raid1 /boot and root filesystem. Ever since I replaced a
> drive, w
This is almost definitely a bug in GRUB, but I wanted to get the btrfs mailing
list opinion first.
Symptoms:
I have a btrfs raid1 /boot and root filesystem. Ever since I replaced a drive,
when I run the grub utilities to create my grub.cfg and install to boot sector,
it only recognizes one
isk your data on bugs
>> that were after all discovered and fixed over a year ago?
>
> It is also missing newly introduced bugs. Right now I'm dealing with
> a btrfs raid1 server that had the fs getting stuck and kernel oopses due
> to a regression:
>
> https://bugzilla.kernel.or
It is also missing newly introduced bugs. Right now I'm dealing with a btrfs
raid1 server that had the fs getting stuck and kernel oopses due to a
regression:
https://bugzilla.kernel.org/show_bug.cgi?id=198861
I had to cherry-pick commit 3be8828fc507cdafe7040a3dcf361a2bcd8e305b and
recompil
Adam Borowski posted on Sun, 11 Mar 2018 18:47:13 +0100 as excerpted:
> On Sun, Mar 11, 2018 at 11:28:08PM +0700, Andreas Hild wrote:
>> Following a physical disk failure of a RAID1 array, I tried to mount
>> the remaining volume of a root partition with "-o degraded". Fo