On 18/12/17 16:31, Austin S. Hemmelgarn wrote:
> On 2017-12-16 14:50, Dark Penguin wrote:
>> Could someone please point me towards some read about how btrfs handles
>> multiple devices? Namely, kicking faulty devices and re-adding them.
>>
>> I've been using btrfs on single devices for a while,
Tomasz Pala posted on Sat, 23 Dec 2017 03:52:47 +0100 as excerpted:
> On Fri, Dec 22, 2017 at 14:04:43 -0700, Chris Murphy wrote:
>
>> I'm pretty sure degraded boot timeout policy is handled by dracut. The
>
> Well, the last time I checked, dracut on a systemd system couldn't even
> generate a systemd-less image.
Tomasz Pala posted on Sat, 23 Dec 2017 05:08:16 +0100 as excerpted:
> On Tue, Dec 19, 2017 at 17:08:28 -0700, Chris Murphy wrote:
>
> Now, if current kernels won't flip a degraded RAID1 to read-only, can I
> safely add "degraded" to the mount options? My primary concern is
> the
>>>
On Tue, Dec 19, 2017 at 17:08:28 -0700, Chris Murphy wrote:
Now, if current kernels won't flip a degraded RAID1 to read-only, can I
safely add "degraded" to the mount options? My primary concern is the
>> [...]
>
> Well, it only does rw once; then the next degraded mount is ro - there are
>
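For concreteness, the option under discussion would go in /etc/fstab roughly
as below. This is only a sketch - the UUID and mount point are placeholders,
and whether it is actually safe is exactly what is being asked here.

    # illustrative only; UUID and mount point are placeholders
    UUID=0123abcd-placeholder  /data  btrfs  defaults,degraded  0  2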
On Fri, Dec 22, 2017 at 14:04:43 -0700, Chris Murphy wrote:
> I'm pretty sure degraded boot timeout policy is handled by dracut. The
Well, last time I've checked dracut on systemd-system couldn't even
generate systemd-less image.
> kernel doesn't just automatically assemble an md array as soon
On Fri, Dec 22, 2017 at 9:05 AM, Tomasz Pala wrote:
> On Thu, Dec 21, 2017 at 07:27:23 -0500, Austin S. Hemmelgarn wrote:
>
>> Also, it's not 'up to the filesystem', it's 'up to the underlying
>> device'. LUKS, LVM, MD, and everything else that's an actual device
>> layer is
On Thu, Dec 21, 2017 at 07:27:23 -0500, Austin S. Hemmelgarn wrote:
> No, it isn't. You can just make the damn mount call with the supplied
> options. If it succeeds, the volume was ready; if it fails, it wasn't.
> It's that simple, and there's absolutely no reason that systemd can't
> just
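A minimal sketch of the approach described above - just attempt the mount
and let the result decide readiness. The device, mount point, and use of the
degraded option are assumptions for illustration, not from the thread.

    # try the mount with the supplied options; success means the volume
    # was ready, failure means it was not
    if mount -o degraded /dev/sda /mnt; then
        echo "mounted; volume was ready"
    else
        echo "mount failed; volume not ready yet"
    fi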
On 2017-12-21 06:44, Andrei Borzenkov wrote:
On Tue, Dec 19, 2017 at 11:47 PM, Austin S. Hemmelgarn
wrote:
On 2017-12-19 15:41, Tomasz Pala wrote:
On Tue, Dec 19, 2017 at 12:35:20 -0700, Chris Murphy wrote:
with a read-only file system. Another reason is the kernel
On Wed, Dec 20, 2017 at 11:07 PM, Chris Murphy wrote:
>
> YaST doesn't have Btrfs raid1 or raid10 options; it also won't do
> encrypted root with Btrfs, because YaST requires LVM for LUKS
> encryption for some weird reason; and it also enforces NOT putting
>
On Wed, Dec 20, 2017 at 1:14 PM, Austin S. Hemmelgarn
wrote:
> On 2017-12-20 15:07, Chris Murphy wrote:
>> There is an irony here:
>>
>> YaST doesn't have Btrfs raid1 or raid10 options; it also won't do
>> encrypted root with Btrfs, because YaST requires LVM to do
On 2017-12-20 15:07, Chris Murphy wrote:
On Wed, Dec 20, 2017 at 1:02 PM, Chris Murphy wrote:
On Wed, Dec 20, 2017 at 9:53 AM, Andrei Borzenkov wrote:
On 19.12.2017 22:47, Chris Murphy wrote:
BTW, doesn't SuSE use btrfs by default? Would you
On Wed, Dec 20, 2017 at 1:02 PM, Chris Murphy wrote:
> On Wed, Dec 20, 2017 at 9:53 AM, Andrei Borzenkov wrote:
>> On 19.12.2017 22:47, Chris Murphy wrote:
>>>
BTW, doesn't SuSE use btrfs by default? Would you expect everyone using
this
On Wed, Dec 20, 2017 at 9:53 AM, Andrei Borzenkov wrote:
> On 19.12.2017 22:47, Chris Murphy wrote:
>>
>>>
>>> BTW, doesn't SuSE use btrfs by default? Would you expect everyone using
>>> this distro to research every component used?
>>
>> As far as I'm aware, only Btrfs single
On Wed, Dec 20, 2017 at 1:34 AM, Tomasz Pala wrote:
> On Tue, Dec 19, 2017 at 16:59:39 -0700, Chris Murphy wrote:
>
>>> Something like this? I hit this problem a few months ago; my solution
>>> was accepted upstream:
>>>
Austin S. Hemmelgarn posted on Wed, 20 Dec 2017 08:33:03 -0500 as
excerpted:
>> The obvious answer is: do it via kernel command line, just like mdadm
>> does:
>> rootflags=device=/dev/sda,device=/dev/sdb
>> rootflags=device=/dev/sda,device=missing
>>
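As a sketch of how this proposal would look on an actual boot line, e.g. via
/etc/default/grub. Note that device= is a real btrfs mount option, but
device=missing is only the suggestion made above, not an implemented
feature; the device names are placeholders.

    # hypothetical sketch; device names are placeholders
    GRUB_CMDLINE_LINUX="rootflags=device=/dev/sda,device=/dev/sdb"
    # and, under the proposal, the degraded form would be:
    # GRUB_CMDLINE_LINUX="rootflags=device=/dev/sda,device=missing"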
On 2017-12-20 11:53, Andrei Borzenkov wrote:
On 19.12.2017 22:47, Chris Murphy wrote:
BTW, doesn't SuSE use btrfs by default? Would you expect everyone using
this distro to research every component used?
As far as I'm aware, only Btrfs single device stuff is "supported".
The multiple device
On 19.12.2017 22:47, Chris Murphy wrote:
>
>>
>> BTW, doesn't SuSE use btrfs by default? Would you expect everyone using
>> this distro to research every component used?
>
> As far as I'm aware, only Btrfs single device stuff is "supported".
> The multiple device stuff is definitely not supported
On 2017-12-19 17:23, Tomasz Pala wrote:
On Tue, Dec 19, 2017 at 15:47:03 -0500, Austin S. Hemmelgarn wrote:
Something like this? I hit this problem a few months ago; my solution was
accepted upstream:
https://github.com/systemd/systemd/commit/0e8856d25ab71764a279c2377ae593c0f2460d8f
The rationale is in
On 2017-12-19 18:53, Chris Murphy wrote:
On Tue, Dec 19, 2017 at 1:11 PM, Austin S. Hemmelgarn
wrote:
On 2017-12-19 12:56, Tomasz Pala wrote:
BTRFS lacks all of these - there are major functional changes in current
kernels and it reaches far beyond LTS. All the
On 2017-12-19 16:58, Tomasz Pala wrote:
On Tue, Dec 19, 2017 at 15:11:22 -0500, Austin S. Hemmelgarn wrote:
Except the systems running on those ancient kernel versions are not
necessarily using a recent version of btrfs-progs.
Still much easier to update userspace tools than the kernel
Errata:
On Wed, Dec 20, 2017 at 09:34:48 +0100, Tomasz Pala wrote:
> /dev/sda -> 'not ready'
> /dev/sdb -> 'not ready'
> /dev/sdc -> 'ready', triggers /dev/sda -> 'not ready' and /dev/sdb ->
> still 'not ready'
> /dev/sdc -> kernel says 'ready', triggers /dev/sda -> 'ready' and /dev/sdb ->
>
On Tue, Dec 19, 2017 at 16:59:39 -0700, Chris Murphy wrote:
>> Something like this? I hit this problem a few months ago; my solution
>> was accepted upstream:
>> https://github.com/systemd/systemd/commit/0e8856d25ab71764a279c2377ae593c0f2460d8f
>
> I can't parse this commit. In particular I can't tell
On Tue, Dec 19, 2017 at 2:17 PM, Tomasz Pala wrote:
> On Tue, Dec 19, 2017 at 12:47:33 -0700, Chris Murphy wrote:
>
>> The more verbose man pages are, the more likely it is that information
>> gets stale. We already see this with the Btrfs Wiki. So are you
>
> True. The same
On Tue, Dec 19, 2017 at 1:41 PM, Tomasz Pala wrote:
> On Tue, Dec 19, 2017 at 12:35:20 -0700, Chris Murphy wrote:
>
>> with a read-only file system. Another reason is the kernel code and
>> udev rule for device "readiness" means the volume is not "ready" until
>> all member
On Tue, Dec 19, 2017 at 1:11 PM, Austin S. Hemmelgarn
wrote:
> On 2017-12-19 12:56, Tomasz Pala wrote:
>> BTRFS lacks all of these - there are major functional changes in current
>> kernels and it reaches far beyond LTS. All the knowledge YOU have here,
>> on this mailing list,
On Tue, Dec 19, 2017 at 15:47:03 -0500, Austin S. Hemmelgarn wrote:
>> Something like this? I hit this problem a few months ago; my solution
>> was accepted upstream:
>> https://github.com/systemd/systemd/commit/0e8856d25ab71764a279c2377ae593c0f2460d8f
>>
>> The rationale is in the referenced ticket; udev would
On Tue, Dec 19, 2017 at 15:11:22 -0500, Austin S. Hemmelgarn wrote:
> Except the systems running on those ancient kernel versions are not
> necessarily using a recent version of btrfs-progs.
Still much easier to update userspace tools than the kernel (consider
binary drivers for various
On Tue, Dec 19, 2017 at 12:47:33 -0700, Chris Murphy wrote:
> The more verbose man pages are, the more likely it is that information
> gets stale. We already see this with the Btrfs Wiki. So are you
True. The same applies to git documentation (3rd paragraph):
On 2017-12-19 15:41, Tomasz Pala wrote:
On Tue, Dec 19, 2017 at 12:35:20 -0700, Chris Murphy wrote:
with a read-only file system. Another reason is the kernel code and
udev rule for device "readiness" means the volume is not "ready" until
all member devices are present. And while the volume is
On Tue, Dec 19, 2017 at 12:35:20 -0700, Chris Murphy wrote:
> with a read-only file system. Another reason is the kernel code and
> udev rule for device "readiness" means the volume is not "ready" until
> all member devices are present. And while the volume is not "ready"
> systemd will not even
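The rule referred to here is systemd's 64-btrfs.rules; approximately the
following (quoted from memory, so treat it as a sketch rather than a
verbatim copy):

    # 64-btrfs.rules (sketch): hold btrfs members back until the kernel
    # reports that every device of the filesystem is present
    SUBSYSTEM!="block", GOTO="btrfs_end"
    ACTION=="remove", GOTO="btrfs_end"
    ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"

    # ask the kernel whether this filesystem is complete
    IMPORT{builtin}="btrfs ready $devnode"

    # if not complete, keep the device invisible to systemd's .device units
    ENV{ID_BTRFS_READY}=="0", ENV{SYSTEMD_READY}="0"

    LABEL="btrfs_end"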
On Tue, Dec 19, 2017 at 10:31:40 -0800, George Mitchell wrote:
> I have significant experience as a user of raid1. I spent years using
> software raid1 and then more years using hardware (3ware) raid1 and now
> around 3 years using btrfs raid1. I have not found btrfs raid1 to be
> less
On 2017-12-19 12:56, Tomasz Pala wrote:
On Tue, Dec 19, 2017 at 11:35:02 -0500, Austin S. Hemmelgarn wrote:
2. printed on screen when creating/converting the "RAID1" profile (by the btrfs tools),
I don't agree on this one. It is in no way unreasonable to expect that
someone has read the
On Tue, Dec 19, 2017 at 10:56 AM, Tomasz Pala wrote:
> On Tue, Dec 19, 2017 at 11:35:02 -0500, Austin S. Hemmelgarn wrote:
>
>>> 2. printed on screen when creating/converting the "RAID1" profile (by
>>> the btrfs tools),
>> I don't agree on this one. It is in no way unreasonable to
On Tue, Dec 19, 2017 at 7:46 AM, Tomasz Pala wrote:
> Secondly - permanent failures are not handled "just
> fine", as there is (1) no automatic degraded mount, so the machine
> won't reboot properly, and (2) the r/w degraded mount is[*] a one-timer.
> Again, this should be:
One
On 12/19/2017 06:46 AM, Tomasz Pala wrote:
On Tue, Dec 19, 2017 at 07:25:49 -0500, Austin S. Hemmelgarn wrote:
Well, RAID1+ is all about failing hardware.
About catastrophically failing hardware, not intermittent failure.
It shouldn't matter - as long as a disk that has failed once is kicked
On Tue, Dec 19, 2017 at 11:35:02 -0500, Austin S. Hemmelgarn wrote:
>> 2. printed on screen when creating/converting the "RAID1" profile (by
>> the btrfs tools),
> I don't agree on this one. It is in no way unreasonable to expect that
> someone has read the documentation _before_ trying to use
On 2017-12-19 09:46, Tomasz Pala wrote:
On Tue, Dec 19, 2017 at 07:25:49 -0500, Austin S. Hemmelgarn wrote:
Well, RAID1+ is all about failing hardware.
About catastrophically failing hardware, not intermittent failure.
It shouldn't matter - as long as a disk that has failed once is kicked out
On Tue, Dec 19, 2017 at 07:25:49 -0500, Austin S. Hemmelgarn wrote:
>> Well, RAID1+ is all about failing hardware.
> About catastrophically failing hardware, not intermittent failure.
It shouldn't matter - as long as a disk that has failed once is kicked out
of the array *if possible*. Or
[ ... ]
> The advantage of writing single chunks when degraded is in
> the case where a missing device returns (is readded,
> intact). Catching up that device with the first drive is a
> manual but simple invocation of 'btrfs balance start
> -dconvert=raid1,soft -mconvert=raid1,soft'. The
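A sketch of that manual catch-up sequence; the device name and mount point
are placeholders:

    # after the previously missing member has reappeared and been re-registered
    btrfs device scan
    mount /dev/sda /mnt
    # rewrite only chunks that are not yet raid1; 'soft' skips chunks that
    # already have the target profile, so the catch-up stays cheap
    btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt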
On Tue, Dec 19, 2017 at 1:28 AM, Chris Murphy wrote:
> On Mon, Dec 18, 2017 at 1:49 AM, Anand Jain wrote:
>
>> Agreed. IMO degraded-raid1-single-chunk is an accidental feature
>> caused by [1], which we should revert, since:
>> - balance
On 2017-12-18 17:01, Peter Grandi wrote:
The fact is, the only cases where this is really an issue are
when you've either got intermittently bad hardware, or are
dealing with external
Well, RAID1+ is all about failing hardware.
storage devices. For the majority of people who are using
On Mon, Dec 18, 2017 at 03:28:14PM -0700, Chris Murphy wrote:
> On Mon, Dec 18, 2017 at 1:49 AM, Anand Jain wrote:
> > Agreed. IMO degraded-raid1-single-chunk is an accidental feature
> > caused by [1], which we should revert, since:
> > - balance (to raid1
On 2017-12-18 14:43, Tomasz Pala wrote:
On Mon, Dec 18, 2017 at 08:06:57 -0500, Austin S. Hemmelgarn wrote:
The fact is, the only cases where this is really an issue are when you've
either got intermittently bad hardware, or are dealing with external
Well, RAID1+ is all about failing
On Mon, Dec 18, 2017 at 3:28 PM, Chris Murphy wrote:
> On Mon, Dec 18, 2017 at 1:49 AM, Anand Jain wrote:
>
>> Agreed. IMO degraded-raid1-single-chunk is an accidental feature
>> caused by [1], which we should revert, since:
>> - balance
On Mon, Dec 18, 2017 at 1:49 AM, Anand Jain wrote:
> Agreed. IMO degraded-raid1-single-chunk is an accidental feature
> caused by [1], which we should revert, since:
> - balance (to raid1 chunk) may fail if the FS is near full
> - recovery (to raid1 chunk) will
>> The fact is, the only cases where this is really an issue are
>> when you've either got intermittently bad hardware, or are
>> dealing with external
> Well, RAID1+ is all about failing hardware.
>> storage devices. For the majority of people who are using
>> multi-device setups, the
On Mon, Dec 18, 2017 at 08:06:57 -0500, Austin S. Hemmelgarn wrote:
> The fact is, the only cases where this is really an issue are when you've
> either got intermittently bad hardware, or are dealing with external
Well, RAID1+ is all about failing hardware.
> storage devices. For the
what was intended is that it should be able to detect a previously
missing member block device becoming available again as a different
device inode, which is currently very dangerous in some vital
situations.
Peter, what's the dangerous part here?
If a device disappears, the patch [4] will completely
On 2017-12-16 14:50, Dark Penguin wrote:
Could someone please point me towards some read about how btrfs handles
multiple devices? Namely, kicking faulty devices and re-adding them.
I've been using btrfs on single devices for a while, but now I want to
start using it in raid1 mode. I booted
On 2017-12-17 10:48, Peter Grandi wrote:
"Duncan"'s reply is slightly optimistic in parts, so some
further information...
[ ... ]
Basically, at this point btrfs doesn't have "dynamic" device
handling. That is, if a device disappears, it doesn't know
it.
That's just the consequence of what
On 18.12.2017 10:49, Anand Jain wrote:
>
>
>> Put another way, the multi-device design is/was based on the
>> demented idea that block-devices that are missing are/should be
>> "remove"d, so that a 2-device volume with a 'raid1' profile
>> becomes a 1-device volume with a 'single'/'dup'
>> I haven't seen that, but I doubt that it is the radical
>> redesign of the multi-device layer of Btrfs that is needed to
>> give it operational semantics similar to those of MD RAID,
>> and that I have vaguely described previously.
> I agree that the btrfs volume manager is incomplete in view of
>
formerly missing device - a very big penalty, because the whole array
has to be rebalanced to catch it up for what might be only a few
minutes of missing time.
For raid1, the CLI at [1] will pick only the new chunks.
[1]
btrfs bal start -dprofiles=single -mprofiles=single
Thanks, Anand
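A sketch of how that filter narrows the balance to just the chunks created
while degraded; combining profiles= with convert= is my reading of the
intended usage, and the mount point is a placeholder:

    # select only 'single' chunks (written while degraded) and convert
    # just those back to raid1
    btrfs balance start -dconvert=raid1,profiles=single \
                        -mconvert=raid1,profiles=single /mnt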
Put another way, the multi-device design is/was based on the
demented idea that block-devices that are missing are/should be
"remove"d, so that a 2-device volume with a 'raid1' profile
becomes a 1-device volume with a 'single'/'dup' profile, and not
a 2-device volume with a missing
Nice status update on the btrfs volume manager, thanks.
Below I have added the names of the patches (on the ML / WIP) addressing
the current limitations.
On 12/17/2017 07:58 PM, Duncan wrote:
Dark Penguin posted on Sat, 16 Dec 2017 22:50:33 +0300 as excerpted:
Could someone please point me towards
On 2017-12-17 03:50, Dark Penguin wrote:
> Could someone please point me towards some read about how btrfs handles
> multiple devices? Namely, kicking faulty devices and re-adding them.
>
> I've been using btrfs on single devices for a while, but now I want to
> start using it in raid1 mode. I
On Sun, Dec 17, 2017 at 8:48 AM, Peter Grandi
wrote:
> "Duncan"'s reply is slightly optimistic in parts, so some
> further information...
>> and it should detect a device coming back as a different
>> device too.
>
> That is disagreeable because of poor terminology:
"Duncan"'s reply is slightly optimistic in parts, so some
further information...
[ ... ]
> Basically, at this point btrfs doesn't have "dynamic" device
> handling. That is, if a device disappears, it doesn't know
> it.
That's just the consequence of what is a completely broken
conceptual
Dark Penguin posted on Sat, 16 Dec 2017 22:50:33 +0300 as excerpted:
> Could someone please point me towards some read about how btrfs handles
> multiple devices? Namely, kicking faulty devices and re-adding them.
>
> I've been using btrfs on single devices for a while, but now I want to
> start