Re: How to erase a RAID1 (+++)?

2018-08-31 Thread Duncan
Alberto Bursi posted on Fri, 31 Aug 2018 14:54:46 +0000 as excerpted:

> I just keep around a USB drive with a full Linux system on it, to act as
> "recovery". If the btrfs raid fails I boot into that and I can do
> maintenance with a full graphical interface and internet access so I can
> google things.

I do something very similar, except my "recovery boot" is my backup (with 
normally two levels of backup/recovery available for root, three for 
some things).

I've actually gone so far as to have /etc/fstab be a symlink to one of 
several files, depending on what version of root vs. the off-root 
filesystems I'm booting, with a set of modular files that get assembled 
by scripts to build the fstabs as appropriate.  So updating fstab is a 
process of updating the modules, then running the scripts to create the 
actual fstabs, and after I update a root backup the last step is changing 
the symlink to point to the appropriate fstab for that backup, so it's 
correct if I end up booting from it.
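
A rough sketch of that kind of arrangement (the file and directory names 
here are hypothetical, just to illustrate the idea):

# fstab fragments live in a directory and get concatenated per target
cat /etc/fstab.d/common /etc/fstab.d/root-working > /etc/fstab.working
cat /etc/fstab.d/common /etc/fstab.d/root-backup1 > /etc/fstab.backup1

# after refreshing a backup root, repoint the symlink inside that backup
# (mounted here at /mnt/backup1) so it mounts the right things if booted
ln -sf fstab.backup1 /mnt/backup1/etc/fstab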

Meanwhile, each root, working and two backups, is its own set of two 
device partitions in btrfs raid1 mode.  (One set of backups is on 
separate physical devices, covering the device death scenario, the other 
is on different partitions on the same, newer and larger pair of physical 
devices as the working set, so it won't cover device death but still 
covers fat-fingering, filesystem fubaring, bad upgrades, etc.)

/boot is separate and there's four of those (working and three backups), 
one each on each device of the two physical pairs, with the bios able to 
point to any of the four.  I run grub2, so once the bios loads that, I 
can interactively load kernels from any of the other three /boots and 
choose to boot any of the three roots.
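
From the grub shell that looks something like this (disk, partition and 
kernel file names below are only examples):

grub> ls                          # see which disks/partitions grub found
grub> set root=(hd1,msdos1)       # pick one of the other /boot partitions
grub> linux /vmlinuz-4.18.5 root=/dev/sda5
grub> boot                        # no initrd line needed here, see below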

And I build my own kernels, with an initrd attached as an initramfs to 
each, and test that they boot.  So selecting a kernel by definition 
selects its attached initramfs as well, meaning the initr*s are backed up 
and selected with the kernels.
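
That attachment is just the kernel's own built-in initramfs support; the 
relevant .config bits look roughly like this (the source path is of 
course only an example):

CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE="/usr/src/initramfs"
# the build embeds that directory (or cpio list) into the kernel image,
# so backing up the kernel automatically carries its initramfs with it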

(As I said earlier it'd sure be nice to be able to do away with the 
initr*s again.  I was actually thinking about testing that today, which 
was supposed to be a day off, but got called in to work, so the test will 
have to wait once again...)

What's nice about all that is that just as you said, each recovery/backup 
is a snapshot of the working system at the time I took the backup, so 
it's not a limited recovery boot at all, it has the same access to tools, 
manpages, net, X/plasma, browsers, etc, that my normal system does, 
because it /is/ my normal system from whenever I took the backup.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



Re: How to erase a RAID1 (+++)?

2018-08-31 Thread Pierre Couderc



On 08/31/2018 04:54 PM, Alberto Bursi wrote:

On 8/31/2018 8:53 AM, Pierre Couderc wrote:

OK, I have understood the message... I was planning to do that, as you said,
"semi-routinely", and I understand btrfs is not ready enough for that yet,
and I am very, very far from being a specialist like you.
So, I shall mount my RAID1 in a very standard way, and I shall expect the
disaster, hoping it does not occur...
Now, I shall try to absorb all that...

Thank you very much !


I just keep around a USB drive with a full Linux system on it, to act as
"recovery". If the btrfs raid fails I boot into that and I can do
maintenance with a full graphical interface and internet access so I can
google things.

Of course, on a home server you can't do that without some automation
that switches the boot device after a number of boot failures of the
main OS.

Whereas if your server has a BMC (lights-out management), you can switch
the boot device through that.

-Alberto
Thank you Alberto, yes, I have some other computers that can react in case 
of failure.




Re: How to erase a RAID1 (+++)?

2018-08-31 Thread Alberto Bursi

On 8/31/2018 8:53 AM, Pierre Couderc wrote:
>
> OK, I have understood the message... I was planning to do that, as you 
> said, "semi-routinely", and I understand btrfs is not ready enough for 
> that yet, and I am very, very far from being a specialist like you.
> So, I shall mount my RAID1 in a very standard way, and I shall expect 
> the disaster, hoping it does not occur...
> Now, I shall try to absorb all that...
>
> Thank you very much !
>

I just keep around a USB drive with a full Linux system on it, to act as 
"recovery". If the btrfs raid fails I boot into that and I can do 
maintenance with a full graphical interface and internet access so I can 
google things.

Of course, on a home server you can't do that without some automation 
that switches the boot device after a number of boot failures of the 
main OS.

Whereas if your server has a BMC (lights-out management), you can switch 
the boot device through that.
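
A crude approximation with plain GRUB is its fallback variable, e.g. in 
/boot/grub/custom.cfg or an /etc/grub.d/40_custom fragment (entry numbers 
and devices below are made up, and this only covers failures GRUB itself 
can detect, such as a missing or unreadable kernel; a crash after the 
kernel starts needs real boot counting on top):

set default=0      # normal entry
set fallback=1     # entry to try if the default fails to load
menuentry "rescue system on USB stick" {
    set root=(hd2,msdos1)
    linux /vmlinuz root=/dev/sdc1 ro
    initrd /initrd.img
}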

-Alberto



Re: How to erase a RAID1 (+++)?

2018-08-31 Thread Pierre Couderc




On 08/31/2018 04:29 AM, Duncan wrote:

Chris Murphy posted on Thu, 30 Aug 2018 11:08:28 -0600 as excerpted:


My purpose is a simple RAID1 main fs, with bootable flag on the 2 disks
in order to start in degraded mode

Good luck with this. The Btrfs archives are full of various limitations
of Btrfs raid1. There is no automatic degraded mount for Btrfs. And if
you persistently ask for degraded mount, you run the risk of other
problems if there's merely a delayed discovery of one of the devices.
Once a Btrfs volume is degraded, it does not automatically resume normal
operation just because the formerly missing device becomes available.

So... this is flat out not suitable for use cases where you need
unattended raid1 degraded boot.

Agreeing in general and adding some detail...

1) Are you intending to use an initr*?  I'm not sure the current status
(I actually need to test again for myself), but at least in the past,
booting a btrfs raid1 rootfs required an initr*, and I have and use one
here, for that purpose alone (until switching to btrfs raid1 root, I went
initr*-less, and would prefer that again, due to the complications of
maintaining an initr*).

The base problem is that with raid1 (or other forms of multi-device
btrfs, but it happens to be raid1 that's in question for both you and me)
the filesystem needs multiple devices to complete the filesystem, and the
kernel's root= parameter takes only one.  When mounting after userspace
is up, a btrfs device scan is normally run (often automatically by udev)
before the mount; that lets btrfs in the kernel track which devices belong
to which filesystems, so pointing to just one of the devices is enough,
because the kernel then knows which filesystem is intended and can
match up the others that go with it from the earlier scan.

Now there's a btrfs mount option, device=/dev/*, that can be provided
more than once for additional devices, that can /normally/ be used to
tell the kernel what specific devices to use, bypassing the need for
btrfs device scan, and in /theory/, passing that like other mount options
in the kernel commandline via rootflags= /should/ "just work".

But for reasons I as a btrfs user (not dev, and definitely not kernel or
btrfs dev) don't fully understand, passing device= via rootflags= is, or
at least was, broken, so properly mounting a multi-device btrfs required
(and may still require) userspace, thus for a multi-device btrfs rootfs,
an initr*.

So direct-booting to a multi-device btrfs rootfs didn't normally work.
It would if you passed rootflags=degraded (at least with a two-device
raid1 so the one device passed in root= contained one copy of
everything), but then it was unclear if the additional device was
successfully added to the raid1 later, or not.  And with no automatic
sync and bringing back to undegraded status, it was a risk I didn't want
to take.  So unfortunately, initr* it was!

But I originally tested that when I set up my own btrfs raid1 rootfs very
long ago in kernel and btrfs terms, kernel 3.6 or so IIRC, and I've not
/seen/ anything definitive on-list to suggest rootflags=device= is
unbroken now (I asked recently and got an affirmative reply, but I asked
for clarification and haven't seen it, tho perhaps it's there and I've
not read it yet), so perhaps I missed it.  And I've not retested lately,
tho I really should, since the only real way to know is to try it for
myself, and it'd definitely be nice to be direct-booting without having
to bother with an initr* again.

2) As both Chris and I alluded to, unlike say mdraid, btrfs doesn't (yet)
have an automatic mechanism to re-sync and "undegrade" after having been
mounted degraded,rw.  A btrfs scrub can be run to re-sync raid1 chunks,
but single chunks may have been added while in the degraded state as
well, and those need a balance convert to raid1 mode, before the
filesystem and the data on it can be considered reliably able to withstand
device loss once again.

In fact, while the problem has been fixed now, for quite a while if the
filesystem was mounted degraded,rw, you often had exactly that one mount
to fix the problem, as new chunks would be written in single mode, and
after that the filesystem would refuse to mount writable,degraded and
would only let you mount degraded,ro, which would let you get data off it
but not let you fix the problem.  Word to the wise if you're planning on
running stable Debian kernels (which tend to be older), or even just
trying to use them for recovery if you need to!  (The fix was to have the
mount check whether at least one copy of every chunk was available and
allow rw mounting if so, instead of simply assuming that any single-mode
chunks at all meant some wouldn't be available on a multi-device
filesystem with a device missing, thus forcing read-only mounting, as it
used to do.)

3) If a btrfs raid1 is mounted degraded,rw with one device missing, then
mounted again degraded,rw, with a different device 

Re: How to erase a RAID1 (+++)?

2018-08-31 Thread Pierre Couderc




On 08/30/2018 07:08 PM, Chris Murphy wrote:

On Thu, Aug 30, 2018 at 3:13 AM, Pierre Couderc  wrote:

Trying to install a RAID1 on a debian stretch, I made some mistake and got
this, after installing on disk1 and trying to add second disk  :


root@server:~# fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2a799300

Device Boot StartEndSectors  Size Id Type
/dev/sda1  * 2048 3907028991 3907026944  1.8T 83 Linux


Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9770f6fa

Device Boot StartEndSectors  Size Id Type
/dev/sdb1  * 2048 3907029167 3907027120  1.8T  5 Extended


Extended partition type is not a problem if you're using GRUB as the
bootloader; other bootloaders may not like this. Strictly speaking the
type code 0x05 is incorrect, but GRUB ignores the type code, as does the
kernel. GRUB also ignores the active bit (boot flag).




And :

root@server:~# btrfs fi show
Label: none  uuid: eed65d24-6501-4991-94bd-6c3baf2af1ed
 Total devices 2 FS bytes used 1.10GiB
 devid1 size 1.82TiB used 4.02GiB path /dev/sda1
 devid2 size 1.00KiB used 0.00B path /dev/sdb1

That's odd; and I know you've moved on from this problem but I would
have liked to see the super for /dev/sdb1 and also the installer log
for what commands were used for partitioning, including mkfs and
device add commands.

For what it's worth, 'btrfs dev add' formats the device being added,
it does not need to be formatted in advance, and also it resizes the
file system properly.
Thank you; in fact my system seems more and more broken, so I cannot go 
further...

For example, a simple df gives me an IO error.
I would prefer to reinstall the whole thing.
But to avoid the same problems, is there a howto somewhere on installing a 
basic RAID1 btrfs Debian stretch system?
OK, I install stretch on one disk, and then how do I "format" and add the 
second disk?




My purpose is a simple RAID1 main fs, with bootable flag on the 2 disks in
order to start in degraded mode

Good luck with this. The Btrfs archives are full of various
limitations of Btrfs raid1. There is no automatic degraded mount for
Btrfs. And if you persistently ask for degraded mount, you run the
risk of other problems if there's merely a delayed discovery of one of
the devices. Once a Btrfs volume is degraded, it does not
automatically resume normal operation just because the formerly
missing device becomes available.

So... this is flat out not suitable for use cases where you need
unattended raid1 degraded boot.

Well, I understand the lesson that there is currently no hope of booting 
a degraded system...

Thank you very much


Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Duncan
Chris Murphy posted on Thu, 30 Aug 2018 11:08:28 -0600 as excerpted:

>> My purpose is a simple RAID1 main fs, with bootable flag on the 2 disks
>> in order to start in degraded mode
> 
> Good luck with this. The Btrfs archives are full of various limitations
> of Btrfs raid1. There is no automatic degraded mount for Btrfs. And if
> you persistently ask for degraded mount, you run the risk of other
> problems if there's merely a delayed discovery of one of the devices.
> Once a Btrfs volume is degraded, it does not automatically resume normal
> operation just because the formerly missing device becomes available.
> 
> So... this is flat out not suitable for use cases where you need
> unattended raid1 degraded boot.

Agreeing in general and adding some detail...

1) Are you intending to use an initr*?  I'm not sure the current status 
(I actually need to test again for myself), but at least in the past, 
booting a btrfs raid1 rootfs required an initr*, and I have and use one 
here, for that purpose alone (until switching to btrfs raid1 root, I went 
initr*-less, and would prefer that again, due to the complications of 
maintaining an initr*).

The base problem is that with raid1 (or other forms of multi-device 
btrfs, but it happens to be raid1 that's in question for both you and me) 
the filesystem needs multiple devices to complete the filesystem, and the 
kernel's root= parameter takes only one.  When mounting after userspace 
is up, a btrfs device scan is normally run (often automatically by udev) 
before the mount; that lets btrfs in the kernel track which devices belong 
to which filesystems, so pointing to just one of the devices is enough, 
because the kernel then knows which filesystem is intended and can 
match up the others that go with it from the earlier scan.
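
From a normal booted system (or a rescue environment) that sequence is 
simply the following (mount point and device names are just examples):

# register all btrfs member devices with the kernel (udev usually does this)
btrfs device scan
# after the scan, naming any one member is enough to mount the whole raid1
mount /dev/sda1 /mnt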

Now there's a btrfs mount option, device=/dev/*, that can be provided 
more than once for additional devices, that can /normally/ be used to 
tell the kernel what specific devices to use, bypassing the need for 
btrfs device scan, and in /theory/, passing that like other mount options 
in the kernel commandline via rootflags= /should/ "just work".
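
In other words, the direct-boot kernel command line that in theory 
/should/ work would look something like this (devices are only examples):

root=/dev/sda1 rootflags=device=/dev/sda1,device=/dev/sdb1 ro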

But for reasons I as a btrfs user (not dev, and definitely not kernel or 
btrfs dev) don't fully understand, passing device= via rootflags= is, or 
at least was, broken, so properly mounting a multi-device btrfs required 
(and may still require) userspace, thus for a multi-device btrfs rootfs, 
an initr*.

So direct-booting to a multi-device btrfs rootfs didn't normally work.  
It would if you passed rootflags=degraded (at least with a two-device 
raid1 so the one device passed in root= contained one copy of 
everything), but then it was unclear if the additional device was 
successfully added to the raid1 later, or not.  And with no automatic 
sync and bringing back to undegraded status, it was a risk I didn't want 
to take.  So unfortunately, initr* it was!

But I originally tested that when I set up my own btrfs raid1 rootfs very 
long ago in kernel and btrfs terms, kernel 3.6 or so IIRC, and I've not 
/seen/ anything definitive on-list to suggest rootflags=device= is 
unbroken now (I asked recently and got an affirmative reply, but I asked 
for clarification and haven't seen it, tho perhaps it's there and I've 
not read it yet), so perhaps I missed it.  And I've not retested lately, 
tho I really should, since the only real way to know is to try it for 
myself, and it'd definitely be nice to be direct-booting without having 
to bother with an initr* again.

2) As both Chris and I alluded to, unlike say mdraid, btrfs doesn't (yet) 
have an automatic mechanism to re-sync and "undegrade" after having been 
mounted degraded,rw.  A btrfs scrub can be run to re-sync raid1 chunks, 
but single chunks may have been added while in the degraded state as 
well, and those need a balance convert to raid1 mode, before the 
filesystem and the data on it can be considered reliably able to withstand 
device loss once again.
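
Concretely, once the filesystem is mounted with all devices present 
again, the manual re-sync is along these lines (the mount point is just 
an example):

# rewrite/repair the copies of existing raid1 chunks
btrfs scrub start -B /mnt
# convert anything written as single while degraded back to raid1;
# the "soft" filter skips chunks already in the target profile
btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt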

In fact, while the problem has been fixed now, for quite a while if the 
filesystem was mounted degraded,rw, you often had exactly that one mount 
to fix the problem, as new chunks would be written in single mode, and 
after that the filesystem would refuse to mount writable,degraded and 
would only let you mount degraded,ro, which would let you get data off it 
but not let you fix the problem.  Word to the wise if you're planning on 
running stable Debian kernels (which tend to be older), or even just 
trying to use them for recovery if you need to!  (The fix was to have the 
mount check whether at least one copy of every chunk was available and 
allow rw mounting if so, instead of simply assuming that any single-mode 
chunks at all meant some wouldn't be available on a multi-device 
filesystem with a device missing, thus forcing read-only mounting, as it 
used to do.)

3) If a btrfs raid1 is mounted degraded,rw with one device missing, then 
mounted again 

Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Chris Murphy
And also, I'll argue this might have been a btrfs-progs bug as well,
depending on what version was used and the command. Neither mkfs nor dev
add should be able to use a type code 0x05 partition. At least libblkid
correctly shows that it's 1KiB in size, so really Btrfs should not
succeed at adding this device, since it can't put any of the supers in
the correct location.

[chris@f28h ~]$ sudo fdisk -l /dev/loop0
Disk /dev/loop0: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x7e255cce

Device   Boot  StartEnd Sectors  Size Id Type
/dev/loop0p12048 206847  204800  100M 83 Linux
/dev/loop0p2  206848 411647  204800  100M 83 Linux
/dev/loop0p3  411648 616447  204800  100M 83 Linux
/dev/loop0p4  616448 821247  204800  100M  5 Extended
/dev/loop0p5  618496 821247  202752   99M 83 Linux

[chris@f28h ~]$ sudo kpartx -a /dev/loop0
[chris@f28h ~]$ lsblk
NAMEMAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0 7:00 1G  0 loop
├─loop0p1   253:10   100M  0 part
├─loop0p2   253:20   100M  0 part
├─loop0p3   253:30   100M  0 part
├─loop0p4   253:40 1K  0 part
└─loop0p5   253:5099M  0 part

[chris@f28h ~]$ sudo mkfs.btrfs /dev/loop0p4
btrfs-progs v4.17.1
See http://btrfs.wiki.kernel.org for more information.

probe of /dev/loop0p4 failed, cannot detect existing filesystem.
ERROR: use the -f option to force overwrite of /dev/loop0p4
[chris@f28h ~]$ sudo mkfs.btrfs /dev/loop0p4 -f
btrfs-progs v4.17.1
See http://btrfs.wiki.kernel.org for more information.

ERROR: mount check: cannot open /dev/loop0p4: No such file or directory
ERROR: cannot check mount status of /dev/loop0p4: No such file or directory
[chris@f28h ~]$


I guess that's a good sign in this case?


Chris Murphy


Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Chris Murphy
On Thu, Aug 30, 2018 at 9:21 AM, Alberto Bursi  wrote:
>
> On 8/30/2018 11:13 AM, Pierre Couderc wrote:
>> Trying to install a RAID1 on a debian stretch, I made some mistake and
>> got this, after installing on disk1 and trying to add second disk :
>>
>>
>> root@server:~# fdisk -l
>> Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
>> Units: sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disklabel type: dos
>> Disk identifier: 0x2a799300
>>
>> Device Boot StartEndSectors  Size Id Type
>> /dev/sda1  * 2048 3907028991 3907026944  1.8T 83 Linux
>>
>>
>> Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
>> Units: sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disklabel type: dos
>> Disk identifier: 0x9770f6fa
>>
>> Device Boot StartEndSectors  Size Id Type
>> /dev/sdb1  * 2048 3907029167 3907027120  1.8T  5 Extended
>>
>>
>> And :
>>
>> root@server:~# btrfs fi show
>> Label: none  uuid: eed65d24-6501-4991-94bd-6c3baf2af1ed
>> Total devices 2 FS bytes used 1.10GiB
>> devid1 size 1.82TiB used 4.02GiB path /dev/sda1
>> devid2 size 1.00KiB used 0.00B path /dev/sdb1
>>
>> ...
>>
>> My purpose is a simple RAID1 main fs, with bootable flag on the 2
>> disks in order to start in degraded mode
>> How to get out of that...?
>>
>> Thanks
>> PC
>
>
> sdb1 is an extended partition; you cannot format an extended partition.
>
> Change sdb1 into a primary partition, or add a logical partition inside it.

Ahh, you're correct. There is special treatment of 0x05: it's a logical
container with the start address actually pointing to the address
where the EBR is. And that EBR's first record contains the actual
partition information.

So this represents two bugs in the installer:
1. If there's only one partition on a drive, it should be primary by
default, not extended.
2. But if extended, it must point to an EBR, and the EBR must be
created at that location. Obviously since there is no /dev/sdb2, this
EBR is not present.




-- 
Chris Murphy


Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Chris Murphy
On Thu, Aug 30, 2018 at 3:13 AM, Pierre Couderc  wrote:
> Trying to install a RAID1 on a debian stretch, I made some mistake and got
> this, after installing on disk1 and trying to add second disk  :
>
>
> root@server:~# fdisk -l
> Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disklabel type: dos
> Disk identifier: 0x2a799300
>
> Device Boot StartEndSectors  Size Id Type
> /dev/sda1  * 2048 3907028991 3907026944  1.8T 83 Linux
>
>
> Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disklabel type: dos
> Disk identifier: 0x9770f6fa
>
> Device Boot StartEndSectors  Size Id Type
> /dev/sdb1  * 2048 3907029167 3907027120  1.8T  5 Extended


Extended partition type is not a problem if you're using GRUB as the
bootloader; other bootloaders may not like this. Strictly speaking the
type code 0x05 is incorrect, but GRUB ignores the type code, as does the
kernel. GRUB also ignores the active bit (boot flag).


>
>
> And :
>
> root@server:~# btrfs fi show
> Label: none  uuid: eed65d24-6501-4991-94bd-6c3baf2af1ed
> Total devices 2 FS bytes used 1.10GiB
> devid1 size 1.82TiB used 4.02GiB path /dev/sda1
> devid2 size 1.00KiB used 0.00B path /dev/sdb1

That's odd; and I know you've moved on from this problem but I would
have liked to see the super for /dev/sdb1 and also the installer log
for what commands were used for partitioning, including mkfs and
device add commands.

For what it's worth, 'btrfs dev add' formats the device being added,
it does not need to be formatted in advance, and also it resizes the
file system properly.
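
So for the stated goal, the usual sequence after installing onto 
/dev/sda1 alone would be roughly this (device names as in this thread, 
but treat them as examples; the balance rewrites existing data once):

# add the second partition to the mounted filesystem; no mkfs needed
btrfs device add -f /dev/sdb1 /
# then convert existing data and metadata to the raid1 profile
btrfs balance start -dconvert=raid1 -mconvert=raid1 /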



> My purpose is a simple RAID1 main fs, with bootable flag on the 2 disks in
> order to start in degraded mode

Good luck with this. The Btrfs archives are full of various
limitations of Btrfs raid1. There is no automatic degraded mount for
Btrfs. And if you persistently ask for degraded mount, you run the
risk of other problems if there's merely a delayed discovery of one of
the devices. Once a Btrfs volume is degraded, it does not
automatically resume normal operation just because the formerly
missing device becomes available.

So... this is flat out not suitable for use cases where you need
unattended raid1 degraded boot.



-- 
Chris Murphy


Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Alberto Bursi

On 8/30/2018 11:13 AM, Pierre Couderc wrote:
> Trying to install a RAID1 on a debian stretch, I made some mistake and 
> got this, after installing on disk1 and trying to add second disk :
>
>
> root@server:~# fdisk -l
> Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disklabel type: dos
> Disk identifier: 0x2a799300
>
> Device Boot Start    End    Sectors  Size Id Type
> /dev/sda1  * 2048 3907028991 3907026944  1.8T 83 Linux
>
>
> Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disklabel type: dos
> Disk identifier: 0x9770f6fa
>
> Device Boot Start    End    Sectors  Size Id Type
> /dev/sdb1  * 2048 3907029167 3907027120  1.8T  5 Extended
>
>
> And :
>
> root@server:~# btrfs fi show
> Label: none  uuid: eed65d24-6501-4991-94bd-6c3baf2af1ed
>     Total devices 2 FS bytes used 1.10GiB
>     devid    1 size 1.82TiB used 4.02GiB path /dev/sda1
>     devid    2 size 1.00KiB used 0.00B path /dev/sdb1
>
> ...
>
> My purpose is a simple RAID1 main fs, with bootable flag on the 2 
> disks in order to start in degraded mode
> How to get out of that...?
>
> Thanks
> PC


sdb1 is an extended partition; you cannot format an extended partition.

Change sdb1 into a primary partition, or add a logical partition inside it.
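
A minimal way to do that with parted, assuming /dev/sdb has nothing on it 
worth keeping (this rewrites the whole partition table):

parted -s /dev/sdb mklabel msdos                 # fresh DOS label
parted -s /dev/sdb mkpart primary 1MiB 100%      # one primary partition
parted -s /dev/sdb set 1 boot on                 # boot/active flag, as on sda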


-Alberto



Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Kai Stian Olstad
On Thursday, 30 August 2018 12:01:55 CEST Pierre Couderc wrote:
> 
> On 08/30/2018 11:35 AM, Qu Wenruo wrote:
> >
> > On 2018/8/30 下午5:13, Pierre Couderc wrote:
> >> Trying to install a RAID1 on a debian stretch, I made some mistake and
> >> got this, after installing on disk1 and trying to add second disk  :
> >>
> >>
> >> root@server:~# fdisk -l
> >> Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
> >> Units: sectors of 1 * 512 = 512 bytes
> >> Sector size (logical/physical): 512 bytes / 512 bytes
> >> I/O size (minimum/optimal): 512 bytes / 512 bytes
> >> Disklabel type: dos
> >> Disk identifier: 0x2a799300
> >>
> >> Device Boot StartEndSectors  Size Id Type
> >> /dev/sda1  * 2048 3907028991 3907026944  1.8T 83 Linux
> >>
> >>
> >> Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
> >> Units: sectors of 1 * 512 = 512 bytes
> >> Sector size (logical/physical): 512 bytes / 512 bytes
> >> I/O size (minimum/optimal): 512 bytes / 512 bytes
> >> Disklabel type: dos
> >> Disk identifier: 0x9770f6fa
> >>
> >> Device Boot StartEndSectors  Size Id Type
> >> /dev/sdb1  * 2048 3907029167 3907027120  1.8T  5 Extended
> >>
> >>
> >> And :
> >>
> >> root@server:~# btrfs fi show
> >> Label: none  uuid: eed65d24-6501-4991-94bd-6c3baf2af1ed
> >>  Total devices 2 FS bytes used 1.10GiB
> >>  devid1 size 1.82TiB used 4.02GiB path /dev/sda1
> >>  devid2 size 1.00KiB used 0.00B path /dev/sdb1

I think your problem is that sdb1 is an extended partition and not a primary 
one; either make it a primary partition, or make a logical partition inside 
the extended partition and use that.


-- 
Kai Stian Olstad




Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Qu Wenruo


On 2018/8/30 下午6:01, Pierre Couderc wrote:
> 
> 
> On 08/30/2018 11:35 AM, Qu Wenruo wrote:
>>
>> On 2018/8/30 下午5:13, Pierre Couderc wrote:
>>> Trying to install a RAID1 on a debian stretch, I made some mistake and
>>> got this, after installing on disk1 and trying to add second disk  :
>>>
>>>
>>> root@server:~# fdisk -l
>>> Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
>>> Units: sectors of 1 * 512 = 512 bytes
>>> Sector size (logical/physical): 512 bytes / 512 bytes
>>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>>> Disklabel type: dos
>>> Disk identifier: 0x2a799300
>>>
>>> Device Boot Start    End    Sectors  Size Id Type
>>> /dev/sda1  * 2048 3907028991 3907026944  1.8T 83 Linux
>>>
>>>
>>> Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
>>> Units: sectors of 1 * 512 = 512 bytes
>>> Sector size (logical/physical): 512 bytes / 512 bytes
>>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>>> Disklabel type: dos
>>> Disk identifier: 0x9770f6fa
>>>
>>> Device Boot Start    End    Sectors  Size Id Type
>>> /dev/sdb1  * 2048 3907029167 3907027120  1.8T  5 Extended
>>>
>>>
>>> And :
>>>
>>> root@server:~# btrfs fi show
>>> Label: none  uuid: eed65d24-6501-4991-94bd-6c3baf2af1ed
>>>  Total devices 2 FS bytes used 1.10GiB
>>>  devid    1 size 1.82TiB used 4.02GiB path /dev/sda1
>>>  devid    2 size 1.00KiB used 0.00B path /dev/sdb1
>>>
>>> ...
>>>
>>> My purpose is a simple RAID1 main fs, with bootable flag on the 2 disks
>>> in order to start in degraded mode
>>> How to get out of that...?
>> The 2nd device is indeed strange.
>>
>> Considering how old packages Debian tends to deliver, it should be some
>> old btrfs-progs.
>>
>> You could just boot into the system and execute the following commands:
>>
>> # btrfs device remove 2 <mnt>
> This works fine.
>>
>> Then add a new real device to the fs
>>
>> # btrfs device add <new device> <mnt>
>>
>>
> Thank you, this:
> 
> btrfs device add /dev/sdb1 /
> 
>  seems to work but  gives me the same :
> 
> root@server:~# btrfs fi show
> Label: none  uuid: eed65d24-6501-4991-94bd-6c3baf2af1ed
>     Total devices 2 FS bytes used 1.10GiB
>     devid    1 size 1.82TiB used 4.02GiB path /dev/sda1
>     devid    2 size 1.00KiB used 0.00B path /dev/sdb1
> 
> So I need to "prepare" or format /dev/sdb before adding /dev/sdb1
> I have tried to format /dev/sdb and it works :

What's the output of the lsblk command?

If it shows as 2T (1.8T) in lsblk, then in the above case you could fix it
by resizing that device:

# btrfs filesystem resize 2:max <mnt>

Thanks,
Qu
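
For example, with the filesystem mounted at /, that would be:

# grow devid 2 to the full size of its underlying device
btrfs filesystem resize 2:max /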


> root@server:~# btrfs fi show
> Label: none  uuid: eed65d24-6501-4991-94bd-6c3baf2af1ed
>     Total devices 2 FS bytes used 1.10GiB
>     devid    1 size 1.82TiB used 4.02GiB path /dev/sda1
>     devid    2 size 1.82TiB used 0.00B path /dev/sdb
> 
> but I have no partition table and no boot flag for degraded mode
> 
> 
> 



signature.asc
Description: OpenPGP digital signature


Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Pierre Couderc




On 08/30/2018 11:35 AM, Qu Wenruo wrote:


On 2018/8/30 下午5:13, Pierre Couderc wrote:

Trying to install a RAID1 on a debian stretch, I made some mistake and
got this, after installing on disk1 and trying to add second disk  :


root@server:~# fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2a799300

Device Boot Start    End    Sectors  Size Id Type
/dev/sda1  * 2048 3907028991 3907026944  1.8T 83 Linux


Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9770f6fa

Device Boot Start    End    Sectors  Size Id Type
/dev/sdb1  * 2048 3907029167 3907027120  1.8T  5 Extended


And :

root@server:~# btrfs fi show
Label: none  uuid: eed65d24-6501-4991-94bd-6c3baf2af1ed
     Total devices 2 FS bytes used 1.10GiB
     devid    1 size 1.82TiB used 4.02GiB path /dev/sda1
     devid    2 size 1.00KiB used 0.00B path /dev/sdb1

...

My purpose is a simple RAID1 main fs, with bootable flag on the 2 disks
in order to start in degraded mode
How to get out of that...?

The 2nd device is indeed strange.

Considering how old packages Debian tends to deliver, it should be some
old btrfs-progs.

You could just boot into the system and execute the following commands:

# btrfs device remove 2 <mnt>

This works fine.


Then add a new real device to the fs

# btrfs device add <new device> <mnt>



Thank you, this:

btrfs device add /dev/sdb1 /

 seems to work but  gives me the same :

root@server:~# btrfs fi show
Label: none  uuid: eed65d24-6501-4991-94bd-6c3baf2af1ed
    Total devices 2 FS bytes used 1.10GiB
    devid    1 size 1.82TiB used 4.02GiB path /dev/sda1
    devid    2 size 1.00KiB used 0.00B path /dev/sdb1

So I need to "prepare" or format /dev/sdb before adding /dev/sdb1
I have tried to format /dev/sdb and it works :
root@server:~# btrfs fi show
Label: none  uuid: eed65d24-6501-4991-94bd-6c3baf2af1ed
Total devices 2 FS bytes used 1.10GiB
devid1 size 1.82TiB used 4.02GiB path /dev/sda1
devid2 size 1.82TiB used 0.00B path /dev/sdb

but I have no partition table and no boot flag for degraded mode





Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Qu Wenruo


On 2018/8/30 下午5:13, Pierre Couderc wrote:
> Trying to install a RAID1 on a debian stretch, I made some mistake and
> got this, after installing on disk1 and trying to add second disk  :
> 
> 
> root@server:~# fdisk -l
> Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disklabel type: dos
> Disk identifier: 0x2a799300
> 
> Device Boot Start    End    Sectors  Size Id Type
> /dev/sda1  * 2048 3907028991 3907026944  1.8T 83 Linux
> 
> 
> Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disklabel type: dos
> Disk identifier: 0x9770f6fa
> 
> Device Boot Start    End    Sectors  Size Id Type
> /dev/sdb1  * 2048 3907029167 3907027120  1.8T  5 Extended
> 
> 
> And :
> 
> root@server:~# btrfs fi show
> Label: none  uuid: eed65d24-6501-4991-94bd-6c3baf2af1ed
>     Total devices 2 FS bytes used 1.10GiB
>     devid    1 size 1.82TiB used 4.02GiB path /dev/sda1
>     devid    2 size 1.00KiB used 0.00B path /dev/sdb1
> 
> ...
> 
> My purpose is a simple RAID1 main fs, with bootable flag on the 2 disks
> in order to start in degraded mode
> How to get out of that...?

The 2nd device is indeed strange.

Considering how old packages Debian tends to deliver, it should be some
old btrfs-progs.

You could just boot into the system and execute the following commands:

# btrfs device remove 2 <mnt>

Then add a new real device to the fs

# btrfs device add <new device> <mnt>

Then convert the fs to RAID1

# btrfs balance start -dconvert=RAID1 -mconvert=RAID1 <mnt>
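
For example, with the filesystem mounted at / and the second disk's 
partition at /dev/sdb1, that would be:

btrfs device remove 2 /
btrfs device add /dev/sdb1 /
btrfs balance start -dconvert=RAID1 -mconvert=RAID1 /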

Thanks,
Qu

> 
> Thanks
> PC



signature.asc
Description: OpenPGP digital signature


Re: How to erase a RAID1 (+++)?

2018-08-30 Thread Pierre Couderc
Trying to install a RAID1 on a debian stretch, I made some mistake and 
got this, after installing on disk1 and trying to add second disk  :



root@server:~# fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2a799300

Device Boot Start    End    Sectors  Size Id Type
/dev/sda1  * 2048 3907028991 3907026944  1.8T 83 Linux


Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9770f6fa

Device Boot Start    End    Sectors  Size Id Type
/dev/sdb1  * 2048 3907029167 3907027120  1.8T  5 Extended


And :

root@server:~# btrfs fi show
Label: none  uuid: eed65d24-6501-4991-94bd-6c3baf2af1ed
    Total devices 2 FS bytes used 1.10GiB
    devid    1 size 1.82TiB used 4.02GiB path /dev/sda1
    devid    2 size 1.00KiB used 0.00B path /dev/sdb1

...

My purpose is a simple RAID1 main fs, with bootable flag on the 2 disks 
in order to start in degraded mode

How to get out of that...?

Thanks
PC


Re: How to erase a RAID1 ?

2018-08-29 Thread Qu Wenruo
[Forgot to Cc the list]

On 2018/8/29 下午10:04, Pierre Couderc wrote:
>
> On 08/29/2018 02:52 PM, Qu Wenruo wrote:
>>
>> On 2018/8/29 下午8:49, Pierre Couderc wrote:
>>> I want to reinstall a RAID1 btrfs system (which is now under Debian
>>> stretch, and will be reinstalled in stretch).
>> If you still want to use btrfs, just umount the original fs, and
>>
>> # mkfs.btrfs -f <device>
>>
>> Then a completely new btrfs.
>>
> The problem is to "unmount" the RAID1 system..

If it's not the root fs, it can be unmounted if there is no user
reading/writing it or holding any open file.

>
> I have not found how to do that.
>
> I have found a solution by reinstalling under Debian; it now asks if
> I need to reformat the btrfs disk...

If it's the root fs, either boot from another device, or just let the
installer (well, it's booting from memory/another device already)
re-format the fs.

Thanks,
Qu

>




signature.asc
Description: OpenPGP digital signature


Re: How to erase a RAID1 ?

2018-08-29 Thread Pierre Couderc




On 08/29/2018 02:52 PM, Qu Wenruo wrote:


On 2018/8/29 下午8:49, Pierre Couderc wrote:

I want to reinstall a RAID1 btrfs system (which is now under Debian
stretch, and will be reinstalled in stretch).

If you still want to use btrfs, just umount the original fs, and

# mkfs.btrfs -f <device>



The problem is to "unmount" the RAID1 system..

I have not found how to do that.

I have found a solution by reinstalling under Debian; it now asks if 
I need to reformat the btrfs disk...


Re: How to erase a RAID1 ?

2018-08-29 Thread Qu Wenruo


On 2018/8/29 下午8:49, Pierre Couderc wrote:
> I want to reinstall a RAID1 btrfs system (which is now under Debian
> stretch, and will be reinstalled in stretch).

If you still want to use btrfs, just umount the original fs, and

# mkfs.btrfs -f <device>

Then a completely new btrfs.
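
And if the end goal is a two-device raid1 anyway, both devices can be 
given to mkfs in one go (partitions as in this thread, adjust as needed):

mkfs.btrfs -f -d raid1 -m raid1 /dev/sda1 /dev/sdb1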

> 
> How do I correctly "erase" it? Not truly hard-erase it, but so that old
> data does not appear...

If not planning to use btrfs, umount the fs and then wipefs -fa <device>

Thanks,
Qu

> 
> It is not clear in the wiki.
> 
> Thanks
> 
> PC
> 



signature.asc
Description: OpenPGP digital signature


How to erase a RAID1 ?

2018-08-29 Thread Pierre Couderc
I want to reinstall a RAID1 btrfs system (which is now under Debian 
stretch, and will be reinstalled in stretch).


How do I correctly "erase" it? Not truly hard-erase it, but so that old 
data does not appear...


It is not clear in the wiki.

Thanks

PC