Re: getting data from degraded RAID 1 boot disk

2017-02-01 Thread Olivier Cherrier
On Wed, Feb 01, 2017 at 08:32:44AM -0500, ji...@devio.us wrote:
> On Wed, Feb 01, 2017 at 01:33:54PM +0100, Stefan Sperling wrote:
> > On Wed, Feb 01, 2017 at 04:12:26AM -0500, Jiri B wrote:
> > > Shouldn't the kernel automatically create 'sd4' for the degraded RAID 1?
> > > It does not.
> > 
> > I believe it will auto assemble if the disk is present at boot time.
> 
> ^^ This does work; I tried plugging the disk into a QEMU VM as its boot device.
> 
> > But not when you hotplug the disk.
> 
> Pity. Could it be reconsidered? It would ease data recovery (i.e., trying
> to get a box to boot from the disk, or using a VM).

It would be particularly useful at installation time, when you plan
to create a RAID1 / RAID5 setup and don't have all the disks yet.
RAIDframe had the 'absent' device name that could be used for this
particular case.
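From memory, the configuration looked roughly like the sketch below (check
raidctl(8) for the exact syntax; the device name is just an example):

~~~
# raid0.conf -- RAID 1 with one component not present yet
START array
# numRow numCol numSpare
1 2 0

START disks
/dev/wd0e
# placeholder for the disk that will be added later
absent

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1

START queue
fifo 100
~~~

IIRC 'raidctl -C raid0.conf raid0' would then configure the set despite the
absent component, and the missing disk could be added and rebuilt later.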



Re: getting data from degraded RAID 1 boot disk

2017-02-01 Thread Stefan Sperling
On Wed, Feb 01, 2017 at 08:32:44AM -0500, Jiri B wrote:
> On Wed, Feb 01, 2017 at 01:33:54PM +0100, Stefan Sperling wrote:
> > On Wed, Feb 01, 2017 at 04:12:26AM -0500, Jiri B wrote:
> > > Shouldn't the kernel automatically create 'sd4' for the degraded RAID 1?
> > > It does not.
> > 
> > I believe it will auto assemble if the disk is present at boot time.
> 
> ^^ This does work; I tried plugging the disk into a QEMU VM as its boot device.
> 
> > But not when you hotplug the disk.
> 
> Pity. Could it be reconsidered? It would ease data recovery (i.e., trying
> to get a box to boot from the disk, or using a VM).

Sure. I am not saying the way it works now is best. Just trying to help.
Patches welcome, as usual :)



Re: getting data from degraded RAID 1 boot disk

2017-02-01 Thread Jiri B
On Wed, Feb 01, 2017 at 01:33:54PM +0100, Stefan Sperling wrote:
> On Wed, Feb 01, 2017 at 04:12:26AM -0500, Jiri B wrote:
> > Shouldn't the kernel automatically create 'sd4' for the degraded RAID 1?
> > It does not.
> 
> I believe it will auto assemble if the disk is present at boot time.

^^ This does work; I tried plugging the disk into a QEMU VM as its boot device.

> But not when you hotplug the disk.

Pity. Could it be reconsidered? It would ease data recovery (i.e., trying
to get a box to boot from the disk, or using a VM).

Thanks.

j.



Re: getting data from degraded RAID 1 boot disk

2017-02-01 Thread Stefan Sperling
On Wed, Feb 01, 2017 at 04:12:26AM -0500, Jiri B wrote:
> Shouldn't the kernel automatically create 'sd4' for the degraded RAID 1?
> It does not.

I believe it will auto assemble if the disk is present at boot time.
But not when you hotplug the disk.



Re: getting data from degraded RAID 1 boot disk

2017-02-01 Thread Jiri B
On Tue, Jan 31, 2017 at 11:55:21PM +0100, Stefan Sperling wrote:
> On Tue, Jan 31, 2017 at 05:23:10PM -0500, Jiri B wrote:
> > I have a disk which used to be the boot disk of a degraded RAID 1 (softraid).
> > The second disk is totally gone.
> > 
> > I don't want to use this disk as a RAID 1 disk anymore, just to get data
> > from it.
> > 
> > I'm asking because when I plugged the disk, bioctl said 'not enough disks'.
> > 
> > Do we really need to require two disks when attaching an already
> > existing degraded RAID 1 with only one disk available?
> 
> Can you describe in more detail what you did to "plug the disk"?
> It sounds like you ran 'bioctl' in a way that tries to create a
> new RAID1 volume. Why?
> 
> If the disk is present during system boot, is it not auto-assembled
> as a degraded RAID1 volume? I would expect a degraded softraid RAID1
> disk to show up which you can copy data from.

Thank you very much for the reply. Here are the steps:

1. The original disk, which used to be part of the degraded RAID 1 (softraid)
   boot disk, attached via a USB->SATA adapter:
   
umass1 at uhub0 port 10 configuration 1 interface 0 "JMicron AXAGON USB to SATA Adapter" rev 3.00/81.05 addr 10
umass1: using SCSI over Bulk-Only
scsibus5 at umass1: 2 targets, initiator 0
sd3 at scsibus5 targ 1 lun 0:  SCSI4 0/direct fixed serial.49718017
sd3: 715404MB, 512 bytes/sector, 1465149168 sectors

2. trying to put degraded RAID 1 online:

# fdisk sd3 | grep OpenBSD
*3: A6  0   1   2 -  91200 254  63 [  64:  1465144001 ] OpenBSD
# disklabel sd3 | grep RAID
  a:   1465144001               64    RAID
# bioctl -c 1 -l /dev/sd3a softraid0
bioctl: not enough disks

man bioctl unfortunately states:

~~~
The RAID 0, RAID 1 and CONCAT disciplines require a minimum of
two devices to be provided via -l...
~~~

Shouldn't the kernel automatically create 'sd4' for the degraded RAID 1?
It does not. And bioctl requires "a minimum of two devices" for
RAID 1...

IMO, if a RAID 1 could be constructed with one disk via bioctl, it would
also be better for people doing a migration to RAID 1.

j.



Re: getting data from degraded RAID 1 boot disk

2017-01-31 Thread Stefan Sperling
On Tue, Jan 31, 2017 at 05:23:10PM -0500, Jiri B wrote:
> I have a disk which used to be the boot disk of a degraded RAID 1 (softraid).
> The second disk is totally gone.
> 
> I don't want to use this disk as a RAID 1 disk anymore, just to get data
> from it.
> 
> I'm asking because when I plugged the disk, bioctl said 'not enough disks'.
> 
> Do we really need to require two disks when attaching an already
> existing degraded RAID 1 with only one disk available?

Can you describe in more detail what you did to "plug the disk"?
It sounds like you ran 'bioctl' in a way that tries to create a
new RAID1 volume. Why?

If the disk is present during system boot, is it not auto-assembled
as a degraded RAID1 volume? I would expect a degraded softraid RAID1
disk to show up which you can copy data from.
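Roughly, something like this should do it (a sketch; 'sd4' is just an
example of the device name the assembled volume might get):

~~~
# bioctl softraid0
# mount -o ro /dev/sd4a /mnt
~~~

bioctl should list the degraded volume and the sd device it attached as,
and mounting that device's partitions read-only lets you copy the data off.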



getting data from degraded RAID 1 boot disk

2017-01-31 Thread Jiri B
I have a disk which used to be the boot disk of a degraded RAID 1 (softraid).
The second disk is totally gone.

I don't want to use this disk as a RAID 1 disk anymore, just to get data
from it.

I'm asking because when I plugged the disk, bioctl said 'not enough disks'.

Do we really need to require two disks when attaching an already existing
degraded RAID 1 with only one disk available?

(I find it generally pretty sad we can't define a RAID 1 with only one disk.
I could imagine constructing a RAID 1 with one disk as a useful feature,
e.g. migrating from a non-mirrored boot disk to RAID 1 boot disks by
attaching just one new additional disk. At least we used to do this on RHEL.)
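On Linux that went roughly like this (a sketch from memory; device names
are just examples):

~~~
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
# ... copy the data over and point the bootloader at /dev/md0 ...
# mdadm --add /dev/md0 /dev/sda1
~~~

The 'missing' keyword creates the mirror degraded with a single member, and
the old disk is added and resynced afterwards.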

My current workaround is running a VM under qemu and accessing this disk
as a raw device. Surprisingly, this works fine, unlike the earlier attempt
to attach it with bioctl.
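For reference, the invocation is roughly (a sketch; memory size, device node
and options are just examples):

~~~
# qemu-system-x86_64 -m 1024 -drive file=/dev/sd3c,format=raw
~~~

The guest boots straight from the attached disk, softraid assembles the
degraded RAID 1 at the guest's boot time, and the data can then be copied
out from inside the VM.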

kern.version=OpenBSD 6.0-current (GENERIC.MP) #117: Sat Jan  7 09:10:45 MST 2017

j.