>> On Thu, 03 Dec 2020 18:19:05 +1100
>> m...@mjch.net ("Malcolm Herbert") said:
> 
> As far as I understand it, ZFS vdevs have their own IDs, so they can be laid 
> out correctly no matter which OS device each is discovered on ... wouldn't 
> that make a raidframe wrapper redundant?  it would also mean the zpool vdevs 
> couldn't be used on other systems that understand ZFS, because they're 
> unlikely to understand raidframe ...

My understanding of ZFS is the same; on FreeBSD I've never had to care
about device numbers for zpool disks.

But I found that NetBSD-9.1_STABLE (23 Oct) couldn't find the zpool after
an external SSD that had been connected as sd0 was reattached as sd1.
In that case, "zpool import" could no longer find the pool's devices.

I tried:
# sed 's,dev/sd0,dev/sd1,g' zpool.cache > zpool.cache2
# zpool import -c zpool.cache2 -a
These operations were needed every time the sdX numbering changed.
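If anyone wants to script the same workaround, the substitution step can be
sketched like this. This is only a demonstration under the assumption that
the old device path appears as a literal string in the cache copy; the file
names here are placeholders, not the real /etc/zfs/zpool.cache:

```shell
# Hypothetical demo of the rename step: /tmp/zpool.cache.demo stands in
# for a copy of the real cache file.
printf 'path=/dev/sd0a\n' > /tmp/zpool.cache.demo

# Rewrite sd0 -> sd1; the replacement is the same length, which matters
# if the real cache turns out to be a binary nvlist rather than text.
sed 's,dev/sd0,dev/sd1,g' /tmp/zpool.cache.demo > /tmp/zpool.cache2.demo

cat /tmp/zpool.cache2.demo
# path=/dev/sd1a
```

The rewritten copy would then be fed to "zpool import -c" as above.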

I haven't tested on a newer kernel.
Do recent kernels pick up the pool IDs smoothly?

Anyway, I needed to get the host into a stable state at the time, so I set
up a redundant raidframe configuration, and it has been working fine for me.

This should be treated as a dirty short-term workaround...

--yuuji

> 
> Regards,
> Malcolm
> 
> On Thu, 3 Dec 2020, at 12:00, HIROSE yuuji wrote:
> > >> On Thu, 3 Dec 2020 00:30:17 +0000
> > >> a...@absd.org (David Brownlee) said:
> > > 
> > > What would be the best practice for setting up disks to use under ZFS
> > > on NetBSD, with particular reference to handling renumbered devices?
> > > 
> > > The two obvious options seem to be:
> > > 
> > > - Wedges, setup as a single large gpt partition of type zfs (eg /dev/dk7)
> > > - Entire disk (eg: /dev/wd0 or /dev/sd4)
> > > 
> > 
> > Creating a raidframe over those wedges or disks and enabling
> > autoconfiguration with "raidctl -A yes" gives the zpool a stable
> > device name.
> > 
> > I prefer to use a dummy raidframe even when the host has only a single
> > device, so that HDDs/SSDs stay bootable when attached via USB-SATA
> > adapters.
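> > For reference, a single-component "dummy" raidframe can be sketched
> > roughly as below. Device names and the serial number are placeholders,
> > not taken from my actual setup; check raidctl(8) before using any of it:
> > 
> > ```
> > # raid0.conf - RAID level 1 with one real component and one absent,
> > # so the set is "redundant" in shape but uses a single disk.
> > START array
> > 1 2 0
> > START disks
> > absent
> > /dev/sd0a
> > START layout
> > 128 1 1 1
> > START queue
> > fifo 100
> > 
> > # raidctl -C raid0.conf raid0    (configure from the file above)
> > # raidctl -I 2020120301 raid0    (initialize with a serial number)
> > # raidctl -A yes raid0           (autoconfigure; -A root if bootable)
> > ```
> > 
> > After that, raid0 keeps the same name regardless of which sdX the
> > underlying disk shows up as.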
> > 
> > --yuuji
> >
> 
> -- 
> Malcolm Herbert
> m...@mjch.net
