Thanks, more Qs below ;)

2012-10-19 11:16, James C. McPherson wrote:
if you run /usr/bin/strings over /etc/zfs/zpool.cache,
you'll see that not only is the device path stored, but
(more importantly) the devid.

As an excerpt from my adventurous notebook, which only has
an rpool on SATA, I see these lines in IDE mode:

# /usr/bin/strings /etc/zfs/zpool.cache
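For reference, strings(1) just prints the runs of printable ASCII it finds in a binary file; a minimal sketch of the same idea in Python (the minimum run length of 4 mirrors the strings default, and the commented-out path is the one used above):

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Return printable ASCII runs of at least min_len bytes,
    roughly what /usr/bin/strings would print for this data."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[ -~]{%d,}" % min_len, data)]

# Example usage against the cache file discussed above:
# with open("/etc/zfs/zpool.cache", "rb") as f:
#     for s in extract_strings(f.read()):
#         print(s)
```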

(I was wrong to say earlier that with VirtualBox I can
dual-boot the VM in IDE mode flawlessly; on my last test
there were also discrepancies, 'pci-ide@1,1' vs. 'pci-ide@11',
so the rpool did not import either; I am not sure what the
devid would be in that case).

When the same notebook is reconfigured into SATA mode, I see:

Returning to my original problem and question: are any of these
values expected to NOT change if the driver (HBA device) is
changed, i.e. when switching between SATA and IDE modes?

As seen above, the devid apparently includes the driver/technology
name (sd or cmdk), and the identifiers (@tech_vendor_model_sernum)
also differ, although some components do match at least partially.
This causes no problems when importing a "guest pool", such as
importing an existing rpool while booted from a LiveCD; panics
only happen due to the extra checks done for the rpool import.
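To illustrate the mismatch described above, here is a small sketch that pulls the driver name out of a devid string. The two devid values are made-up examples in the general Solaris "id1,driver@identifier" shape, not copied from a real system:

```python
# Hypothetical devids for the same disk as seen by the SATA (sd) and
# IDE (cmdk) drivers -- illustrative strings only, not real output.
sata_devid = "id1,sd@SATA_____ExampleDisk_________SERIAL01"
ide_devid = "id1,cmdk@AExampleDisk=____________SERIAL01"

def driver_of(devid: str) -> str:
    """Return the driver/technology component of a devid:
    the text between the leading 'id1,' and the '@'."""
    return devid.split(",", 1)[1].split("@", 1)[0]

print(driver_of(sata_devid))  # sd
print(driver_of(ide_devid))   # cmdk
# The driver names differ, so the devid cached in zpool.cache for one
# controller mode cannot match the device seen in the other mode.
```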

As far as I'm aware, having an rpool on multipathed devices
is fine. Multiple paths to the device should still allow ZFS
to obtain the same devid info... and we use devid's in
preference to physical paths.

I do hope that in the multipathing case the multiple paths use
the same transport, such as SAS or iSCSI, leading to identical
devids which can be relied upon. Are there any real-life
scenarios where multipathing is implemented over several
different transports, or do people avoid that, just in case?

In particular, couldn't the pool GUID be used by spa_import_rootpool?


zfs-discuss mailing list
