Jan, I've gotten a couple of systems into this state, and regardless of 
whether open-by-guid works, it's incredibly confusing, and the 
installer should really avoid letting this happen. Even just appending 
an incrementing number (rpool, rpool1, rpool2) would be a real 
improvement.
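A minimal sketch of that suggestion (hypothetical helper, not actual installer code): use the base name if it's free, otherwise append the smallest unused integer suffix.

```python
def unique_pool_name(base, existing):
    """Return 'base' if no pool uses it yet, otherwise the first of
    base1, base2, ... that is not already taken."""
    if base not in existing:
        return base
    n = 1
    while f"{base}{n}" in existing:
        n += 1
    return f"{base}{n}"

# Example: two stale root pools are already present on attached disks.
print(unique_pool_name("rpool", {"rpool", "rpool1"}))  # -> rpool2
```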

-jan


Jan Damborsky wrote:
> Hi Eric,
> 
> 
> eric taylor wrote:
>>
>> [...]
>>
>>>   All this still doesn't mean that we shouldn't ultimately use the zfs 
>>> GUID to properly identify the dataset we came from:
>>>
>>>   The devid is just a better way of labeling the device that contains 
>>> the data. This is a dramatic improvement over just using the path, 
>>> but it still doesn't actually identify the data which we're looking for.
>>>
>>>   For example, this will survive a failed disk that's replaced (and 
>>> has data restored to it) in place, since it is opened by path first. It 
>>> will survive disk movement as long as the disk has a (sufficiently 
>>> unique) devid. Both happening at the same time (a rock fell on my 
>>> server, and I rebuilt it with a new system and restored the data 
>>> from a backup) can only be survived if we ultimately try to use the 
>>> guid that goes with the data.
>>>
>>>   The reason for getting this right sooner rather than later is that 
>>> it makes trivial many of the things folks might want to build on top 
>>> of anything that needs to (re-)deploy Solaris. I appreciate 
>>> that this work is competing with lots of other work to even make your 
>>> top 10 list, but it may take enough things off of my top 10 list over 
>>> time that I'd be willing to step up, or figure out some way to share 
>>> the pain.
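The failure modes walked through above amount to a three-tier match. Here is a rough sketch (data shapes and names are hypothetical, not the actual mountroot code) of trying the recorded path first, then the devid, then falling back to the guid that travels with the data:

```python
def identify_boot_device(devices, boot_path, boot_devid, pool_guid):
    """devices: list of dicts with 'path', 'devid', and 'guid' keys.
    Try the recorded path first, then the devid (survives disk
    movement), and finally the guid stored with the data itself
    (survives replacement *and* movement together)."""
    for key, want in (("path", boot_path),
                      ("devid", boot_devid),
                      ("guid", pool_guid)):
        for dev in devices:
            if dev[key] == want:
                return dev
    return None
```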
>>
>> Opening by guid wouldn't be a complete solution.  Going back to the
>> installer for a minute, having a choice between:
>>
>> rpool (13167525223925544499)
>> rpool (13990545618038227697)
>> rpool (17254699675488870738)
>>
>> would be cumbersome, so I've filed:
>> http://defect.opensolaris.org/bz/show_bug.cgi?id=5270
> 
> To be honest, looking at the bug report, it is not quite clear
> to me why having more than one pool with the same name
> should constitute a problem from the 'ZFS boot' point of view,
> as a ZFS pool can always be referred to by identifiers which are
> unique to a particular pool (for instance, the GUID).
> 
> Could I please ask you to elaborate on this problem?
> I would like to understand it better, since there are scenarios
> in which the installer refuses to install because "rpool" already
> exists, and in those cases picking a different name for the root pool
> would solve the problem.
> That said, I can see that there might also be other solutions,
> which I think might address the problem in a cleaner way. If you
> are interested in the background, the following bugs might provide
> you with more details:
> 
> 1771 Installer can't be restarted if it already created ZFS root pool
> 3783 Install will fail if ZFS pool 'rpool' was imported
> 
> That is the reason why I am trying to understand whether picking
> a unique name for the ZFS root pool is a requirement coming from the
> ZFS boot design or a workaround for a problem which should actually
> be addressed in a different way.
> 
> Thank you,
> Jan
> 
>>
>> The mountroot code still needs to figure out how to open the boot
>> device(s) (either by physical path or devid), so just specifying
>> a guid by itself won't work.  One possibility is to also pass up
>> the devid plus a list of all the physical paths GRUB can find
>> (maybe with some sort of pattern matching characters to keep the
>> command line from getting out of hand).  If opening by devid fails,
>> run through every path with a beefed up version of zfs_mountroot
>> to figure out which disk is the correct device to boot from.  Not
>> elegant, but workable.
>>
>> How does that sound?
>>
>> -  Eric
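Eric's proposed fallback (try the devid first, then probe every physical path GRUB passed up for the matching pool guid) could be sketched roughly as below; open_by_devid and probe_guid are hypothetical stand-ins for the real mountroot plumbing:

```python
def pick_boot_device(boot_devid, candidate_paths, want_guid,
                     open_by_devid, probe_guid):
    """Try opening by devid first; if that fails, walk the physical
    paths (e.g. the list GRUB found) and probe each disk for the
    pool whose guid matches the one we booted from."""
    dev = open_by_devid(boot_devid)
    if dev is not None:
        return dev
    for path in candidate_paths:
        if probe_guid(path) == want_guid:
            return path
    return None
```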

