Thanks, Daniel
Importing with alternative mount points can be done when importing
manually, but it can't be done when booting normally. All the problem
datasets have their mountpoint set to "legacy". If I change the
mountpoints in zznew to avoid a clash, I get a kernel panic when the
system boots.
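
For example, from the recovery console something like this works (a
sketch; /mnt is an arbitrary alternate root, and the legacy datasets
still have to be mounted by hand afterwards):

    zpool import -R /mnt zznew
    mount -F zfs zznew/opt /mnt/opt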


Gareth

On 7 February 2018 at 20:46:18, Daniel Carosone ([email protected])
wrote:

> Those datasets have explicit mountpoint properties, with the same values
> as your live pool, and have been imported/mounted first, before the ones
> you wanted.
>
> You should import the inactive pool with -R, or in recovery mode import
> them both with different roots.
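>
> For example (a sketch; /a and /b are arbitrary alternate roots):
>
>     zpool import -R /a zones
>     zpool import -R /b zznew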
>
> On 8 Feb. 2018 02:00, "Gareth Howell" <[email protected]> wrote:
>
>> This is really a continuation of my previous post on migrating to a new
>> pool configuration, but it deserves a new thread.
>> 
>> After the “send/recv, destroy/create, swap names” dance, I have a single
>> disk `zones` pool with all the data on it and a new raidz1 pool called
>> `zznew` that will become the new `zones` after I have synced it. However,
>> something odd has happened somewhere.
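>>
>> (For the record, the dance was roughly the following; the snapshot name
>> here is illustrative:
>>
>>     zfs snapshot -r zones@migrate
>>     zfs send -R zones@migrate | zfs recv -F zznew
>>
>> followed by destroying the old pool and swapping the names.)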
>> 
>> Prior to creating `zznew` I just had the single-disk `zones` pool. When I
>> booted, the kernel panicked and the system reset. When I examined the pool
>> in recovery mode, all seemed well, but I couldn't get anywhere because I
>> could only use the console and the error messages zoomed past too fast to
>> read. (It did try to save a dump log, but that failed as well.)
>> 
>> To make progress, I removed the single-disk pool and did a clean install
>> using a newly created raidz1 pool. I could then import the bad pool, but
>> again I could find no problems with it.
>> 
>> I was moving towards having to do a reverse copy from the imported pool
>> to the running pool, but before I did so I tried swapping the pool names,
>> leaving a single-disk `zones` pool and a "runnable" raidz1 `zznew` pool.
>> To my surprise, the system booted.
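>>
>> (Renaming a pool amounts to exporting it and importing it under the new
>> name, along these lines, with illustrative names, and a temporary third
>> name where the two pools would otherwise clash:
>>
>>     zpool export zznew
>>     zpool import zznew zones
>> )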
>> 
>> Looking at the mount points etc. showed the following:
>> 
>> root@deneb ~ $ zfs list
>> NAME                 USED  AVAIL  REFER  MOUNTPOINT
>> zones               2.67T   863G   588K  /zones
>> zones/archive        152K   863G    88K  none
>> …
>> zones/config         468K   863G   196K  legacy
>> zones/cores          250M   863G    88K  none
>> …
>> zones/cores/global   152K  10.0G    88K  /zones/global/cores
>> …
>> zones/dump           260K   863G   140K  -
>> …
>> zones/opt           2.50T   863G  1.20G  legacy
>> …
>> zones/swap          33.2G   896G   246M  -
>> zones/usbkey         196K   863G   132K  legacy
>> zones/var           1.05G   863G  1.03G  legacy
>> zznew               37.6G  3.47T  1018K  /zznew
>> zznew/archive        117K  3.47T   117K  /zznew/archive
>> zznew/config         139K  3.47T   139K  legacy
>> zznew/cores          234K  3.47T   117K  none
>> zznew/cores/global   117K  10.0G   117K  /zznew/global/cores
>> zznew/dump          1.84G  3.47T  1.84G  -
>> zznew/opt           2.88G  3.47T  2.88G  legacy
>> zznew/swap          32.9G  3.50T  74.6K  -
>> zznew/usbkey         261K  3.47T   261K  legacy
>> zznew/var           3.91M  3.47T  3.91M  /zznew/var
>> 
>> root@deneb ~ $ zfs mount
>> zones                           /zones
>> …
>> zznew                           /zznew
>> zznew/archive                   /zznew/archive
>> zznew/cores/global              /zznew/global/cores
>> zznew/var                       /zznew/var
>> zznew/config                    /etc/zones
>> zznew/opt                       /opt
>> zznew/usbkey                    /usbkey
>> 
>> I deleted some irrelevant bits (marked with "…" above); the problem
>> entries are zznew/config, zznew/opt and zznew/usbkey. The system is
>> mounting some datasets from zznew as if they were from zones.
>> 
>> Any ideas on how to correct this so that /etc/zones, /opt and /usbkey
>> mount from zones rather than zznew?
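>> For example, would it be safe simply to remount them by hand, along these
>> lines (a sketch; the same pattern would apply to /etc/zones and /usbkey)?
>>
>>     umount /opt
>>     mount -F zfs zones/opt /opt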
>> 
>> Gareth
>> 


