I should have seen that coming, but didn't ;)

I think in this case I would go with a different approach: don't import the
data pool in the AI instance and save it to zpool.cache. Instead, make sure
it is cleanly exported from the AI instance, and in the installed system
create a self-destructing init script or SMF service. An init script might
go like this:

    #!/bin/sh
    # /etc/rc2.d/S00importdatapool
    [ "$1" = start ] && zpool import -f datapool && rm -f "$0"

Or you can try setting the hostid in a persistent manner (perhaps via
eeprom emulation in /boot/solaris/bootenv.rc?).
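The SMF variant of the same idea might look roughly like the manifest below. Treat it strictly as a sketch: the service name (site/import-datapool), the method script path, and the dependency choice are my guesses, not something from this thread. The start method would run "zpool import -f datapool" and then "svccfg delete svc:/site/import-datapool" to self-destruct.

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!-- Hypothetical sketch of a self-destructing import service; names and
     paths are assumptions. Import with: svccfg import <this file> -->
<service_bundle type="manifest" name="import-datapool">
  <service name="site/import-datapool" type="service" version="1">
    <create_default_instance enabled="true"/>
    <single_instance/>
    <!-- wait until local filesystems are mounted before importing -->
    <dependency name="fs" grouping="require_all" restart_on="none"
                type="service">
      <service_fmri value="svc:/system/filesystem/local:default"/>
    </dependency>
    <!-- the method script force-imports the pool, then deletes this service -->
    <exec_method type="method" name="start"
                 exec="/lib/svc/method/import-datapool"
                 timeout_seconds="60"/>
    <exec_method type="method" name="stop" exec=":true" timeout_seconds="60"/>
    <!-- transient: the start method runs once and exits -->
    <property_group name="startd" type="framework">
      <propval name="duration" type="astring" value="transient"/>
    </property_group>
  </service>
</service_bundle>
```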
----- Original Message -----
From: Matt Keenan <matt...@opensolaris.org>
Date: Tuesday, May 31, 2011 21:02
Subject: Re: [zfs-discuss] Ensure Newly created pool is imported automatically in new BE
To: j...@cos.ru
Cc: zfs-discuss@opensolaris.org

> Jim,
>
> Thanks for the response, I've nearly got it working, coming up against a
> hostid issue.
>
> Here are the steps I'm going through:
>
> - At the end of auto-install, on the client just installed, before I
>   manually reboot I do the following:
>     $ beadm mount solaris /a
>     $ zpool export data
>     $ zpool import -R /a -N -o cachefile=/a/etc/zfs/zpool.cache data
>     $ beadm umount solaris
>     $ reboot
>
> - Before rebooting I check /a/etc/zfs/zpool.cache and it does contain
>   references to "data".
>
> - On reboot, the automatic import of data is attempted; however, the
>   following message is displayed:
>
>     WARNING: pool 'data' could not be loaded as it was last accessed by
>     another system (host: ai-client hostid: 0x87a4a4). See
>     http://www.sun.com/msg/ZFS-8000-EY.
>
> - The host id on the booted client is:
>     $ hostid
>     000c32eb
>
> As I don't control the import command on boot, I cannot simply add a "-f"
> to force the import. Any ideas on what else I can do here?
>
> cheers
>
> Matt
>
> On 05/27/11 13:43, Jim Klimov wrote:
> > Did you try it as a single command, somewhat like:
> >
> >     zpool create -R /a -o cachefile=/a/etc/zfs/zpool.cache mypool c3d0
> >
> > Using altroots and cachefile(=none) explicitly is a nearly-documented
> > way to avoid caching pools which you would not want to see after
> > reboot, i.e. removable media.
> > I think that after the AI is done and before reboot you might want to
> > reset the altroot property to point to root (or be undefined) so that
> > the data pool is mounted into your new rpool's hierarchy and not
> > under "/a/mypool" again ;)
> > And if your AI setup does not use the data pool, you might be better
> > off not using altroot at all, maybe...
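For reference, the pre-reboot sequence Matt describes above can be written down as a script. Since beadm and zpool only exist on the Solaris client, the commands here go through a trace helper (my addition, not part of the thread) so the ordering can be sanity-checked anywhere; on the real client you would execute them directly.

```shell
#!/bin/sh
# Sketch of the pre-reboot sequence above (BE "solaris", pool "data").
# run() merely echoes each command; replace its body with "$@" (and drop
# the echo) to execute for real on the client.
run() {
    echo "+ $*"
}

ALTROOT=/a      # mountpoint of the freshly installed BE
BE=solaris
POOL=data

run beadm mount "$BE" "$ALTROOT"
run zpool export "$POOL"
run zpool import -R "$ALTROOT" -N \
    -o cachefile="$ALTROOT/etc/zfs/zpool.cache" "$POOL"
run beadm umount "$BE"
# The cachefile written here records the AI client's hostid, which is
# exactly what trips the ZFS-8000-EY complaint on first boot of the BE.
```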
> >
> > ----- Original Message -----
> > From: Matt Keenan <matt...@opensolaris.org>
> > Date: Friday, May 27, 2011 13:25
> > Subject: [zfs-discuss] Ensure Newly created pool is imported automatically in new BE
> > To: zfs-discuss@opensolaris.org
> >
> > > Hi,
> > >
> > > Trying to ensure a newly created data pool gets imported on boot into
> > > a new BE.
> > >
> > > Scenario:
> > > Just completed an AI install, and on the client before I reboot I want
> > > to create a data pool and have this pool automatically imported on
> > > boot into the newly installed AI Boot Env.
> > >
> > > Trying to use the -R altroot option to zpool create to achieve this,
> > > or the zpool set -o cachefile property, but having no luck, and would
> > > like some advice on what the best means of achieving this would be.
> > >
> > > When the install completes, we have a default root pool "rpool", which
> > > contains a single default boot environment, rpool/ROOT/solaris.
> > >
> > > This is mounted on /a, so I tried:
> > >     zpool create -R /a mypool c3d0
> > >
> > > Also tried:
> > >     zpool create mypool c3d0
> > >     zpool set -o cachefile=/a mypool
> > >
> > > I can clearly see /a/etc/zfs/zpool.cache contains information for
> > > rpool, but it does not get any information about mypool. I would
> > > expect this file to contain some reference to mypool. So I tried:
> > >     zpool set -o cachefile=/a/etc/zfs/zpool.cache
> > >
> > > Which fails.
> > >
> > > Any advice would be great.
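A side note on the commands tried in the quoted mail: zpool set takes property=value followed by the pool name, with no -o flag, which is likely one reason the last command failed (and cachefile must name a file, not a directory like /a). A traced sketch of both variants, using an echo-only helper of my own so it can be checked off-box:

```shell
#!/bin/sh
# run() echoes instead of executing; swap its body for "$@" on a real client.
run() {
    echo "+ $*"
}

# Single-command variant: altroot plus cachefile at creation time.
run zpool create -R /a -o cachefile=/a/etc/zfs/zpool.cache mypool c3d0

# After creation: property=value and the pool name, no -o flag.
run zpool set cachefile=/a/etc/zfs/zpool.cache mypool
```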
> > >
> > > cheers
> > >
> > > Matt
> > > _______________________________________________
> > > zfs-discuss mailing list
> > > zfs-discuss@opensolaris.org
> > > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--

+============================================================+
|                                                            |
| Климов Евгений,                                 Jim Klimov |
| технический директор                                   CTO |
| ЗАО "ЦОС и ВТ"                                  JSC COS&HT |
|                                                            |
| +7-903-7705859 (cellular)          mailto:jimkli...@cos.ru |
| CC:ad...@cos.ru,jimkli...@gmail.com                        |
+============================================================+
| ()  ascii ribbon campaign - against html mail              |
| /\                        - against microsoft attachments  |
+============================================================+
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss