Hi Cindy,

Wouldn't

touch /reconfigure
mv /etc/path_to_inst* /var/tmp/

regenerate all device information?
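
(plus a reboot, of course; a sketch of the full sequence, assuming a
reconfiguration boot rebuilds path_to_inst when the file is missing:

# touch /reconfigure
# mv /etc/path_to_inst* /var/tmp/
# init 6

An alternative to touching /reconfigure is ok boot -r from the OBP.)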

AFAIK ZFS doesn't care about the device names; it scans for them.
It would only affect things like vfstab.
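
That matches what I saw; on the new host ZFS finds the pool by
scanning the devices, along these lines (a sketch, with the usual
root pool name):

# zpool import
(with no arguments this scans /dev/dsk and lists importable pools)
# zpool import -f rpool
(-f because the pool was last in use on the old host)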

I did a restore from an E2900 to a V890 and it seemed to work.

I created the pool and did a zfs receive.
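
Roughly this, from memory (disk, path and BE names are placeholders):

# zpool create -f rpool c1t0d0s0
# zfs receive -Fdv rpool < /net/backuphost/dumps/rpool.zfs
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
# zpool set bootfs=rpool/ROOT/zfsBE rpool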

I would like to be able to take a zfs send of a minimal build,
install it into an ABE (alternate boot environment), and activate it.
I tried that in test.
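
What I tried looks roughly like this (a sketch; the BE and stream
names are made up, and setting bootfs directly stands in for
activating the BE):

# zfs receive -u rpool/ROOT/minimalBE < /var/tmp/minimal.zfs
# zfs set canmount=noauto rpool/ROOT/minimalBE
# zfs set mountpoint=/ rpool/ROOT/minimalBE
# zpool set bootfs=rpool/ROOT/minimalBE rpool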

It seems to work, but I'm just wondering what I may have missed.

I saw someone else on the list has done this and was going to write a blog post.

It seems like a good way to get a minimal install on a server with
reduced downtime.

Now if I just knew how to run the installer into an ABE without there
already being an OS on the system, that would be cool too.

Thanks

Peter

2009/9/24 Cindy Swearingen <cindy.swearin...@sun.com>:
> Hi Peter,
>
> I can't provide it because I don't know what it is.
>
> Even if we could provide a list of items, tweaking
> the device information if the systems are not identical
> would be too difficult.
>
> cs
>
> On 09/24/09 12:04, Peter Pickford wrote:
>>
>> Hi Cindy,
>>
>> Could you provide a list of system specific info stored in the root pool?
>>
>> Thanks
>>
>> Peter
>>
>> 2009/9/24 Cindy Swearingen <cindy.swearin...@sun.com>:
>>>
>>> Hi Karl,
>>>
>>> Manually cloning the root pool is difficult. We have a root pool recovery
>>> procedure that you might be able to apply as long as the
>>> systems are identical. I would not attempt this with LiveUpgrade
>>> and manual tweaking.
>>>
>>>
>>> http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Complete_Solaris_ZFS_Root_Pool_Recovery
>>>
>>> The problem is that the amount of system-specific info stored in the root
>>> pool and any kind of device differences might be insurmountable.
>>>
>>> Solaris 10 ZFS/flash archive support is available with patches but not
>>> for the Nevada release.
>>>
>>> The ZFS team is working on a split-mirrored-pool feature and that might
>>> be an option for future root pool cloning.
>>>
>>> If you're still interested in a manual process, see the steps below
>>> attempted by another community member who moved his root pool to a
>>> larger disk on the same system.
>>>
>>> This is probably more than you wanted to know...
>>>
>>> Cindy
>>>
>>>
>>>
>>> # zpool create -f altrpool c1t1d0s0
>>> # zpool set listsnapshots=on rpool
>>> # SNAPNAME=`date +%Y%m%d`
>>> # zfs snapshot -r rpool@$SNAPNAME
>>> # zfs list -t snapshot
>>> # zfs send -R rpool@$SNAPNAME | zfs recv -vFd altrpool
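>>> (recv -d strips the pool name from the sent dataset names, so
>>> everything lands under altrpool with the same layout it had in rpool)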
>>> for SPARC do
>>> # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
>>> for x86 do
>>> # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
>>> Set the bootfs property on the root pool BE.
>>> # zpool set bootfs=altrpool/ROOT/zfsBE altrpool
>>> # zpool export altrpool
>>> # init 5
>>> Remove the source disk (c1t0d0s0) and move the target disk (c1t1d0s0) to slot 0.
>>> Insert the Solaris 10 DVD.
>>> ok boot cdrom -s
>>> # zpool import altrpool rpool
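>>> (importing altrpool under the name rpool renames the pool, so the
>>> system comes back up with a pool called rpool again)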
>>> # init 0
>>> ok boot disk1
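>>> (disk1 is an OBP devalias; use whichever alias matches the slot
>>> the target disk ended up in)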
>>>
>>> On 09/24/09 10:06, Karl Rossing wrote:
>>>>
>>>> I would like to clone the configuration on a v210 with snv_115.
>>>>
>>>> The current pool looks like this:
>>>>
>>>> -bash-3.2$ /usr/sbin/zpool status
>>>>  pool: rpool
>>>>  state: ONLINE
>>>>  scrub: none requested
>>>> config:
>>>>
>>>>       NAME          STATE     READ WRITE CKSUM
>>>>       rpool         ONLINE       0     0     0
>>>>         mirror      ONLINE       0     0     0
>>>>           c1t0d0s0  ONLINE       0     0     0
>>>>           c1t1d0s0  ONLINE       0     0     0
>>>>
>>>> errors: No known data errors
>>>>
>>>> After I run zpool detach rpool c1t1d0s0, how can I remount c1t1d0s0 to
>>>> /tmp/a so that I can make the changes I need prior to removing the
>>>> drive and putting it into the new v210?
>>>>
>>>> I suppose I could lucreate -n new_v210, lumount new_v210, edit what I
>>>> need to, luumount new_v210, luactivate new_v210, zpool detach rpool
>>>> c1t1d0s0, and then luactivate the original boot environment.
>>>
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
