sjelinek wrote:
> Hi Peter,
>
> Thank you for taking the time to write up the issues you have
> encountered. For those issues that don't have bugs, please file some.
> I can tell you we are working on AI robustness and AI user experience.
> There is a project going on that should help address usability
> issues such as the ones you note below. However, filing bugs for
> specific issues will help us track them.
>
> We are aware of the robustness and usability issues that we have in AI 
> today. We are working on these.
>
> More comments inline...
>
>
>> OK, so the OBP patches for the SunBlade 1500 have been released,
>> and I've spent a little time testing AI.
>>
>> I'm using straight 0906 with the repo ISO available locally.
>>
>> Some notes:
>>
>> Boot is usually fairly quick: it typically takes about 30s to get the
>> 175M or so image across. Because it was hard to get AI working, I had
>> to do this a lot of times, and a couple of times it took more like
>> 50 minutes.
>>
>> The first few times I tried I got the error:
>>
>> <AI Aug  6 08:55:55> Checking any disks for minimum recommended size of 12646 MB
>> <AI Aug  6 08:55:55> Disk c6t0d0 size listed as 76319 MB
>> <AI Aug  6 08:55:55> Default disk selected is c6t0d0
>> <AI Aug  6 08:55:55> Disk name selected for installation is c6t0d0
>>
>> <OM Aug  6 08:55:55> Set zfs root pool device
>> <OM Aug  6 08:55:55> creating zpool
>> <TIZFM_E Aug  6 08:55:55> zfs: Couldn't create ZFS pool
>> <OM Aug  6 08:55:55> Could not create ZFS root pool target
>> <OM Aug  6 08:55:55> TI process failed
>> <OM Aug  6 08:55:55> Target instantiation failed exit_val=-1
>> <AI Aug  6 08:56:05> om_perform_install failed with error 208
>> <AI Aug  6 08:56:05> Auto install failed
>>
>> Right. It finds the disk OK, but gives up.
>>
>> I finally worked out that this is a known bug; I think it's a case
>> of 6191.
>>
>> So, how to get round this? Well, this led me down a couple more
>> rat-holes.
>>
>> One trick I've used in the past is to simply stick an EFI label on
>> a disk, which is a good way of trashing the existing contents. That
>> doesn't work:
>>
>> <OM Aug  6 12:40:37> Ignoring c6t0d0 because of bad Geometry
>> <OM Aug  6 12:40:37> Ignoring c6t1d0 because of bad Geometry
>> <AI Aug  6 12:40:45> No Disks found on the target system
>> <AI Aug  6 12:40:45> Target validation failed
>> <AI Aug  6 12:40:45> ai target device not found
>> <AI Aug  6 12:40:45> Auto install failed
>>
>> OK, so it looks as though AI really doesn't like to be given
>> EFI-labeled disks. I couldn't easily spot a bug for this.

The installer libraries across the board don't have support
for EFI-labeled disks.  I thought a bug had already been filed
for this somewhere, since it's an issue we are aware of,
though I could be mistaken.
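
Until that's fixed, the practical workaround is what you ended up
doing: put an SMI (VTOC) label back on the disk before pointing AI
at it.  For reference, a rough sketch of doing that from a shell
(expert mode is needed to get the label-type prompt; the exact
prompts may vary a little by build):

  # format -e -d c6t0d0
  format> label
  [0] SMI Label
  [1] EFI Label
  Specify Label type[1]: 0
  ... accept the remaining prompts ...
  format> quit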

>>
>> So, put an SMI label back on. Nope:
>>
>> <OM_E Aug  6 13:03:45> No install slice exists.
>> <OM Aug  6 13:03:45> Couldn't get device target info.
>> <OM Aug  6 13:03:45> TI process failed
>> <OM Aug  6 13:03:45> Target instantiation failed exit_val=-1
>> <AI Aug  6 13:03:55> om_perform_install failed with error 208
>> <AI Aug  6 13:03:55> Auto install failed
>>
>> The partition table at this point is:
>>
>> Part      Tag    Flag     Cylinders         Size            Blocks
>>   0       root    wm       0                0          (0/0/0)             0
>>   1       swap    wu       0                0          (0/0/0)             0
>>   2     backup    wu       0 - 38306       74.53GB     (38307/0/0) 156292560
>>   3 unassigned    wm       0                0          (0/0/0)             0
>>   4 unassigned    wm       0                0          (0/0/0)             0
>>   5 unassigned    wm       0                0          (0/0/0)             0
>>   6        usr    wm       0 - 38306       74.53GB     (38307/0/0) 156292560
>>   7 unassigned    wm       0                0          (0/0/0)             0
>>
>> (which looks a bit odd - I'm used to the default partition layout
>> having a couple of 128M or so silly little slices for 0 and 1).
>>
>> So I worked out that what I had to do here was delete slice 6 (so
>> there's only the overlap slice left). And then I'm home free.
>>   
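
For anyone else who hits this before the target-selection fixes land:
the slice deletion doesn't need the interactive format menus.  A rough
sketch from a shell on the client, assuming the disk is c6t0d0 and s2
is the overlap slice:

  # Zero out slice 6; fmthard -d takes part:tag:flag:start:size,
  # so a tag/flag/start/size of 0 effectively deletes the slice.
  fmthard -d 6:0:0:0:0 /dev/rdsk/c6t0d0s2

  # Check the resulting VTOC.
  prtvtoc /dev/rdsk/c6t0d0s2
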
>
> You can specify, in the AI manifest, that a slice should be deleted, so
> you don't have to do this manually. Certainly, one of the issues in AI
> today is our target-device selection algorithm and the fact that we
> don't make it easy, or obvious, to see why we can't use a specific
> device.

In addition to this, we are also working on some changes for
specifying the target disk in AI.  See bug 7057 for some details.
Also see section 5.1 of
http://www.opensolaris.org/os/project/caiman/auto_install/ai_design/ai_client_func_spec.pdf
for the functional specification of the changes we're proposing
to make for disk selection.
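
As a rough illustration only (the element names here are from memory,
so please check them against the sample manifest on the AI image and
the spec above rather than taking this verbatim), the relevant pieces
of an AI manifest would look something like:

  <ai_target_device>
    <!-- pin the install to a specific disk instead of letting
         AI pick one -->
    <target_device_name>c6t0d0</target_device_name>
  </ai_target_device>

  <ai_device_vtoc_slices>
    <!-- illustrative: delete slice 6 so only the overlap slice
         remains -->
    <slice>
      <slice_action>delete</slice_action>
      <slice_number>6</slice_number>
    </slice>
  </ai_device_vtoc_slices>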


>> Almost. Then I get:
>>
>> <OM Aug  6 13:22:13> Set zfs root pool device
>> <OM Aug  6 13:22:13> creating zpool
>> <OM Aug  6 13:22:14> /usr/sbin/zfs get -Hp -o value available rpool
>> <OM Aug  6 13:22:15> Creating swap and dump on ZFS volumes
>> <OM Aug  6 13:22:20> TI process completed successfully
>> <OM Aug  6 13:22:20> Transfer process initiated
>> <OM Aug  6 13:22:20> IPS transfer mechanism selected
>> <OM Aug  6 13:22:20> IPS transfer phase initiated
>> <TRANSFER_MOD_E Aug  6 13:22:24> Unable to initialize the pkg image 
>> area at /a
>> <TRANSFER_MOD Aug  6 13:22:24> TValueError or TABort
>> <OM Aug  6 13:22:24> IPS initialization phase 1 failed
>> <OM Aug  6 13:22:24> Transfer failed with error -1
>> <AI Aug  6 13:22:33> om_perform_install failed with error 114
>> <AI Aug  6 13:22:33> Auto install failed
>>
>> AI could be somewhat more helpful here. What's actually happened
>> is that I haven't configured any nameservice information for this client
>> on my DHCP server (jumpstart doesn't need it), so it can't look up
>> the hostname of my repo server. Perhaps it ought to check that?

This particular issue (failure due to no name service resolution)
has been reported a few times.  We are working on the error
reporting side of it via bug 6651.
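
If you want name resolution on the client to actually work, the DNS
options can be handed out through the DHCP macro the client uses.  A
rough sketch on a stock Solaris DHCP server (the macro name, domain
and server address below are just placeholders for your setup):

  # Add the DNS domain and server list to the client's macro.
  dhtadm -M -m myclientmacro -e 'DNSdmain=example.com'
  dhtadm -M -m myclientmacro -e 'DNSserv=192.168.1.10'

  # Signal the DHCP daemon to re-read dhcptab.
  dhtadm -g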

Regarding the requirement for nameservice information, one way
to potentially work around this is to specify the repo's IP address,
rather than its hostname, in the repo URL in your manifest.
(I actually haven't tried this, but I don't see why it wouldn't
work.)
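
In the default manifest that would mean changing the repo authority
URL to use the address, roughly along these lines (again, check the
exact element names against your own manifest rather than trusting
this; the IP below is just a placeholder):

  <ai_pkg_repo_default_authority>
    <!-- use the repo server's IP address instead of its hostname -->
    <main url="http://192.168.1.5:80" authname="opensolaris.org"/>
  </ai_pkg_repo_default_authority>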


thanks,
-ethan

