Hi Peter,

Thank you for taking the time to write up the issues you have encountered. For those issues that don't have bugs filed yet, please file some. I can tell you we are working on AI robustness and AI user experience; there is a project under way that should help address the usability issues like the ones you describe below. Even so, filing bugs for the specific problems you hit will help us track them.
We are aware of the robustness and usability issues that we have in AI today, and we are working on them. More comments inline...

> OK, so the OBP patches for the SunBlade 1500 have been released, and
> I've spent a little time testing AI.
>
> I'm using straight 0906 with the repo ISO available locally.
>
> Some notes:
>
> Boot is usually fairly quick. It's typically 30s to get the 175M or
> so image across. Due to it being hard to make AI work, I had to do
> this a lot of times, and a couple of times it took more like 50
> minutes.
>
> The first few times I tried I got the error:
>
> <AI Aug 6 08:55:55> Checking any disks for minimum recommended size of 12646 MB
> <AI Aug 6 08:55:55> Disk c6t0d0 size listed as 76319 MB
> <AI Aug 6 08:55:55> Default disk selected is c6t0d0
> <AI Aug 6 08:55:55> Disk name selected for installation is c6t0d0
>
> <OM Aug 6 08:55:55> Set zfs root pool device
> <OM Aug 6 08:55:55> creating zpool
> <TIZFM_E Aug 6 08:55:55> zfs: Couldn't create ZFS pool
> <OM Aug 6 08:55:55> Could not create ZFS root pool target
> <OM Aug 6 08:55:55> TI process failed
> <OM Aug 6 08:55:55> Target instantiation failed exit_val=-1
> <AI Aug 6 08:56:05> om_perform_install failed with error 208
> <AI Aug 6 08:56:05> Auto install failed
>
> Right. It finds the disk OK, but gives up.
>
> I finally worked out that this is a known bug; I think it's a case
> of 6191.
>
> So, how to get round this? Well, this led me down a couple more
> rat-holes.
>
> One trick I've used in the past is to simply stick an EFI label on a
> disk, which is a good way of trashing the existing contents. That
> doesn't work:
>
> <OM Aug 6 12:40:37> Ignoring c6t0d0 because of bad Geometry
> <OM Aug 6 12:40:37> Ignoring c6t1d0 because of bad Geometry
> <AI Aug 6 12:40:45> No Disks found on the target system
> <AI Aug 6 12:40:45> Target validation failed
> <AI Aug 6 12:40:45> ai target device not found
> <AI Aug 6 12:40:45> Auto install failed
>
> OK, so it looks as though AI really doesn't like to be given
> EFI-labeled disks. I couldn't easily spot a bug for this.
>
> So, put an SMI label back on. Nope:
>
> <OM_E Aug 6 13:03:45> No install slice exists.
> <OM Aug 6 13:03:45> Couldn't get device target info.
> <OM Aug 6 13:03:45> TI process failed
> <OM Aug 6 13:03:45> Target instantiation failed exit_val=-1
> <AI Aug 6 13:03:55> om_perform_install failed with error 208
> <AI Aug 6 13:03:55> Auto install failed
>
> The partition table at this point is:
>
> Part      Tag    Flag     Cylinders        Size            Blocks
>   0       root    wm       0               0         (0/0/0)             0
>   1       swap    wu       0               0         (0/0/0)             0
>   2     backup    wu       0 - 38306      74.53GB    (38307/0/0) 156292560
>   3 unassigned    wm       0               0         (0/0/0)             0
>   4 unassigned    wm       0               0         (0/0/0)             0
>   5 unassigned    wm       0               0         (0/0/0)             0
>   6        usr    wm       0 - 38306      74.53GB    (38307/0/0) 156292560
>   7 unassigned    wm       0               0         (0/0/0)             0
>
> (which looks a bit odd - I'm used to the default partition layout
> having a couple of 128M or so silly little slices for 0 and 1).
>
> So I work out that what I have to do here is delete slice 6 (so
> there's only the overlap slice). And then I'm home free.

You can specify, in the AI manifest, that a slice should be deleted, so you don't have to do this manually. Certainly, one of the issues in AI today is our target-device selection algorithm and the fact that we don't make it easy, or obvious, to see why we can't use a specific device.
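For anyone who does need to do the cleanup by hand, here is a rough sketch of one way to do it from a shell on the client, assuming the disk is c6t0d0 as in the logs above (illustrative only - check the VTOC carefully before writing anything back):

    # dump the current slice table (VTOC) for the disk
    prtvtoc /dev/rdsk/c6t0d0s2 > /tmp/c6t0d0.vtoc

    # edit /tmp/c6t0d0.vtoc and remove the data line for slice 6,
    # leaving just the backup slice 2 covering the whole disk,
    # then write the edited table back
    fmthard -s /tmp/c6t0d0.vtoc /dev/rdsk/c6t0d0s2

    # or, as a single step, keep only the comment lines and slice 2
    prtvtoc /dev/rdsk/c6t0d0s2 | awk '/^\*/ || $1 == "2"' | \
        fmthard -s - /dev/rdsk/c6t0d0s2

That leaves the disk with only the overlap slice, which is the state you found AI was happy with.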
> Almost. Then I get:
>
> <OM Aug 6 13:22:13> Set zfs root pool device
> <OM Aug 6 13:22:13> creating zpool
> <OM Aug 6 13:22:14> /usr/sbin/zfs get -Hp -o value available rpool
> <OM Aug 6 13:22:15> Creating swap and dump on ZFS volumes
> <OM Aug 6 13:22:20> TI process completed successfully
> <OM Aug 6 13:22:20> Transfer process initiated
> <OM Aug 6 13:22:20> IPS transfer mechanism selected
> <OM Aug 6 13:22:20> IPS transfer phase initiated
> <TRANSFER_MOD_E Aug 6 13:22:24> Unable to initialize the pkg image area at /a
> <TRANSFER_MOD Aug 6 13:22:24> TValueError or TABort
> <OM Aug 6 13:22:24> IPS initialization phase 1 failed
> <OM Aug 6 13:22:24> Transfer failed with error -1
> <AI Aug 6 13:22:33> om_perform_install failed with error 114
> <AI Aug 6 13:22:33> Auto install failed
>
> AI could be somewhat more helpful here. What's actually happened is
> that I haven't configured any nameservice information for this
> client on my DHCP server (jumpstart doesn't need it), so it can't
> look up the hostname of my repo server. Perhaps it ought to check
> that?
>
> Once that was fixed I - finally - managed to get an install to
> proceed.
>
> It seems quite slow. Comparing it to an SXCE install, it takes
> somewhat longer to install quite a lot less software - I need to get
> some more accurate data.

It would be good to have concrete data about what is slower than SXCE. I have to tell you, my experience with AI compared to SXCE has actually been faster. A lot of it depends, I suppose, on the network connection you have to the repo you are using for IPS. We have been doing some performance measurements to try to isolate areas that we can speed up.

> A general comment is in order here: the amount of work and effort
> (the above is a severely abridged version of the story) I had to put
> in to make this work is frightening. I've spent about a day's hard
> effort just to get to the point where I can get a basic installation
> to work. I've not expended that much effort in my career making
> jumpstart actually work. And, having a need to install RHEL for
> something the other day, I was able to work out how to use my
> existing jumpstart server to do a hands-off custom install of RHEL
> onto a box in about 10 minutes flat.

You are right: we shouldn't make you spend a day setting up and getting an AI installation working. File bugs, keep testing, and keep pushing on this. We are aware of the gaps in AI and are working on them. (On the name-service issue, there is a quick sanity check below my signature.)

thanks,
sarah
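PS: a quick way to sanity-check the name-service setup from a shell on the booted AI client (the host name and address below are just placeholders):

    # is DNS in the hosts lookup path, and did DHCP hand out resolver info?
    grep '^hosts' /etc/nsswitch.conf
    cat /etc/resolv.conf

    # can the client actually resolve the repo server by name?
    getent hosts pkg.example.com

If the lookup fails and you are using the Solaris DHCP server, the DNS options can be added to the client's macro with dhtadm, along these lines (the macro name, domain, and server address are placeholders):

    dhtadm -M -m ai-client -e 'DNSdmain=example.com'
    dhtadm -M -m ai-client -e 'DNSserv=192.168.1.1'

Alternatively, using the repo server's IP address rather than its host name when you set up the install service should avoid the lookup altogether.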