On 05/04/11 06:49 AM, Darren Kenny wrote:
Hi Dave,

Thanks for the review.

On 27/04/2011 19:18, Dave Miner wrote:
On 04/22/11 01:25 PM, Darren Kenny wrote:
Hi,

I'd like to ask for help in reviewing the code for the moving of the Auto
Installer client to the CUD architecture.

Some light reading for the weekend ;)

As Drew pointed out, we're adding less code than we're removing - that has to be
positive, doesn't it??

I'm going to schedule a code review for 8am PDT/4pm IST on Tuesday 26th; I'll
send out details closer to the date.

The webrev is at:

        http://cr.opensolaris.org/~dkenny/cud_ai-to-slim/



Overall looks quite good.  Did not review tests, but let me know if you
need reviewers for those.

Dave

auto-install/__init__.py: start date on copyright should have stayed at 2008

Fixed.

ai_get_manifest.py:

73,828,829: should really be /system/volatile now

Fixed.


ai_manifest.xml:

22: start date on copyright should have stayed at 2008

Fixed.


114: The old manifest was showing the combination of these two
properties applied together.  The change doesn't seem the same.

Does this read better?

         <!-- G3 -->
         <!--
         <disk_keyword key="boot_disk"/>
         -->
         <!--
           On X86 machines, Slices exist within partitions only
         -->
         <!--
           Uncomment this to force AI to find an existing Solaris
           partition.
         -->
         <!--
         <partition action="use_existing_solaris2">
           <slice name="0">
             <size val="20480mb"/>
           </slice>
           <slice name="4">
             <size val="20480mb"/>
           </slice>
         </partition>

         or, use the following to create a Solaris partition
         -->
         <partition name="1" part_type="191">
           <size start_sector="200" val="40960mb"/>
           <slice name="0">
             <size val="20480mb"/>
           </slice>
           <slice name="4">
             <size val="20480mb"/>
           </slice>
         </partition>
         <!-- Define some other partitions to create too -->
         <partition name="2" part_type="99">
           <size start_sector="200" val="20480mb"/>
         </partition>
         <partition name="4" part_type="99">
           <size start_sector="2000" val="20480mb"/>
         </partition>
         <!--
         On SPARC systems, only specify the Slice layout.
         -->
         <!--
             <slice name="0">
               <size val="20480mb"/>
             </slice>
             <slice name="4">
               <size val="20480mb"/>
             </slice>
         -->


I think you got the wrong stanza here; I was commenting on the disk_prop example of vendor and size that is at 112 in the old manifest, 112-116 in your new one.


ai_sd.py, 161: s.var/run.system/volatile.

Done.


auto_install.py

150 and others [esp. determine_profile_type()]: Ethan's zones spec says
that -m is used for manifests and -p for SCI profiles.  The usage here is
-p for manifests (the prior, and confusing, usage/terminology).  Perhaps
you can fix this now so that the QE tests converge sooner rather than later
and the terminology doesn't cause developers to create bugs based on a
misunderstanding of what "profile" means?

We can change to using -m easily enough.

Do you want us to add the -p option here? Right now SCI profiles are picked up
from the location that manifest-locator downloads to.

The latest, hopefully final agreement with zones is to use -c for profiles, -m for manifests. You and Ethan decide whether putting -c (formerly -p) in before him is right or not.
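
Just to illustrate the shape of it, something along these lines (a sketch
only; the option destinations and help text are assumptions, not the actual
auto_install.py parser):

    # Sketch only: the real auto_install.py option handling differs.
    from optparse import OptionParser

    parser = OptionParser(usage="%prog [options]")
    # -m replaces the old, confusing use of -p for AI manifests.
    parser.add_option("-m", dest="manifest",
                      help="path to the AI manifest to install from")
    # -c (formerly -p) names a system configuration profile, per the
    # zones agreement.
    parser.add_option("-c", dest="profile",
                      help="path to a system configuration profile")
    options, args = parser.parse_args()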




272: It seems surprising that we have to do this special handling, as
well as the block at 395 in perform_autoinstall().  Would it make sense
to modify the derivation and parser checkpoints to deal with the fact
they may have nothing to do and let them just run in the normal
sequence?  It would seem to simplify a lot of perform_autoinstall(),
though I realize you need to do something after parsing in order to
process the proxy and auto_reboot settings, but even that could
seemingly be a checkpoint.

This certainly appears doable; we'll look into refactoring things.
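
Roughly what we have in mind, as a sketch (the class shape below is an
assumption about the checkpoint interface, not the real engine API):

    # Sketch: a parser checkpoint that no-ops when it has nothing to do,
    # so it can stay in the normal checkpoint sequence.
    class ManifestParserCheckpoint(object):
        def __init__(self, name, manifest=None):
            self.name = name
            self.manifest = manifest

        def execute(self):
            if self.manifest is None:
                # No manifest to parse (e.g. derivation was skipped),
                # so simply return and let the engine move on.
                return
            self._parse(self.manifest)

        def _parse(self, manifest):
            # Real parsing would validate the XML and populate the DOC.
            pass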


315:  If we're always going to send debug-level to the service log as
implied at 297, is there really a reason to have -v?

If you're happy for us to always send debug information to the log, then we're
happy to remove the -v option.

If not, then we could easily toggle the file handler's level between INFO
and DEBUG based on the -v option.
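
For example, something like this (a sketch; the logger name and function
are assumptions, not the existing logging setup):

    import logging

    def add_install_log(log_path, verbose):
        # The logger passes everything through; the file handler filters
        # at DEBUG when -v is given, INFO otherwise.
        logger = logging.getLogger("InstallationLogger")
        logger.setLevel(logging.DEBUG)
        handler = logging.FileHandler(log_path)
        handler.setLevel(logging.DEBUG if verbose else logging.INFO)
        logger.addHandler(handler)
        return logger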


Kind of on the fence here. To some extent it would be nice to always have a very detailed log, so we wouldn't depend on reproducing a run to get details when needed. But I'm not sure it's so great to always have an enormous log kicking around, either.


437: I'm unclear how "creating Logical" is the right context in this
error message, or what that even means.

Yep, I think that's a cut-n-paste error :-/


480: It would be nice if we could set the dataset for the engine to take
snapshots of during the install process.  This would seemingly be
helpful in debugging issues with all of the post-TI checkpoints.

Do you mean to enable resuming of checkpoints?


Not really.  That's certainly not a requirement here.

Or do you mean just to be able to access the snapshotted DOC at the time
things broke?


Snapshotted DOC as well as the install dataset.

The engine does tend to snapshot to temporary files if there is no ZFS
dataset, but it also tends to clean up after itself...


We definitely would need to clean up at the end of a successful install (could provide a way to disable this for debugging, perhaps).
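
Perhaps something as simple as gating the cleanup on a debug flag, e.g.
(sketch only; the function and flag names are made up):

    import shutil

    def cleanup_install_data(snapshot_dir, keep_for_debug=False):
        # Skip the cleanup when debugging, so the snapshotted DOC and
        # temporary files can be examined after a failed run.
        if keep_for_debug:
            return
        shutil.rmtree(snapshot_dir, ignore_errors=True)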


567: what's the use case for -d?  Do we really need to bother?

I'd prefer to fail if there is no manifest specified; similarly, I think the
-d option really isn't all that useful on its own for a normal install.

I would certainly prefer to remove it.


If QE isn't depending on it, then I think we should.

Dave
_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss