On 1/14/11 4:14 AM, Matt Keenan wrote:
Definitely a different take on the referencing, and it does solve the problem of having to specify target information in two locations.

Going down this route would mean major surgery on the existing schema as delivered in Solaris Express; is this something we want to do? I know we are going to make changes either way, but I'm just wondering whether we want to minimize them.

One comment further down.
As an example, here's my rpool again, with my suggestions.
<target>
  <disk>
    <disk_name name="c3t0d0" name_type="ctd"/>
    <partition action="preserve" name="1" part_type="191">
      <size val="585906615" start_sector="16065"/>
      <slice name="0" action="preserve" force="false" zpool="rpool">
        <size val="585842355" start_sector="16065"/>
      </slice>
    </partition>
  </disk>
  <logical>
    <zpool name="rpool" action="preserve" is_root="false" redundancy="none"/>
    <filesystem name="/rpool/ROOT/sol155" action="preserve"/>
    <filesystem name="/rpool/ROOT/sol156" action="preserve"/>
  </logical>
</target>



Should is_root for "rpool" not be true?

It should be. That's a bug in the current TD implementation that I'll squash. The default "is_root" in the schema, however, is "false".
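For clarity, with that bug fixed, the pool element in the example above would presumably read:

  <zpool name="rpool" action="preserve" is_root="true" redundancy="none"/>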


To me, this is a cleaner approach, both from the perspective of manipulating the XML/DOC objects within CUD and for a user writing or editing a manifest for AI to use.

Here's an example of creating more than one zpool across more than one disk (I blanked out all the size values since I didn't want to calculate them by hand :) ):

<target>
  <disk>
    <disk_name name="c3t0d0" name_type="ctd"/>
    <partition action="create" name="1" part_type="191">
      <size val="-" start_sector="-"/>
      <slice name="0" action="create" force="false" zpool="rpool">
        <size val="-" start_sector="-"/>
      </slice>
      <slice name="1" action="create" force="false" zpool="tank">
        <size val="-" start_sector="-"/>
      </slice>
    </partition>
  </disk>
  <disk>
    <disk_name name="c3t1d0" name_type="ctd" zpool="tank"/>
  </disk>
  <disk>
    <disk_name name="c3t2d0" name_type="ctd"/>
    <partition action="create" name="1" part_type="-" zpool="code">
      <size val="-" start_sector="-"/>
    </partition>
    <partition action="create" name="2" part_type="-" zpool="code">
      <size val="-" start_sector="-"/>
    </partition>
  </disk>

  <logical>
    <zpool name="rpool" action="create" is_root="true" redundancy="none"/>
    <zpool name="tank" action="create" is_root="false" redundancy="none"/>
    <zpool name="code" action="create" is_root="false" redundancy="mirror"/>
    <filesystem name="rpool/ROOT/sol155" action="create"/>
    <filesystem name="rpool/ROOT/sol156" action="create"/>
    <filesystem name="tank/slim_source" action="create"/>
    <filesystem name="code/my_gate" action="create"/>
  </logical>
</target>


In the above example you specify slice 0 of disk c3t0d0 as being in zpool "rpool", yet you specify slice 1 of the same disk as being part of zpool "tank". Is this a typo or intentional? I'm not even sure this is possible.

I think that if you just had zpools named "rpool" and "tank", it's certainly possible. I don't know whether the root pool, commonly called "rpool", requires the whole boot disk, though; I didn't think it did. What I posted above was intentional, to show how the slices on a single disk could be assigned to different pools.


If the zpool attribute is omitted from any one of the targets, would it default to "rpool" (or whatever name is given to the root pool, as specified in the <logical> section)?

It would default to whatever the schema has.  Right now it has this:

<!ATTLIST zpool action (create|delete|preserve|use_existing) "create">
<!ATTLIST zpool name CDATA #REQUIRED>
<!ATTLIST zpool is_root (true|false) "false">
<!ATTLIST zpool mountpoint CDATA #IMPLIED>


We would have to expand this to cover things like vdev redundancy, pool options and dataset options.
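As a rough sketch of what that expansion might look like (the "none" and "mirror" redundancy values come from the examples above; the raidz values and the two *_options attribute names are my own placeholders, nothing settled):

<!ATTLIST zpool action (create|delete|preserve|use_existing) "create">
<!ATTLIST zpool name CDATA #REQUIRED>
<!ATTLIST zpool is_root (true|false) "false">
<!ATTLIST zpool mountpoint CDATA #IMPLIED>
<!-- sketch: "none" and "mirror" appear in the examples above; raidz* are guesses -->
<!ATTLIST zpool redundancy (none|mirror|raidz|raidz2|raidz3) "none">
<!-- sketch: placeholder names for pool / dataset options -->
<!ATTLIST zpool pool_options CDATA #IMPLIED>
<!ATTLIST zpool dataset_options CDATA #IMPLIED>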


What if the zpool attribute on a <disk> differs from a zpool attribute on a child <partition> or <slice>? I'm guessing this should not be allowed, e.g. if zpool is specified at the disk level, it would override any zpool attributes specified on its children.

That will not be allowed and will raise an exception / set an error in the errsvc. In my mind, the precedence is Disk > Partition > Slice for determining vdevs (if slices are created inside a partition instead of as children of the disk).
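As a concrete (hypothetical) illustration, reusing the names from the examples above, a manifest like this would be rejected, since the disk-level zpool ("rpool") conflicts with the one on the child slice ("tank"):

<disk>
  <disk_name name="c3t0d0" name_type="ctd" zpool="rpool"/>
  <!-- conflicts with the disk-level zpool above; raises an exception / errsvc error -->
  <slice name="0" action="create" force="false" zpool="tank">
    <size val="-" start_sector="-"/>
  </slice>
</disk>

Under the rule above, validation fails outright rather than silently letting the disk-level value win.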


Under <logical>, would it be a requirement to always have at least one <zpool> where is_root="true"?

Are we still supporting UFS root? If so, then no, it's not a requirement. If not, then yes. The clients (AI / GUI installer / etc.) could also add some logic behind this to ensure at least one zpool has is_root="true".


All zpool attributes specified in <target> "must" exist as <zpool>s in <logical>, and only once?

Sounds about right.


I can definitely see the merits of this proposal.

Thanks for the review!

-Drew
