Hi Kyle,

> While booted from the network like today? or writing to an alternate
> boot environment while the machine is up and running like today?

Well... what I have personally considered is that you should be able
to do this while booted from the network, or while booted from the
installed system. So... you could fire up the installer with your
profile ready, and it will find it and do the right thing. Same with
the network boot.
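(For context, the sort of profile I have in mind is along the lines of
today's jumpstart profiles, something like the sketch below. These are
the existing jumpstart keywords, and the device names are just
illustrative; what Caiman will actually accept is still being worked
out.)

    # minimal jumpstart profile the installer would pick up
    install_type   initial_install
    system_type    standalone
    partitioning   explicit
    filesys        c0t0d0s0 free /
    filesys        c0t0d0s1 1024 swap
    cluster        SUNWCuser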
Haven't thought through the implementation details, so there is a bit
of hand waving going on here... but we do want to make jumpstart more
fully capable and allow users to do things like partial updates, or
software updates only, without having to upgrade the whole system.

>>> 3. I'd like to be able to use the SVM 'mirror' keyword in the BE
>>> definitions.
>>
>> Well.. actually, this brings up another point that needs to be made
>> clear. With the Caiman installer we are proposing to support only
>> ZFS as the root filesystem, not UFS/SVM. The reasons for this are
>> that ZFS offers us so many things that we cannot get with UFS/SVM.
>> The live upgrade process becomes much more manageable with ZFS. We
>> get the ZFS data and metadata consistency guarantees. We get
>> rollback (via a snapshot) in the event that adding patches to the
>> system has gone bad, even without live upgrade.
>>
> Believe me, I understand the wins from using ZFS. Using it for root
> probably has benefits I haven't even imagined yet. I am looking
> forward to playing with that, to be sure. But I think it might be
> short-sighted to expect all of Sun's customers to adopt ZFS on root
> in order to be able to use Solaris 11.
>
This is certainly an area of debate. I agree that it requires a leap
for our customers, and it requires us to help our customers with a
transition from UFS to ZFS for their root filesystems. There are a
couple of compromises we have considered:

1. Initial install will only support ZFS root.
2. Upgrade will allow for creation of a ZFS root pool. The migration
   to ZFS root will happen with this, if the user chooses it.
3. If the user chooses not to move to ZFS, we do a live upgrade only
   of the UFS root, and they don't get the additional capabilities we
   will offer as part of the ZFS root support.

> You say that this new installer will only support ZFS for root.
> That makes me ask: will the 'old' installer still be available?

Well... for a while at least. Part of our plan is to provide the
Caiman installer with the LiveDVD, which is currently an OpenSolaris
project. We won't be integrating Caiman into Solaris for some time,
so in essence the old installer will still be available. However,
the plan is to replace the current installer with Caiman in the
Solaris 11 release. A lot of this depends, of course, on what
Solaris 11 becomes and when it gets released.

>> Using ZFS as the root fs, which implies a ZFS root pool, really
>> helps us reduce the complexity of disk partitioning for most
>> installs. Within a ZFS root pool all upgrades are live upgrades,
>> since we can take a snapshot of the existing operating environment,
>> clone it, promote the clone, and do the upgrade, all within the
>> same pool. The obvious restriction is that there has to be enough
>> space. But... once the root pool is set up and ready to go, a user
>> doesn't need to worry about modifying the underlying partitions to
>> achieve live upgrade. This is partly why we have decided that
>> in-place upgrades won't be supported. ZFS makes it very
>> straightforward to do a live upgrade.
>>
> I understand that this form of live upgrade will be easier with
> ZFS. Managing the disk space will be simpler, and it may turn out
> (if you can avoid writing new files in the snapshot that are really
> the same as the old ones) that disk space can be conserved also. It
> will be nice to make a 'root pool' or 'boot pool' and not need to
> put up 'hard' partitions for /, /usr, /var, and swap, and to be
> able to move the limits on them at any time.
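(To make that snapshot/clone/promote sequence concrete, here is
roughly what it looks like with today's ZFS commands. The pool and
dataset names, rpool/ROOT/be1 and be2, are purely illustrative; the
layout Caiman will actually use isn't settled.)

    # snapshot the running BE and clone it into a new BE
    zfs snapshot rpool/ROOT/be1@before_upgrade
    zfs clone rpool/ROOT/be1@before_upgrade rpool/ROOT/be2

    # ... upgrade the bits inside the be2 clone ...

    # make the upgraded clone independent of its origin snapshot
    zfs promote rpool/ROOT/be2

The promote step is what would let the old BE be destroyed later
without taking the upgraded one with it.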
> But there are some benefits to the current Live Upgrade that I'd
> like to see preserved. Having multiple full-blown boot environments
> helps with many things, not just flipping back and forth during
> upgrades. Right now I keep an S10 BE and an NV BE (or two) on my
> disk, for many reasons.

This will still be available. We are not removing full-blown boot
environments as part of our live upgrade, even using ZFS.

> In the ZFS world updating a live BE won't require a separate BE to
> update, but I hope the notion of separate BEs doesn't disappear,
> for the dual-booting functionality. Without truly separate areas of
> the disk, I can't keep S10 and SNV easily (well, I can go back to
> switching in OBP on SPARC, but...). S10 won't understand ZFS.

Separate BEs will still be available. The biggest issue really
becomes any on-disk format changes that might occur with ZFS. Say
you have an older release of Solaris and a newer one, S10u5 and
Nevada, with both BEs in the same root pool, and you upgrade the
Nevada BE to a release that has ZFS on-disk format changes. If you
apply those changes, you will no longer be able to boot back to the
S10u5 release.

> In the next generation, when S11 and early builds of S12 both
> understand ZFS, I still don't really want them to be 'forked'
> branches of the same filesystem. I want to allocate disk space to
> each OS separately, and switch between them. I basically want each
> BE (disk area) to be its own ZFS pool that, for the most part, only
> it ever uses.

Multiple root pool support will be available. We need this as much
as anything, specifically because of possible on-disk format
changes.

> Within the space allocated to an OS, applying patches, installing
> packages, and even upgrading that OS to the next one, the features
> of ZFS are great. But I don't think I really want to share between
> OS's.
>
> So right now, the right way for me to dual boot S10 and NV and
> upgrade them requires really 3 BEs at a minimum... With ZFS, I
> should be able to upgrade a BE 'live', so I could get away with 2.
> But please don't make me drop to one shared disk area.

>>> Maybe instead of creating partitions when defining a BE, I could
>>> create all my partitions with the 'filesystem' keyword (using
>>> 'mirror' if I like) and then build the BE from the already
>>> defined filesystems. I currently create /, /lu, /var, and
>>> /lu/var. But maybe my filesystems could instead be created as
>>> "[BE1]/", "[BE2]/", "[BE1]/var", and "[BE2]/var", and the BEs
>>> wouldn't need to be defined separately?
>>
>> This would work in a ZFS root pool environment. We wouldn't be
>> creating partitions when defining a BE; it would be a ZFS
>> filesystem inside a root pool.
>
> As I said above, for the cases where I am really keeping multiple
> versions of Solaris on the disk, I would still want to keep
> multiple 'root pools', and be able to use jumpstart both to create
> them initially and to select one to jumpstart to. I hope both the
> Caiman installer and the 'ZFS as root' projects will keep that in
> mind.

There will be support for multiple root pools. And jumpstart will be
modified to create, populate, and later select one to upgrade; right
now the jumpstart support for live upgrade is very minimal.

thanks,
sarah
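PS: for what it's worth, here is a sketch of what a multiple root
pool layout might look like with today's commands. The pool, dataset,
and device names are purely illustrative, and none of this is
committed syntax:

    # one pool per OS instance, on separate disks/slices
    zpool create s10pool c0t0d0s0
    zpool create nvpool c0t1d0s0

    # each pool carries its own boot environment(s)
    zfs create s10pool/ROOT
    zfs create s10pool/ROOT/s10u5
    zfs create nvpool/ROOT
    zfs create nvpool/ROOT/snv_70

Each OS would then only ever touch datasets in its own pool, so a ZFS
on-disk format change in one pool would not affect booting from the
other.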
