Sarah Jelinek wrote:
> Hi Kyle,
>
>> As I said above, I just don't trust upgrades. Upgrading from one
>> build of NV to the next might be OK since it's a 'test' machine
>> anyway, and I'll do a clean install when it's done. But I can't
>> imagine taking some machine that had had 2.5.1 on it and upgrading it
>> to 2.6, then to 7, then to 8... 9... 10??? And possibly upgrading to
>> an update release in the middle.
>>
>> There are just too many questions about the state of the machines.
>> If I had edited a config file, are the changes overwritten during the
>> install? Or are they left, and the newer version of that config file
>> just isn't installed at all?
>> What am I missing out on? What am I losing that I had?
>>
>> To make LU work for me:
>>
>> 1. I'd personally like to see a 'LiveJumpStart', where the ABE is
>> wiped and a fresh install is done, using all the jumpstart logic, on
>> a running machine. Then I can have the speed and known state of JS,
>> with the limited downtime of LU.
> Certainly, this is doable. We do plan to enhance jumpstart to do more
> things with BEs, and more things in general that don't even require
> an install or upgrade command - things like just installing software
> packages from a remote repository.
While booted from the network, like today? Or writing to an alternate
boot environment while the machine is up and running, like today?
>>
>>
>> 3. I'd like to be able to use the SVM 'mirror' keyword in the BE
>> definitions.
> Well... actually, this brings up another point that needs to be made
> clear. With the Caiman installer we are proposing to support only ZFS
> as the root filesystem, not UFS/SVM. The reason for this is that ZFS
> offers us so many things that we cannot get with UFS/SVM. The live
> upgrade process becomes much more manageable with ZFS. We get the ZFS
> data and metadata consistency guarantees. We get rollback (via a
> snapshot) in the event that adding patches to the system goes bad,
> even without live upgrade.
>
Believe me, I understand the wins from using ZFS. Using it for root
probably has benefits I haven't even imagined yet, and I am looking
forward to playing with it, to be sure. But I think it might be
short-sighted to expect all of Sun's customers to adopt 'ZFS on root'
in order to be able to use Solaris 11.
You say that this new installer will only support ZFS for root. That
makes me ask: will the 'old' installer still be available?
> Using ZFS as the root fs, which implies a ZFS root pool, really helps
> us reduce the complexity of disk partitioning for most installs.
> Within a ZFS root pool all upgrades are live upgrades, since we can
> take a snapshot of the existing operating environment, clone it,
> promote the clone, and do the upgrade - all within the same pool. The
> obvious restriction is that there has to be enough space. But once
> the root pool is set up and ready to go, a user doesn't need to worry
> about modifying the underlying partitions to achieve live upgrade.
> This is partly why we have decided that in-place upgrades won't be
> supported. ZFS makes it very straightforward to do a live upgrade.
>
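To make that concrete, here is a rough sketch of the flow described
above, assuming a pool named rpool with the current BE at
rpool/ROOT/be1 (all the dataset names are invented for illustration):

    # capture the current operating environment
    zfs snapshot rpool/ROOT/be1@pre-upgrade
    # clone it into a new BE, and promote the clone so it no longer
    # depends on be1
    zfs clone rpool/ROOT/be1@pre-upgrade rpool/ROOT/be2
    zfs promote rpool/ROOT/be2
    # ...then upgrade the software inside rpool/ROOT/be2 and activate it

No slices get touched; the only cost is the space the changed files
consume.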
I understand that this form of live upgrade will be easier with ZFS.
Managing the disk space will be simpler, and it may turn out (if you
can avoid writing new files in the new BE that are really the same as
the old ones) that disk space is conserved as well. It will be nice to
make a 'root pool' or 'boot pool' and not need to put up 'hard'
partitions for /, /usr, /var, and swap, and to be able to move the
limits on them at any time.
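For instance (a hypothetical layout, again with invented dataset
names), the 'limits' could just be ZFS quotas, adjustable at any time
on a live system:

    # give /var its own dataset with an 8 GB cap instead of a slice
    zfs create -o quota=8g rpool/ROOT/be1/var
    # later, grow it without repartitioning anything
    zfs set quota=16g rpool/ROOT/be1/var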
But there are some benefits to the current Live Upgrade that I'd like
to see preserved. Having multiple full-blown boot environments helps
with many things, not just flipping back and forth during upgrades.
Right now I keep an S10 BE and an NV BE (or two) on my disk, for many
reasons.
In the ZFS world updating a live BE won't require a separate BE to
update, but I hope the notion of separate BEs doesn't disappear, for
the dual-boot functionality. Without truly separate areas of the disk,
I can't keep S10 and SNV easily (well, I can go back to switching in
OBP on SPARC, but...) since S10 won't understand ZFS.
In the next generation, when S11 and early builds of S12 both
understand ZFS, I still don't really want them to be 'forked' branches
of the same filesystem. I want to allocate disk space to each 'OS'
separately, and switch between them. I basically want each BE (disk
area) to be its own ZFS pool that, for the most part, only it ever
uses.
Within the space allocated to an OS, the features of ZFS are great for
applying patches, installing packages, and even upgrading that OS to
the next release. But I don't think I really want to share between
OSes.
So right now, the right way for me to dual-boot S10 and NV and upgrade
them really requires three BEs at a minimum... With ZFS, I should be
able to upgrade a BE 'live', so I could get away with two. But please
don't make me drop to one shared disk area.
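Something like this, say (the slice names are purely illustrative, and
the per-OS pools are my own hypothetical layout, not anything Caiman
has proposed):

    # one self-contained pool per OS, each on its own slice
    zpool create s10pool c0t0d0s3
    zpool create nvpool  c0t0d0s4
    # upgrades within an OS happen via snapshot/clone inside its pool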
>>
>> Maybe instead of creating partitions when defining a BE, I could
>> create all my partitions with the 'filesystem' keyword (using
>> 'mirror' if I like) and then build the BE from the already-defined
>> filesystems. I currently create /, /lu, /var, and /lu/var. But maybe
>> my filesystems could instead be created as "[BE1]/", "[BE2]/",
>> "[BE1]/var", and "[BE2]/var", and the BEs wouldn't need to be
>> defined separately?
> This would work in a ZFS root pool environment. We wouldn't be
> creating partitions when defining a BE; it would be a ZFS filesystem
> inside a root pool.
As I said above, for the cases where I am really keeping multiple
Solaris releases on the disk, I would still want to keep multiple
'root pools' - and be able to use jumpstart both to create them
initially and to select one to jumpstart to. I hope both the Caiman
installer and the 'ZFS as root' projects will keep that in mind.
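In profile terms, I'm imagining something along these lines (every
keyword and device name here is hypothetical - none of this syntax
exists today):

    # create two root pools, one per OS, each mirrored
    pool s10pool mirror c0t0d0s0 c0t1d0s0
    pool nvpool  mirror c0t0d0s3 c0t1d0s3
    # and pick which pool this jumpstart run installs into
    boot_pool nvpool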
-Kyle