Hi Alan,

Thanks for the review. Comments/questions inline.

> There have been a lot of good comments, and I looked through the document and
> wanted to comment on a couple of things myself.
>
> Packaging
> ---------
> Early on in the doc it states:
>
> "While packaging and patching are important in delivering Solaris it is 
> considered a separable problem area."
>
> While a separate issue in itself, there's a lot of separation between ITUs, 
> Packages, Patches, and Updates. Software is software is software to me, the 
> installer boots and gets configured, transfers the packages needed, installs, 
> then configures them (we fail horribly on configuring software today). When I 
> install postgres, mysql, or any piece of software, there should be some type of
> setup done so it doesn't toss all of that burden on the admin/user. The key to ITUs 
> is to have them available at install time, and aside from NIC drivers, all 
> others can be gotten over the net after the network has been configured 
> (early in install is much better than later as we do today of course).
>   
Certainly, software is software in the end, but it comes in many forms 
which require different installation techniques. The Caiman installer, 
like the current Solaris installer, doesn't really do software 
configuration. Installing and configuring an OS is very different from 
installing an application in terms of configuration. That said, we are 
looking at how it might be possible to find common ground between an OS 
installer and an application installer (which expects a running, 
configured system as its installation environment).

We are planning the ITU and device verification service for Caiman. We 
currently have ITU support available early in the install process, but it 
is a manual process for users to choose and specify the locations of the 
drivers. We don't yet have device/driver verification, which is a key 
piece users need in order to understand what is required to get their 
systems installable, up, and running. We plan to use well-known 
repositories for ITUs and other software so that users can get what they 
need over the net.
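
As a rough illustration of the kind of verification pass we have in mind
(a sketch only: the repository URL and function names below are
hypothetical, and scraping prtconf output is just a stand-in for a real
device/driver check):

# Hypothetical sketch of a driver verification pass during install.
# The repository URL and function names are illustrative, not Caiman code.
import subprocess

DRIVER_REPO = "http://pkg.example.org/drivers"   # hypothetical repository

def devices_without_drivers():
    """Flag devices that prtconf reports with no attached driver."""
    out = subprocess.run(["prtconf"], capture_output=True, text=True).stdout
    return [line.strip() for line in out.splitlines()
            if "driver not attached" in line]

def verify_devices(repo=DRIVER_REPO):
    """Tell the user which devices still need drivers and where to look."""
    missing = devices_without_drivers()
    for dev in missing:
        # A real implementation would query the repository for a matching
        # ITU; here we only report what would need to be fetched.
        print(f"No driver attached for: {dev}")
        print(f"  check {repo} for an ITU before or during install")
    return missing

if __name__ == "__main__":
    verify_devices()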

> I actually believe that packaging and patching are a very important aspect to 
> work with in the long run, and I wonder why there is a difference between 
> package installation, patches, upgrades, or even management changes done 
> through package distribution. It is all about the software on the system.
>   
We agree that the patch/packaging stuff is critical. But with limited 
cycles we have chosen to tackle the installation problems first. The 
Caiman installer is being architected and designed to be 
package/patch agnostic; we realize the installer shouldn't be intimately 
aware of the software format if at all possible.
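
To make "package/patch agnostic" a bit more concrete, here is a minimal
sketch of the kind of abstraction we have in mind; the class, method, and
package names are hypothetical, not Caiman's actual interfaces:

# Minimal sketch of a format-agnostic transfer interface. The names are
# invented; the point is that the installer core never needs to know
# whether it is handling SVR4 packages, patches, or tarballs.
from abc import ABC, abstractmethod

class SoftwareSource(ABC):
    """Anything that can deliver software onto the target."""

    @abstractmethod
    def list_items(self):
        """Return the installable items this source provides."""

    @abstractmethod
    def install(self, item, target_root):
        """Install one item under the given target root."""

class Svr4PackageSource(SoftwareSource):
    def list_items(self):
        return ["SUNWfoo", "SUNWbar"]                 # illustrative names

    def install(self, item, target_root):
        print(f"pkgadd -R {target_root} {item}")      # would invoke pkgadd

class TarballSource(SoftwareSource):
    def list_items(self):
        return ["extra-tools.tar.gz"]

    def install(self, item, target_root):
        print(f"extract {item} into {target_root}")   # would untar into the root

def transfer(sources, target_root="/a"):
    """Installer core: iterate over sources without caring about format."""
    for source in sources:
        for item in source.list_items():
            source.install(item, target_root)

transfer([Svr4PackageSource(), TarballSource()])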

You are right, it is all software, but it is currently bundled in very 
different ways. We can't just make it all pkgs, or patches, or tarballs 
without some thought as to the ramifications. For example, our customers 
want patches because they can 1) control what they install on their 
systems, 2) patch a specific binary for a specific bug, and 3) manage the 
amount of change on their systems at any point in time, among other 
reasons I am sure. I am not saying we shouldn't consider just using some 
sort of packaging mechanism for all software, but many customers like the 
fine-grained control patches give them. Many customers who must certify 
their apps after applying updates want less uncertainty in the update 
process.

We could modify our package contents to provide this kind of control, 
though that would take a lot of thought and work to get right. We could 
just use tarballs. We could use Debian packaging with an apt-get-like 
feature to get software updates.

There are a lot of choices for us, and, simply put, deciding the best 
business and technical strategy for this problem will take too long for 
us to hold up progress on Caiman.
> Push vs. Pull
> -------------
> This has been a religious debate amongst those who have been involved with 
> updating and installation of software. How does one view it? Some folks 
> insist that pulling is the traditional and de facto way that the management of 
> such software should be handled. There are good reasons for pushing as well, 
> and in terms of management there are some great things that can be done to a 
> large group of servers being managed by an admin. Even in the case where the 
> same tool is being used, rsync as an example, there's a difference in the way 
> you view things if you were pushing or pulling, depending on the source of 
> your data. While a minor or distracting point to some, being able to push and 
> pull is very useful.
>   
If I get the intent of this discussion, I believe you are talking about 
push vs. pull of software for a customer's system. In general I believe 
in 'pull', and in Caiman we generally expect 'pull', certainly in the 
interactive installer environment. For example, if we find that drivers 
are missing and can be found in a repository somewhere, we will ask the 
user for confirmation before downloading the driver. With jumpstart we 
will do this automatically (I believe that is the current plan), so this 
will be push. Those using jumpstart will have to be aware that pushing 
drivers is a possibility.
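
A rough sketch of that pull-with-confirmation behavior (the function
names, prompt, and flow here are illustrative only, not the installer's
actual code):

# Sketch of pull vs. push handling for a missing driver: interactive
# installs ask first, jumpstart-style installs pull automatically.
def fetch_driver(driver, repo_url):
    # A real implementation would download and verify the driver here.
    print(f"downloading {driver} from {repo_url}")

def handle_missing_driver(driver, repo_url, interactive=True):
    if interactive:
        answer = input(f"Driver {driver} is missing; fetch it from {repo_url}? [y/n] ")
        if answer.strip().lower() != "y":
            print(f"skipping {driver}; the device will stay unconfigured")
            return False
    # In a non-interactive (jumpstart) install the driver is pulled without asking.
    fetch_driver(driver, repo_url)
    return True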

Internally, in a customer's WAN or LAN, we can provide tools that will 
enable them to push software out to multiple machines if they choose to 
do this. We are planning to provide replication and migration utilities 
that will enable just what you are referring to regarding sysadmins 
managing their systems.
> Security
> --------
> I don't see anything about security. I really hope this is a piece that will 
> be included, since having a secure install network to install from is very 
> important in today's climate. I'd like to see a separate Security Network 
> that could be used to query package authentication, so that packages could be 
> gotten from anywhere, and safely verified. Solaris has much of the capability 
> today in terms of knowing which files have been modified on the system or 
> not, but doesn't use it too effectively. All of install, update, and patching 
> should be secure in the future, so that it can be done safely over the 
> network.
>   
You are right; we don't have anything about security in our current 
docs. Certainly, the software repository service will have to understand 
something about security. We just haven't gotten that far yet.

I would like more details about what you mean by Security Network. Can 
you describe the requirements and functionality you expect this to 
provide? Do you envision this as a separate service in Caiman?
> Dynamic Repository
> ------------------
> For me, sources are sources; whether a package comes from an xyz server, my 
> local hard disk, a CD/DVD, an NFS mount, or *preferably* over the network, all of 
> these packages should be available as a whole.
Not sure what this statement means. Which packages should be available 
as a whole?
>  There could be advantages to 
> putting encrypted software in a specific geo, to make it easier to get. The 
> open source communities do this for the most part, and we should as well, to 
> remove some of the burden placed on Sun as a corporation. Ultimately, Sun is 
> a user of OpenSolaris just like any other distribution would be. Knowing 
> these packages are trusted and secure when installed to my system is very 
> important in this regard.
>   
Our plan is to have repositories on opensolaris for software that users 
may want to install on their systems. Certainly, Sun is a customer of 
opensolaris, and this software will be available to anyone who wants to 
download it. The difference with Caiman is that we will enable users to 
use these repositories during install/upgrade, or even afterwards, so 
that they have easy access to the repositories we consider trusted.

> Simplification
> --------------
> Some of the flow diagrams are complex to look at; 1.1.1 as an example could be 
> simplified and/or rearranged to remove some of that complexity, at least to me.
>   
I can see your point about 1.1.1. But, honestly, this was the simplest 
way we could think of to show the dependency chain. Our first pass at 
this was all spaghetti :-). I will take another look and see if there is 
a way to simplify it. If you have suggestions on how this might be done, 
please do let us know.
> The left column of services could be grouped as a whole and labeled something 
> like initialization/setup/similar, with the transfer service taking action 
> from there. My target could be local or remote, in the long run, getting back 
> to push vs. pull which I mention above, but let's just focus on pulling data 
> to a local system first I guess.
>
> Logging/Metadata/Patch services seem like admin-type services for the 
> install.
>
> I'm not trying to nitpick, I'm not great at drawings, and someone put quite a 
> bit of effort in the doc already, I'm just trying to point out that this 
> complexity will confuse management also, IMO.
>
>   
We did group some of the services into the phases you mention later on, 
in section 2.1, in an effort to reduce the complexity.
> Drawing 2.1.3 might be a better view of how I was thinking 1.1.1 could be 
> simplified, and the leftmost Software Selection is like the 
> boot/config/setup I mention above, and the rightmost software selection is 
> really configuration/customization.
>   
Let me see what I can do with these. I agree, they are complex and 
difficult to follow. To a certain extent the detail is for us, so we 
clearly understand the flow and dependencies when working on the 
architecture and design.
> These would both be clearer to me if the GUI was on top of the Orchestrator, 
> since in theory it is.
>   
In some cases the GUI should be shown on top, but in other cases it 
shouldn't be shown at all (like the post-install phase) since it isn't a 
player in the function being performed.

I will work on the diagrams. I am doing more flow analysis anyway as 
part of the design work for the Orchestrator, so hopefully I can simplify 
these as I dig deeper into this.
> Database
> --------
> If there is any way at all possible to use SQLite, Berkeley DB, or a similar 
> lightweight database to offer an option over the "contents" file as we know 
> it today, that would be swell. I'm ok with the contents file and amazed it 
> doesn't get corrupted on more systems than it has. This costs us a lot of 
> time during package installation, I would guess, but I'm not certain.
>
>   
The /var/sadm/install/contents file is one of the reasons installations 
take time, and it is certainly a single point of failure. As for 
solutions, a few have been bandied about. One I thought had some promise 
was doors as files: 
http://approach.sfbay/wiki/index.php/Doors-As-Files

This is something we need to look at for sure.
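
To make the alternative concrete, here is a minimal sketch, using
Python's bundled sqlite3 module, of keeping the installed-file inventory
in a transactional database instead of the flat contents file; the table,
column, and package names are invented for illustration, not a proposed
schema:

# Minimal, hypothetical sketch of a SQLite-backed replacement for the
# flat /var/sadm/install/contents file.
import sqlite3

conn = sqlite3.connect("contents.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS contents (
        path   TEXT PRIMARY KEY,   -- installed file path
        pkg    TEXT NOT NULL,      -- owning package
        ftype  TEXT,               -- file type (f, d, s, ...)
        cksum  TEXT                -- recorded checksum
    )
""")

# A package install becomes one transaction: either all of its entries
# are recorded or none are, which removes the partial-write corruption
# risk the flat file carries today.
with conn:
    conn.executemany(
        "INSERT OR REPLACE INTO contents (path, pkg, ftype, cksum) VALUES (?, ?, ?, ?)",
        [
            ("/usr/bin/foo", "SUNWfoo", "f", "12345"),   # illustrative entries
            ("/usr/bin/bar", "SUNWfoo", "f", "67890"),
        ],
    )

# Lookups that mean scanning the whole contents file today become queries.
for (path,) in conn.execute("SELECT path FROM contents WHERE pkg = ?", ("SUNWfoo",)):
    print(path)
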
> Process Control
> ---------------
> I like the idea of how the orchestrator handles things, and I'm not sure how 
> the services were intended to be distributed, in the sense of processes being 
> created, running, and ending, as opposed to created, running, and using IPC to 
> talk between them in more of a static state with queues. The orchestrator 
> might work well having a few running subordinates, which it would talk to 
> with IPC, but have those subordinates manage the background processing of 
> their own respective processes that will end after completing, if that makes 
> sense (hey it does to me;-). Maybe that could be 
> Setup-Boot-Config/Transfer/Target/Customization agents running, that would 
> distribute the other various services, if that makes sense. 
>
>   
We have just started the discussion about how the services in Caiman 
would be instantiated: should they be separate processes, a single 
multi-threaded process, or a combination, with some started as separate 
processes and others running as threads within a larger process? Part of 
what I am doing now is trying to determine the orchestrator's design, how 
it would communicate with the other services, and which ones could run 
standalone or be driven by services other than the orchestrator.

For the installation apps, the orchestrator is the interface they use to 
invoke, and subsequently communicate about, the installation/upgrade. 
Whether the orchestrator starts and stops all the other subordinate 
services, or those services are allowed to be somewhat independent, is 
what we are looking at now. I kind of like your idea about grouping 
services into a functional scope and will keep it in mind; a rough 
sketch of that structure follows below.
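
As a thought experiment only (the agent and service names below are made
up, and this is nothing like a settled Caiman design), here is a very
small sketch of an orchestrator that talks to a few subordinate agents,
each of which runs and reaps its own short-lived worker processes:

# Sketch of the "orchestrator plus subordinate agents" idea. All names
# are invented; this illustrates the structure, not an actual design.
from multiprocessing import Process

def run_service(name):
    print(f"service {name}: started, did its work, exiting")

class Agent:
    """Groups related services and manages their worker processes."""
    def __init__(self, name, services):
        self.name = name
        self.services = services

    def run(self):
        workers = [Process(target=run_service, args=(s,)) for s in self.services]
        for w in workers:
            w.start()
        for w in workers:
            w.join()                      # wait for the short-lived workers
        print(f"agent {self.name}: all services complete")

class Orchestrator:
    """Drives the install by invoking agents in order; it never touches
    the individual services directly."""
    def __init__(self, agents):
        self.agents = agents

    def install(self):
        for agent in self.agents:
            agent.run()

if __name__ == "__main__":
    Orchestrator([
        Agent("setup",     ["boot", "config"]),
        Agent("transfer",  ["software-transfer"]),
        Agent("target",    ["disk-setup"]),
        Agent("customize", ["post-install"]),
    ]).install()
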
> Anyway, just some food for thought...
>
>   
Thanks for all your comments. I appreciate your review of the 
architecture document.

sarah
