On Thu, 2006-06-29 at 21:20, James Falkner wrote:

> * What use cases are we trying to address?
It seems to me that the two use cases you're trying to address - developer install (for test and development) and application administrator (for production deployment) - are actually rather different. The first is very ad hoc, the second highly structured. Is the same solution applicable to both?

> ** Developer install
>
> A product like Java Studio Creator provides a way for developers to create
> applications that run in web containers like the Sun Java System
> Application Server. This means Creator includes a copy of the
> Application Server so that after installing Creator the developer can
> proceed to create an application and run it.
>
> Many application developers in big companies just have a normal UNIX
> account, no root access, and have a lot of trouble getting any attention
> from the harried system administrators who do have root access. This
> means that if root access is required to install things like Creator (and
> thus the Application Server) then Creator just won't get tried, since
> alternatives like Eclipse and JBoss will install without root access.

It isn't at all clear to me that any answer other than 'use tarballs' is really appropriate here. I've done quite a lot of this sort of thing, and a tarball (or, more generally, a simple archive I can extract) is exactly what I want. I want to try a new app *now*. I want to be able to try different versions, compile it myself with different options, throw the whole lot away and start over. The last thing I want is a software management framework getting in the way.

The advantage things like Eclipse have is that they're simply distributed as archives - which is what people want. Unzip and go. As a developer, I simply don't want to mess with anything more complex, and I certainly don't want to have to learn different tricks on different platforms. As it is, I simply won't touch applications like OpenOffice that force packages down my throat.
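The "unzip and go" workflow described above amounts to just a few shell commands. A minimal sketch, not anything from the proposal: the app name, archive, and prefix are all made up, and the tarball is fabricated in /tmp to stand in for a real download.

```shell
set -e

# Stand-in for a tarball downloaded from a vendor site ("myapp-1.2" is a
# made-up name; a real developer would just fetch the archive).
mkdir -p /tmp/demo/myapp-1.2/bin
printf '#!/bin/sh\necho "myapp running"\n' > /tmp/demo/myapp-1.2/bin/myapp
chmod +x /tmp/demo/myapp-1.2/bin/myapp
( cd /tmp/demo && tar -cf myapp-1.2.tar myapp-1.2 )

# The workflow itself: extract into a per-user prefix, run it, and throw
# the whole lot away when done. No root access, no registry to clean up.
PREFIX="$HOME/apps"
mkdir -p "$PREFIX"
tar -xf /tmp/demo/myapp-1.2.tar -C "$PREFIX"
OUT=$("$PREFIX/myapp-1.2/bin/myapp")
echo "$OUT"
rm -rf "$PREFIX/myapp-1.2"
```

The point is the last line: removal is just rm -rf, with no software management framework to satisfy first.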
> ** Application Administrator
>
> In large data centers, the responsibility of administering application
> "stacks" may be divided from the responsibility of administering the
> underlying system. This means that the administrators who install,
> upgrade, and migrate the applications don't have full root access.
> While it's true that there are a few operations needed for complete
> deployment of a server, like adding and enabling an SMF service (or
> manipulating /etc/init.d and /etc/rc?.d for older Solaris
> releases), those operations aren't always needed, and when they are
> they are generally easily accomplished under typical data center
> change control procedures. Getting an authorized user to run a few
> specified commands at a specific scheduled time is much more likely
> than getting an authorized user to run through an often lengthy and
> somewhat trial-and-error procedure like the Java ES component
> installations.

Why stop halfway? Have the root user explicitly create a new software domain and delegate it to the application administrator. Is it possible to create an SMF delegated restarter for that user as well, so that service creation, start, stop, and management are handled too?

> * Can't we already do what is needed?
>
> It's never a bad idea to question whether a proposed new feature is
> really needed.
>
> ** Why not just use tarballs?
>
> As mentioned above, the way application developers deal with our
> use cases now is to create tarballs (or functional equivalents like
> cpio archives.) This clearly works, at least for initial software
> installation.

So it's clearly a good answer for the developer use case.

> Where it is lacking is in the inventory management
> and dependency checking provided by a packaging system.

I don't see this as an issue for the developer case.

> If you
> don't know what you have, it's very hard to tell if you need
> updates, or to automatically apply them if you do.
> One of the main
> reasons we want to use the Solaris packaging system in particular
> is that the Solaris patch tools and the higher levels (smpatch, Sun
> Update Manager) can then be simply extended to maintain
> "middleware stacks" as well as operating system instances.
>
> It would clearly be possible to layer some extra versioning
> information into tarballs by convention, but it seems to be
> duplication of effort when there's already a part of the system
> that does it.
>
> The other main problem with the current use of tarballs is that
> middleware developers end up packaging their products twice: once
> for installation as "part of the system" by authorized
> administrators, once for installation by regular users or
> application administrators. This is extra work for the middleware
> developers we'd like to eliminate.

Given a *single* choice, I would go for tarballs. (Or zip files, or whatever.)

> ** Why not just use RBAC?

RBAC as currently structured solves a different problem: allowing non-root users to manage the core Solaris packages. It's not the same problem, and I don't want developers or application administrators managing system packages - they should be managing their own software.

> * What are the risks?
>
> ** Not "everything" in the system is visible to the SA
>
> Up to now, everything installed on a particular operating system
> instance through the packaging system has been registered in the
> same place.

Well, no. pkgadd -R breaks this already.

And, with reference to the two use cases: in the developer install case I really don't care what a developer has installed, and in the application administrator case I really do. (Although I might delegate the caring about it too.) So for the application administrator I would want some capability to get visibility into what software they have installed. (Even if it's a two-stage process: first generate a list of software domains, then query each software domain explicitly.)
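The "versioning information in tarballs by convention" idea quoted above could be as thin as having each archive unpack to a name-version directory carrying a small metadata file. A sketch under that assumed convention (all names and paths here are made up):

```shell
set -e

# Assumed convention, not part of the proposal: each archive unpacks to
# <name>-<version>/ and ships a one-line VERSION file.
PREFIX=/tmp/stacks
mkdir -p "$PREFIX/myapp-1.2" "$PREFIX/libfoo-0.9"
echo "myapp 1.2"  > "$PREFIX/myapp-1.2/VERSION"
echo "libfoo 0.9" > "$PREFIX/libfoo-0.9/VERSION"

# A poor man's package inventory is then a one-liner:
INVENTORY=$(cat "$PREFIX"/*/VERSION | sort)
echo "$INVENTORY"
```

Which is exactly the duplication-of-effort point the quote makes: the convention is trivial, but it reimplements a sliver of what pkginfo already provides.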
> This means that a system administrator could
> reasonably trust that, when removing software, there would be a
> notification that other parts of the system were depending on it.

That assumes the package dependency system will tell you, and that the dependencies are accurate and meaningful. I still feel that something more fundamental is wrong if pkgrm can break a system. To start with, why is an administrator removing any software on a production system in the first place? And why are they removing *this* piece of software?

Ignoring the package system for the moment, why isn't there a way of telling whether a software package is really in use - when the binaries or shared libraries in it were last used (I tend to look at last access times), or whether they are actively in use right now? And what packages do add-on software suites actually depend on? Are those packages reasonably central (so they are unlikely to be removed, or, if they are, the system is rendered unusable in other ways anyway), or peripheral and obscure (in which case they may not normally be installed, so the third-party software won't install in many cases)?

> The hard problem here is basically referential integrity. Once a
> domain is registered, nothing keeps the domain creator from
> destroying the domain without updating the registry.

...

> One way to preserve referential integrity is to control domain
> creation and registry by having both done by the same process.
> People who want to do this already can -- the N1 System
> Provisioning System is a product which does it, and many big data
> centers have developed their own provisioning systems. But that
> approach is completely incompatible with the ad-hoc nature of our
> developer install use case.

Yes, but the two use cases are sufficiently different that different solutions are likely to arise. As I see it, software domains have to be delegated from the centre, rather than letting any user randomly create themselves a software domain at any time of their choosing.
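The last-access heuristic mentioned earlier (looking at when a package's binaries were last read) needs nothing beyond standard tools. A sketch; the directory and files are fabricated stand-ins for a package's bin directory, and atime is only a heuristic (it is meaningless on filesystems mounted noatime):

```shell
set -e

# Stand-in for an installed package's bin directory (made-up path).
DIR=/tmp/myapp-bin
mkdir -p "$DIR"
touch "$DIR/used" "$DIR/stale"
# Backdate one file's access time to simulate a binary untouched for years.
touch -a -t 200601010000 "$DIR/stale"

# Anything not read in the last 90 days is a candidate for "unused":
STALE=$(find "$DIR" -type f -atime +90)
echo "$STALE"

# Or just eyeball the access times directly:
ls -lu "$DIR"
```

A package whose every binary and library shows up in the stale list is a much safer pkgrm candidate than anything the dependency metadata alone can identify.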
--
-Peter Tribble
L.I.S., University of Hertfordshire - http://www.herts.ac.uk/
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
