Hi,

Bart Smaalders wrote on Thu, 13. 09. 2007 at 15:18 -0700:
> Milan Jurik wrote:
> >
> > E.g. if you mean BartS blog entry about dim sum patching then I can say
> > that I disagree. Linux distributions are able to work in their way of
> > dim sum patching. And their space is more "unstable" than Solaris space.
> >
>
> I don't think you mean the core OS here. If you're talking about
> selecting which Apache build to run on your 2.16.mumble kernel,
> yes, that works. But I seriously doubt you can select various kernel
> binaries released over the last two years from Red Hat and pick and
> choose which binaries you wish to use. Most Linux distros group their
> kernel into a very small number of (or single) packages - so you
> automatically get a consistent view of the world.
>
I agree. Like with KU, which integrates all relevant deliverables. And
because the dependency between kernel and user space is much tighter in
Solaris, you need to accumulate more things. But the same situation is
growing in Linux too. And, what a surprise, they take it into account
and think about it while developing new features.

> In the end, the limit to selection is a file. You can only have
> one copy of genunix active on the system at once; if you fix a bug
> in the S10 source base today and it generates a new version of genunix,
> any customer wishing to get that bug fix _MUST_ accept all previous bug
> fixes that affected genunix.

Of course.

> 
> Now, if a developer does a bug fix that affects 10 binaries, it is my
> assertion that all 10 of those binaries should appear on the system,
> not just one or two. The difficulties in testing various combinations

Yes, they should go in one patch. It means a bigger binary dependency
inside the patch itself. And because our packaging system hasn't been
touched for years, many parts which should be there are missing.
Everybody hoped - "the gatekeepers will do it, somehow".

> across all the various deployments (architectures, global vs local zone,
> previous patch level) are insurmountable. As a result, the larger
> the putbacks that go into the patch gate, the more the customer's
> possible choices are curtailed.
> 

Two points:

a) this happens with feature delivery most of the time
b) a patch is not for feature delivery; upgrade is for feature delivery

Your solution is to decrease variability, but that can be enforced even
with the current packaging system.

> We're far better off improving our packaging and source code
> management technology to permit us to deliver multiple streams
> of tested change, delivering at different rates to meet customer
> requirements, than attempting to carefully handcraft a single stream of
> binaries, assembled into groups called patches, that can be assembled by
> our customers into a untested-by-Sun amalgam of bug fixes and features.
> 

How many streams will we test? With only a small number, we will force
users not to apply important new fixes if there is a significant
regression in the middle of the stream, even if it is independent of the
rest of the stream.

Btw. I still haven't seen two key things:

a) how will the transition of already installed machines, with all their
   package scripts, be handled?
b) how will the delivery of interim solutions/fixes and quick security
   fixes, which will not go into the main streams, be handled?

And one more: if you want to patch a large box with many zones with the
current set of patches, it takes a looooooong time. How will your system
fix this problem? It can, I'm just asking :-)

Best regards,

Milan
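
P.S. To make the file-level constraint Bart describes concrete, here is a
toy sketch (plain Python, nothing to do with the real Solaris patch tools;
the bug IDs and names below are made up). It only models the point that a
binary like genunix is rebuilt cumulatively, so a customer who wants the
newest fix in it necessarily takes every earlier fix baked into that file:

from dataclasses import dataclass, field

@dataclass
class Binary:
    name: str
    fixes: list = field(default_factory=list)   # ordered bug IDs baked in

    def rebuild_with_fix(self, bug_id: str) -> "Binary":
        # Each rebuild carries the full history of earlier fixes.
        return Binary(self.name, self.fixes + [bug_id])

# Successive fixes that all touch genunix (hypothetical bug IDs):
genunix_v1 = Binary("genunix", ["6501234"])
genunix_v2 = genunix_v1.rebuild_with_fix("6512345")
genunix_v3 = genunix_v2.rebuild_with_fix("6523456")

# Only one copy of genunix can be active, so taking the newest build
# means taking the earlier fixes too, whether the customer wants them or not.
assert set(genunix_v1.fixes) <= set(genunix_v3.fixes)
print(genunix_v3.fixes)   # ['6501234', '6512345', '6523456']

The same accumulation argument is why a fix touching 10 binaries should be
delivered as one unit; splitting it only creates untested combinations.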
