[EMAIL PROTECTED] wrote:
> I'll take you back to Danek's original answer: pkg verify. We have an
> existing mechanism, built into the pkg client, that verifies checksums,
> permissions, etc. I don't know what argument you're trying to make. Do
> you really believe users and technicians are incapable of learning a new
> set of tools?
I believe that they have a set of tools that they know and understand - start with "ls -l" and "sum", and continue on to more advanced tools like bart(1M) and tripwire - and that if those tools don't produce the expected results they will call for help. You will then be able to educate them on the use of the new tools, and maybe they will accept them and maybe they won't. (Whether or not they do, the support technician gets paid.)

This scheme sure sounds like it will make it impossible to use tools like bart and tripwire to ensure that all of your systems have the same installs, and that seems like a problem that can't readily be fixed through education.

Here's another "for instance". My group needs to verify the installations of our product on a variety of platforms. Should we work with each of the three or more packaging subsystems that we support (SVR4, IPS, RPM), writing custom code to interpret the results from each and working around the gaps in each, or should we write a single verifier that works on all of the systems? (Why do we need such a verifier? Because we might have screwed up and built an update that didn't really include all of the files that changed in a new release, or that updated a file incorrectly, or that included the wrong version of a file.)

>> If we believe that there are parts of ELF files that (a) routinely
>> change and (b) don't matter, and we want to avoid delivering unnecessary
>> change to the customer, perhaps we should do that filtering at an
>> earlier point, so that we never even deliver the change internally...
>> perhaps all the way back to not including those sections in the file in
>> the first place.
>
> Omitting sections that change contradicts your argument in favor of
> supportability.

I did say "perhaps not including them".
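To make the "single verifier" idea concrete, here's a minimal sketch in Python. The manifest format - one "sha256 mode path" line per installed file, generated at build time - is hypothetical, invented purely for illustration; it is not the metadata format of SVR4, IPS, or RPM. The point is that one small tool can run unchanged on every platform, instead of three platform-specific interpreters each with its own gaps:

```python
#!/usr/bin/env python3
"""Illustrative cross-platform install verifier (a sketch, not a real tool).

Assumes a hypothetical manifest: one "sha256 mode path" entry per line.
"""
import hashlib
import os
import stat


def sha256_of(path):
    """Return the hex SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(manifest_path, root="/"):
    """Compare installed files under root against the manifest.

    Returns a list of (path, problem) tuples; empty means the install
    matches the manifest.
    """
    problems = []
    with open(manifest_path) as m:
        for line in m:
            if not line.strip():
                continue
            digest, mode, relpath = line.split(None, 2)
            relpath = relpath.rstrip("\n")
            path = os.path.join(root, relpath.lstrip("/"))
            if not os.path.exists(path):
                problems.append((relpath, "missing"))
                continue
            actual_mode = "%04o" % stat.S_IMODE(os.stat(path).st_mode)
            if actual_mode != mode:
                problems.append((relpath, "mode %s != %s" % (actual_mode, mode)))
            if sha256_of(path) != digest:
                problems.append((relpath, "checksum mismatch"))
    return problems
```

This catches exactly the failure modes above: a file missing from the update, a file with the wrong contents, or a file with the wrong permissions.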
Not delivering the change into the repository would be another possibility, though even there, there will be people who are surprised that the file that came out of their compiler isn't the very same file that ended up in the repository and thence on the target system. Best is if building the same source twice is guaranteed to produce the same bits.

> CTF, a very useful debugging tool for us, has
> non-deterministic output.

That's pretty scary.

> We'd like that information to be in the
> binary, but it changes from build to build, even if the binary doesn't.

So what would you do if there was a bug in the tool that generates the data, and a release went to customers with bad data in that section?

> That's just one example. We don't add random sections to binaries and
> ship them to customers because we feel like it.

Optimist :-)

_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss
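P.S. On the "building the same source twice should produce the same bits" point: the check itself is easy to automate. A sketch, assuming two hypothetical build output trees from back-to-back builds of the same source (the directory names are placeholders):

```python
#!/usr/bin/env python3
"""Sketch: flag files that differ between two builds of the same source.

The two tree paths are hypothetical placeholders for back-to-back build
output directories.
"""
import hashlib
import os


def tree_digests(root):
    """Map each relative file path under root to its SHA-256 digest."""
    digests = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(full, "rb") as f:
                h.update(f.read())
            digests[os.path.relpath(full, root)] = h.hexdigest()
    return digests


def compare_builds(build_a, build_b):
    """Return relative paths that differ, or exist in only one tree."""
    a, b = tree_digests(build_a), tree_digests(build_b)
    return sorted(p for p in set(a) | set(b) if a.get(p) != b.get(p))
```

Anything CTF-like that varies from build to build shows up immediately in the returned list, so non-determinism gets caught in the build lab instead of confusing a customer's checksum tools later.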
