--- a b <[EMAIL PROTECTED]> wrote:
> >Pretty much. Install, copy configs over, reboot.
> >Voila. Minus the thousands of servers claim. And of
> >course no Oracle.
>
> Exactly. As soon as you have to "copy config over", you're in
> ad-hoc land. That works for maybe up to 100 servers with three
> full-time people, but simply shatters for huge server farms and
> one single person.

What if even the 'config' is automatically generated and works for
all the mail servers?
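To make that concrete: by "automatically generated" I mean something
like the toy sketch below, where every per-host config is stamped out
of one template plus a host list. The hostnames, the template
contents and the output directory are all invented for illustration;
a real site would pull them from its own inventory.

#!/usr/bin/env python3
# Toy sketch: generate one mail server config per host from a
# single template. Hostnames, settings and paths are invented
# purely for illustration.
import os

HOSTS = ["mx1.example.com", "mx2.example.com", "mx3.example.com"]

TEMPLATE = """\
myhostname = {host}
mydomain = example.com
relay_domains = example.com
"""

def main():
    os.makedirs("build", exist_ok=True)
    for host in HOSTS:
        path = os.path.join("build", f"{host}-main.cf")
        with open(path, "w") as f:
            f.write(TEMPLATE.format(host=host))
        print("generated", path)

if __name__ == "__main__":
    main()

The only point is that the per-host differences come out of data
rather than out of somebody's hands, so it makes no difference
whether there are ten boxes or ten thousand.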
> And in an environment which increases by 5-10 servers per week,
> it would be next to impossible to concentrate on actually
> delivering any real work for customers. It would be a full-time
> job just "copying configs over and rebooting". That's exactly my
> point.
>
> Of course, a valid question might pop up: "what if I don't have
> that many systems and such issues to worry about?"
>
> The answer is quite simple: one can use the methods and the
> engineering for tens of thousands of systems and run just one
> single system with them, so the process scales up and down. The
> same is not true for even partially manual / interactive work.
> Sooner or later one will hit the ceiling of what's possible, even
> with 30 people.

Lovely. Are you proposing such a solution for OpenSolaris
deployment? I am sure people will come over in droves after they get
the rest of their stuff validated on OpenSolaris.

> Note that I am specifically not discussing desktop users. Desktop
> users have at most a handful of systems, if that, and don't run
> production environments, which is exactly what makes them desktop
> users.
>
> >You may get security/bug fixes to core components like
> >the kernel, system libraries. If doing an apt or a yum
> >on a staging proves clean, I don't see how that should
> >be a problem. On Open Solaris (this is about Open
>
> It's a problem because testing should be exhaustive, automated,
> and structured. There needs to be a strict process on how and
> when stuff is allowed to be put into production. If you have one
> person that just goes and updates systems ad-hoc, that person is
> burning time and resources updating systems instead of doing
> engineering. In practice, people get distracted, delayed or
> otherwise short on time. Systems get neglected and soon enough
> you have a salad.

Like I said. Staging box. Staging box.

> Plus, lots of places simply have a policy of "never touch a
> running system".
>
> The way "apt-get" or "yum update" should be performed is by
> integrating those fixes and patches into the next release of the
> platform, be it your Flash(TM) archive or whatever distribution
> medium one picks.

Exactly what the staging box is all about.
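And to be clear about what I mean by "staging box": roughly the loop
below, not somebody poking at production by hand. This is only a
sketch under my own assumptions -- the staging host name and the
acceptance-test path are made up, and the "fold into the next
platform image" step is left as a placeholder, because that part
(Flash archive, golden image, whatever) is site-specific.

#!/usr/bin/env python3
# Rough sketch of the staging loop: apply candidate updates on a
# staging box and, only if everything comes back clean, mark them
# for the next platform image. Host name, test path and the image
# build step are placeholders, not a real tool.
import subprocess
import sys

STAGING_HOST = "staging01.example.com"  # made-up name

def run_on_staging(command):
    """Run a command on the staging host over ssh; return its exit code."""
    return subprocess.call(["ssh", STAGING_HOST, command])

def main():
    # 1. Apply the candidate updates on the staging box only.
    if run_on_staging("yum -y update") != 0:
        sys.exit("update failed on staging; nothing goes any further")

    # 2. Run whatever acceptance tests the site has (placeholder path).
    if run_on_staging("/usr/local/bin/run-acceptance-tests") != 0:
        sys.exit("acceptance tests failed; nothing goes any further")

    # 3. Only now do the fixes get folded into the next platform
    #    release (Flash archive, golden image, etc.) -- site-specific,
    #    so just a note here.
    print("staging is clean; tag these packages for the next platform image")

if __name__ == "__main__":
    main()

Production never gets touched directly; it only ever sees the next
image that comes out of this.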
> Once the testing process passes in development, it moves to the
> "product testing and acceptance" phase. If it passes that too, it
> is deemed production ready.
>
> So the way to do a fix is to integrate it into the next release
> of the platform, not do ad-hoc patching. Just look at Sun; they
> do something very similar, and for at least the last 25 years
> it's worked for both them and a very large number of their
> customers.

I believe this is basically what I said.

> As for OpenSolaris, the way to update Solaris Express is to
> upgrade to the next release, or depending on your environment, do
> a BFU (Blindingly Fast Update). Contrary to all the Joyent
> propaganda, Sun has never claimed this to be the production
> deployment thing to do, and I wholeheartedly agree with them.
> Considering the environment where I work, anything even remotely
> "developer-like" would get a production signoff when hell freezes
> over. And rightly so!

Who cares what Sun claimed about OpenSolaris? I thought this was the
OpenSolaris list, not the Sun Microsystems PR channel.

> >Yes, people do use Fedora in production. It is stable enough and
> >this is not some mission critical bank system.
>
> One of the "El-Reg" readers recently wrote in response to an
> article:
>
> "if it's good enough for a bank, it's sure good enough for me!"
>
> And knowing what kind of structured, rigorous, CMMI and Six
> Sigma-based engineering process these institutions go through, I
> couldn't agree more with that reader.
>
> So when I think of running Fedora in "production"... that would
> never be production. That would be what we call "break-fix mode",
> or "putting out constant fires". But such environments are
> usually small, with a handful of white box systems and a few
> self-taught "computer guys".

Or IEEE guys.

> I'm sure that at this point someone will find themselves
> compelled to point out how organization XYZ runs Fedora on X
> number of systems... and having worked as a Linux system engineer
> before, I have just one thing to say to that: yes, but how much
> longer before the whole "break-fix" mode collapses? We are
> dealing with simple economies of scale here, after all.

I don't know. I heard a rumour that some big hotshot company with
thousands of servers, which was running RHEL3, could not wait for the
RHEL4 release and went with Fedora Core 2 way back then. Of course,
they did not have rigid mission-critical bank requirements. But I
gather they did not use yum/apt to manage their systems either: I
also heard they used disk images that get dumped onto a box, and up
it comes on every restart (roughly the pattern sketched at the end of
this mail).

Don't know what that company runs now, but you can go on and on about
Solaris being in banks and whatnot. Linux is slowly eating into
Solaris space there too, and while there may be some areas it simply
cannot touch, it is probably slowly getting there. Even now, ZFS is
becoming available for Linux.

OpenSolaris should take the fight downwards too. Nexenta is IMHO a
step in the right direction, except that I think the use of gcc
compromises Solaris quality. But whatever.
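P.S. The image-dump pattern I mentioned above, as I understand it,
looks roughly like this. Purely illustrative and under my own
assumptions -- the image path, target device and host names are
invented, and a real shop would do this through PXE/JumpStart or
whatever provisioning system it already runs rather than ssh and dd.

#!/usr/bin/env python3
# Illustrative sketch of image-based (re)provisioning: every box is
# rewritten from the same golden image and rebooted, so a restart
# always comes up in a known state. The image path, target disk and
# host names are invented for this sketch.
import subprocess

GOLDEN_IMAGE = "/images/web-2006-05.img"   # made-up path
TARGET_DISK = "/dev/sda"                   # made-up device
HOSTS = ["web01.example.com", "web02.example.com"]

def reimage(host):
    # Stream the golden image onto the target disk, then reboot.
    with open(GOLDEN_IMAGE, "rb") as img:
        rc = subprocess.call(
            ["ssh", host, f"dd of={TARGET_DISK} bs=4M && reboot"],
            stdin=img,
        )
    return rc == 0

def main():
    for host in HOSTS:
        print(host, "reimaged" if reimage(host) else "FAILED")

if __name__ == "__main__":
    main()

The economics are the same as with generated configs: the per-box
work is constant no matter how many boxes there are, and no box
accumulates hand-made state.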
