Thanks for your answers. I can just add that, to avoid standardization problems, we have only one clone base (golden) per SLES version. We use the clone-any-server possibility only if really needed, or for example when we have one new server that has been set up with customer apps, we can clone that one to save the setup work. Some want two servers for redundancy (we have two z/VM systems).
That readonly-root parameter sounds interesting, I have to check that out. And about IP config at every boot: we do change the etc files with that script, so we will need /etc writable. Which leads me to a question about bind mounts: am I right that you only need to put the changed files in the bind-mounted dir?

Cordialement / Vriendelijke Groeten / Best Regards / Med Vänliga Hälsningar

Tore Agblad
Volvo Information Technology
Infrastructure Mainframe Design & Development
SE-405 08, Gothenburg, Sweden
E-mail: [email protected]
http://www.volvo.com/volvoit/global/en-gb/

________________________________________
From: Linux on 390 Port [[email protected]] On Behalf Of Michael MacIsaac [[email protected]]
Sent: Thursday, July 02, 2009 14:49
To: [email protected]
Subject: Re: Updated paper available - "Sharing and maintaining SLES 10 SP2 Linux under z/VM"

Agblad,

> Here comes some feedback.
> It is a very good source for info in this matter, and explains a lot of stuff you need to know.

Thanks!

> I have read the previous ones and tried to implement it.
> It was however tricky, mostly because reality changes (like: oops, we need more space on /opt)
> and it is a couple of lines of code to make work and maintain.

The size of /opt was increased in this version of the paper, but it is still relatively small. Whichever app you want in /opt, consider installing it on a different virtual machine and mounting the app on the clones over a deeper mount point (e.g. /opt/IBM/Websphere). We wrote about this technique for WAS, DB2 and MQSeries in http://linuxvm.org/present/misc/virt-cookbook-2.pdf - it's getting old, but the basic technique should still work fine.

> The bind mount is a source of failure, or not? What happens if it does not work for some reason?
> I guess the original mount point takes effect, and that one can be on a read-only disk. When was
> that one updated last? And we have duplicate storage here?

Whew, lots of questions.
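A sketch of what such a bind-mount setup can look like in /etc/fstab (the paths and minidisk device name here are illustrative, not taken from the paper):

```
# Illustrative /etc/fstab excerpt: one small R/W minidisk holds the
# writable trees, which are then bind-mounted over the read-only root
/dev/dasdb1   /local   ext3   defaults   0 0
/local/etc    /etc     none   bind       0 0
/local/var    /var     none   bind       0 0
```

To the question above: a directory bind mount replaces the whole directory view, so /local/etc has to hold a complete copy of /etc, not just the changed files. To override only selected files you would bind-mount them individually (e.g. mount --bind /local/etc/hosts /etc/hosts).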
What I will say is that the bind mounts usually just work. But yes, it is a source of failure if one does not work :))

> My prime concern here was /etc, I wanted that one on a separate minidisk to be safe.

You could do that. But it would be "major surgery" and in the end, it would be a lot of work, and I'm not really sure a separate disk is any "safer" than a bind mount. I can also point out that Red Hat has a "readonly-root" parameter, mainly designed for Stateless Linux I believe, and it has a single file system with *all* of the R/W pieces being bind-mounted.

> I saw last in the pdf you have one guy, Carlos Ordonez, always asking: isn't there a simpler way to do this?
> I like that guy :)

I will pass that on.

> About creating clones, we have made another approach.
> Every server configures the network/IP itself at every boot,

Configures the network at *every* boot? Why not just at first boot? Do you never put the real IP address, host name and other values in /etc?

> and at first boot there is a script cleaning out old logs and other stuff remaining from
> setting up the clone base.

Oh, I like this! We write about cloneprep.sh, which is run on the golden image just before "flashing" it to the gold disks. But if you forget to run it, then the old logs are still there. Putting cloneprep.sh into boot.findself would guarantee the logs, etc. are cleaned up.

> So we don't prepare a clone, just clone any server and it will fix itself at first boot.

Yup.

> (Well, normally we clone from one clone base, that makes it safer and you need the clone base
> shut down, but we can always clone any server that is shut down)

OK, I see the value here. But this can be a double-edged sword - if you have too many golden masters, you lose standardization. Still, an interesting approach.

> To have this working in SLES10:
> We set a number of FKeys in the VM profile to, for example: IPNR=1.2.3.4
> and MASK=255.255.255.0 and GATEWAY=1.2.3.0 and so on.
> Then we get these values via vmcp in /etc/init.d/boot.local, and set the ipconfig values,
> and whatever else you need or want to set here.
> This also makes it easy to move servers around in different network (security) zones: just
> change the VM profile and reboot.

OK, that answers my question above. Sounds like you're well down the road to a similar system. Hopefully you can use some of the techniques described in the paper.

"Mike MacIsaac" <[email protected]> (845) 433-7061

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO LINUX-390
or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
----------------------------------------------------------------------
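The FKey/vmcp technique described in the thread can be sketched roughly as below. The "PFnn IMMED NAME=VALUE" output format, the `q pf` query, and the eth0 device name are assumptions, not taken from the thread - check what vmcp actually prints on your system and adjust the parsing:

```shell
#!/bin/sh
# Sketch: at boot, read NAME=VALUE settings stored in the guest's
# PF-key definitions and configure the network from them.

# pf_value OUTPUT NAME - print the value of the first NAME=... token
pf_value() {
    printf '%s\n' "$1" | sed -n "s/.*$2=\([^ ]*\).*/\1/p" | head -n 1
}

# On a real guest this is simply: PFOUT=$(vmcp q pf)
# The "|| true" keeps the script alive on systems without vmcp.
PFOUT=$(vmcp q pf 2>/dev/null || true)

IPNR=$(pf_value "$PFOUT" IPNR)
MASK=$(pf_value "$PFOUT" MASK)
GATEWAY=$(pf_value "$PFOUT" GATEWAY)

# Only touch the network when the PF keys actually supplied an address
if [ -n "$IPNR" ]; then
    ifconfig eth0 "$IPNR" netmask "$MASK" up
    route add default gw "$GATEWAY"
fi
```

ifconfig and route are used here because they match the SLES 10 era; on newer systems the same two steps would be done with ip(8).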
