Tzfir Cohen wrote:
"And if you had all of those Office machines as separate images on a
giant T-Rex, those IT folks would still have to manually patch each and
every image separately, and spend 15 minutes on that.

As for "cloning", "patch distribution" etc.: those solutions
are exactly "solutions (?) to the management problem". As you mentioned
in the beginning, just cramming many images on one mainframe won't make
it go away."

One of the interesting things about total cost is that
centralization/consolidation surfaces costs that are otherwise hidden.  I
certainly don't need admin chores for my laptop nickel-and-diming my time,
but that time never shows up on the books.  The inexpensiveness of PCs is
one of the myths of IT, right along with the inflated CPU-speed myths
about the mainframe...

VM does provide an opportunity to do somewhat better with cloning solutions
by allowing the files that make up an image to be shared.  These files need
only be updated once for a set of images.  My view is that there is a
trade-off between the cost to consolidate images and the cost to manage
images.  This says that for each application/situation there is an optimum
amount of image consolidation, driven by the ability of zLinux to scale up,
the ability of the application to scale up, and the skill and mindset of
the application integration programmers, versus the ability of the system
programmers/sysadmins to exploit VM and Linux cloning tools for
automation.  Basically, you are looking for the minimum number of images
that can be run at the availability you want and that won't cost you the
farm to get to.  I would also expect the number of images per unit of work
to shrink over time as this optimum level is sought.  This should be true
whether you are working with "blades", VM/Linux, or some combination of
both.
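To make the trade-off concrete, here is a minimal sketch of the kind of cost model implied above.  All of the functions and constants (per-image admin cost, a consolidation penalty that grows as you squeeze the workload onto fewer images) are hypothetical assumptions for illustration, not measurements from any real system:

```python
# Toy model: total cost = management cost + consolidation cost,
# minimized over the number of images.  All numbers are hypothetical.

def management_cost(n_images, cost_per_image=15.0):
    # Admin effort (e.g., minutes of patching) scales roughly
    # linearly with the number of images.
    return n_images * cost_per_image

def consolidation_cost(n_images, workload=100.0, scale_penalty=400.0):
    # Pushing the same workload onto fewer images gets harder;
    # model that as a penalty inversely proportional to image count.
    return scale_penalty * workload / n_images

def total_cost(n_images):
    return management_cost(n_images) + consolidation_cost(n_images)

# Search a plausible range for the minimum-cost image count.
best = min(range(1, 201), key=total_cost)
print(best, round(total_cost(best), 1))
```

With these made-up curves the optimum lands in the middle of the range; better cloning automation lowers the per-image cost and pulls the optimum toward more images, while better application scaling lowers the consolidation penalty and pulls it toward fewer.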


Joe Temple
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211 home 845-338-8794
