In some of the 'old time' virtualization (think IBM VM in the 1980s or so), it
was done because people were cheaper than machines.

We went to lots of 'single-use machines' because machines were cheaper than
people. (Typically in the 1-way and 2-way Intel-ish market with M$oftware.)

Now we are finding that as the costs and benefits change, so does our balance
among virtualization, appliances vs. applications, administration costs, the
'costs of going green' (saving power mainly, but also hardware maintenance),
and 'server' vs. 'commodity' hardware, weighed against the savings each of
them brings.

This is a non-linear set of equations, not a simple answer, and no one answer
is right for everyone.

One solution I haven't seen talked about in years is 'hardware-based
virtualization': basically partitioning hardware with a firmware-based
microkernel so that multiple 'independent' systems run on one set of physical
hardware. IBM did that back when, by putting IBM's VM on big iron and
'virtually partitioning' the machine. They tried to keep us system grunts from
seeing the 'hardware console' inside the service panels, but once we saw it,
it was just a cut-down VM operating system running on the raw hardware, with
everything else virtualized. ... Not much different in concept from having a
big Intel-ish boxen, running VMware enterprise on it, and calling the hardware
and the 'next to the metal' VMware all 'hardware'. ... Just folks not selling
it as one unit, and trying to keep you believing it is all 'magic behind the
sheet metal'.

><> ... Jack
_______________________________________________
Tech mailing list
[email protected]
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/