Andrew Lentvorski wrote:

> No, but unlike FOSS, it is tested for backward compatibility for 30+
> *years*.

So we should wait 30 years before using stuff like Xen? I don't really
need 30 years of backwards compatibility anyway. In fact, I don't need
to be backwards compatible with much of anything virtualization-wise at
this point since I am just now getting into it. And really, I have not
had much problem with backwards compatibility in the Linux world over
the last 10 years. You can probably trot out an example of something
that broke for you somewhere along the way and inconvenienced you but I
can only speak for myself.

> Personally, I'd take an actual unit tested OS over more features at this
> point.

For me and all of my uses Linux gets pretty decent testing. Sure, more
testing would be nice, but it is quite sufficient for my uses so far.

> Ahem.  The pee-cee world has never been particularly good at adhereing
> to a standard.  In addition, the x86 architecture has been absolute
> garbage for having the extensions required to support some of this stuff
> until the last couple of years.

And to what standards does IBM's OS/390 (or Z series or whatever it's
called this week) adhere? Oh, right: its own. Open standards allow
competition and lower prices. A lot of the IBM users back in the day
were wishing for something like what we are getting now. Thanks to
commodity CPUs and open software we can now do things that used to
require millions. I think that is progress.

The extensions required to support some of this stuff were not in demand
or technologically feasible in the mass market until now. I'm not sure
if it's elitism or being old and crotchety or what, but a lot of you
guys seem to have trouble understanding what volume production can do
for you.

> For the love of Deity, I *still* can't do remote access and control of a
> $300 pee-cee.  IPMI compliant stuff is starting to appear, but it's just
> *starting* and its expensive.

No, you can't. I desperately want that also. But only you and me and a
couple of our geeky friends are requesting it. The virtualization stuff
is appearing because enough people want it and it has become cheap
enough that it has finally become feasible. I think once everyone
realizes what all of these cool things can do for them they will want
more manageability for their suddenly more capable and consequently more
critical systems. My company alone spends $500 a month on remote hands
requests with our datacenter.

Here is how Xen will help us reduce the need for remote access and
control of our $300 PCs: We will eventually be running Xen as the
standard config on every box. The dom0 (the first instance that gets
started up) will be a control node. It will be a minimal Linux install
intended only to manage the other domains. Nothing else will be done
with it. All of our real applications will run in another domain or
domains with resources allocated by dom0. If an application goes wild or
a domain crashes or whatever we can log into dom0 and take a look at the
crashed domain and fix/kill/restart it. dom0 acts like a management node
or monitor. Now the vast majority of our remote hands requests can be
handled remotely. This will save us lots of time and money. I have not
yet played with the domain migration but eventually I anticipate that it
will also allow us to repartition our workloads on the fly. When I get
back to the US I intend to play with this and some AoE (ATA over
Ethernet) attached disk and see how it goes.
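A rough sketch of that dom0 workflow, using Xen's xm management tool.
The domain name "appserver1", its config path, and the target host
"host2" are made-up examples, not anything from our actual setup:

```shell
# From dom0, list all domains and see their state and resource usage:
xm list

# A guest domain has wedged. Attach to its console to take a look,
# then kill it and start a fresh instance -- no remote hands required:
xm console appserver1
xm destroy appserver1
xm create /etc/xen/appserver1.cfg

# Live migration, for repartitioning workloads on the fly by moving a
# running domain to another Xen host:
xm migrate --live appserver1 host2
```

The point is that everything above happens over an ssh session to dom0,
which is exactly the class of task we currently pay the datacenter's
remote hands to do.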

We'll get hardware remote access eventually. If you need it now then by
all means, cough up the big bucks and join the proprietary world.

> It isn't the fact that mainframes hoarded their knowledge (I can point
> you to *lots* of IBM Journal of R&D articles on how to do
> virtualization).  It's simply that none of the FOSS folks cared until
> recently.

Because FOSS folks traditionally don't have the money to buy IBM
hardware. It's not that they didn't care, it's that it just wasn't
practical with the hardware. Other non-x86 CPUs may have had
virtualization capabilities but who really had enough RAM in their Mac
or Atari system to do much virtualization? I know all of the Macs I used
tended to be memory starved anyway due to the expense of RAM.

> Personally, I *still* don't care.  Rather than virtual x86's, I would
> rather have a "virtual processor" so I don't have to care about x86,
> PowerPC, Alpha, etc.  I'm tired of "processor specific".  Given that
> people don't actually care about overall performance in 90+% of all
> applications, it's time we dumped that whole issue in the trash.

When you run that 10% of applications where performance does matter
you'll be wishing you could run native code.

--
Tracy R Reed
http://ultraviolet.org