Well, from what I understand about how Google works, virtualization is
not the thing to focus on. They use a distributed processing model in
which one or more master servers send jobs to any available node that
has enough spare CPU cycles. Think MOSIX, if you're familiar with that.
They have their own distributed file system as well.
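
Roughly, the scheduling side might look like the sketch below. This is a
minimal, hypothetical illustration in Python, not Google's actual code;
the Master class and its methods are my own names. The master tracks each
worker's load and hands the next job to the least-loaded node that can
take it:

    # Hypothetical master/worker dispatch sketch (not Google's code).
    import heapq

    class Master:
        def __init__(self):
            self.workers = []          # min-heap of (load, worker_name)

        def register(self, name, load=0.0):
            heapq.heappush(self.workers, (load, name))

        def dispatch(self, job, cost):
            # Pop the least-loaded worker; refuse the job if it won't fit.
            load, name = heapq.heappop(self.workers)
            if load + cost > 1.0:
                heapq.heappush(self.workers, (load, name))
                raise RuntimeError("no worker has enough spare CPU")
            print(f"sending {job} to {name} "
                  f"(load {load:.2f} -> {load + cost:.2f})")
            heapq.heappush(self.workers, (load + cost, name))

    m = Master()
    for n in ("node1", "node2", "node3"):
        m.register(n)
    for i in range(4):
        m.dispatch(f"job{i}", cost=0.3)

The real system would of course track far more than a single load number,
but the basic idea is the same: work goes wherever capacity is free.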

-Sam

-----Original Message-----
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
Alan Altmark
Sent: Friday, May 05, 2006 1:43 PM
To: [email protected]
Subject: Re: Google out of capacity?


On Friday, 05/05/2006 at 10:09 EST, Rich Smrcina <[EMAIL PROTECTED]>
wrote:
> I think it's a safe bet that many 54-way z9's would be required, and
> lots of fully loaded DS8000's, with the new 4G Ficon (insert tool man
> growl here).
>
> It would be a sweet coup if there were any interest.

Picture it: The year is 2050. The public's demand for information has
continued to grow unabated, and there are 9+B people on the planet
(source: US Census Bureau). The landfills are full of broken servers,
and the deserts are covered with solar collectors to fuel the server
farms. The servers themselves are located underground, as there is no
more space above ground.

The heat from all the servers has altered the climate and raised the
ocean levels. All of our homes are on stilts.

Or, they could choose some form of virtualization and save us all.  (He
Who Must Not Be Annoyed says that z is not the answer for Google.)

-- Chuckie

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
