Utilization of something as large as Google is an interesting issue.
Given the structure that Samuel explained: "they use a distributed
processing model where a master server(s) sends jobs to any available node
that has enough available CPU cycles."  The ability to utilize all those
processors depends on how well the master can load balance the work
across the nodes.  That in turn depends on the variability in the size
and shape of the work to be done, the length (delay) of the feedback path
that delivers the "I have cycles" information to the master, and the
affinities of various types of work to particular pools of capacity.
Generally speaking, high variability, a long feedback path, and affinity
scheduling all reduce utilization.  Round robin routing avoids the
feedback and affinity drivers, but given enough workload variability it
will leave some servers clogged while others sit idle.  In general, the
more servers there are in the cluster, the stronger these effects and the
lower the utilization.  On the other hand, if the master can break the
work into relatively uniform packages and distribute them, utilization
can be quite high.  It depends on the load.
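
To make the round-robin-vs-feedback point concrete, here is a minimal
sketch (not Google's actual scheduler; the job-size distribution and
policy names are my own illustrative assumptions).  It assigns
heavy-tailed jobs to a pool of servers under two policies and compares
utilization, taken as total work divided by servers times makespan:

```python
import random

def simulate(num_servers, jobs, policy):
    """Assign each job's service time to a server; return utilization."""
    load = [0.0] * num_servers  # total work assigned to each server
    for i, size in enumerate(jobs):
        if policy == "round_robin":
            target = i % num_servers          # no feedback: cycle blindly
        else:  # "least_loaded": master acts on "I have cycles" feedback
            target = load.index(min(load))
        load[target] += size
    makespan = max(load)                       # busiest server finishes last
    return sum(jobs) / (num_servers * makespan)  # fraction of cycles busy

random.seed(42)
# Heavy-tailed (Pareto) job sizes model high workload variability.
jobs = [random.paretovariate(1.5) for _ in range(10_000)]

rr = simulate(32, jobs, "round_robin")
ll = simulate(32, jobs, "least_loaded")
print(f"round robin : {rr:.2%}")
print(f"least loaded: {ll:.2%}")
```

With highly variable work, round robin strands work behind large jobs
while the feedback-driven policy keeps the pool close to fully busy,
which is the utilization gap described above.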


Joe Temple
Distinguished Engineer
Sr. Certified IT Specialist
[EMAIL PROTECTED]
845-435-6301  295/6301   cell 914-706-5211
Home office 845-338-1448  Home 845-338-8794



             "Little, Chris"
             <[EMAIL PROTECTED]
             hs.org>                                                    To
             Sent by: Linux on         [email protected]
             390 Port                                                   cc
             <[EMAIL PROTECTED]
             IST.EDU>                                              Subject
                                       Re: Google out of capacity?

             05/05/2006 01:46
             PM


             Please respond to
             Linux on 390 Port
             <[EMAIL PROTECTED]
                 IST.EDU>






If the servers are running at a certain percentage of capacity, how would
virtualization help?  z or otherwise?

> -----Original Message-----
> From: Alan Altmark [mailto:[EMAIL PROTECTED]
> Sent: Friday, May 05, 2006 12:43 PM
> To: [email protected]
> Subject: Re: Google out of capacity?
>
> On Friday, 05/05/2006 at 10:09 EST, Rich Smrcina <[EMAIL PROTECTED]>
> wrote:
> > I think it's a safe bet that many 54-way z9's would be required, and
> > lots of fully loaded DS8000's, with the new 4G Ficon (insert tool man
> > growl here).
> >
> > It would be a sweet coup if there were any interest.
>
> Picture it:  The year is 2050.  The public's demand for
> information has continued to grow unabated, and there are 9+B
> people on the planet
> (source: US Census Bureau).   The landfills are full of
> broken servers and
> the deserts are covered with solar collectors to fuel the
> server farms.
> These servers are located underground as there is no more
> space above ground.
>
> The heat from all the servers has altered the climate and
> raised the ocean levels.  All of our homes are on stilts.
>
> Or, they could choose some form of virtualization and save us
> all.  (He Who Must Not Be Annoyed says that z is not the
> answer for Google.)
>
> -- Chuckie
>
> ----------------------------------------------------------------------
> For LINUX-390 subscribe / signoff / archive access
> instructions, send email to [EMAIL PROTECTED] with the
> message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
>

