Thanks for sharing your views.

I share your desire for that "big cluster manager".

For now, we are still using a "load-balancer" type of failover: a
frontend Linux box that does IPVS (or BIGIP for web traffic).  Many
other popular protocols, like SMTP, already have failover built in.
Because zones are so cheap to deploy, we simply duplicate our
environments across different physical hardware, load each machine
with many zones fulfilling different functions, and still manage to
free up more machines.
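As a rough sketch of what that frontend does (all addresses here are
made up for illustration), the IPVS virtual service might be set up
with ipvsadm along these lines:

```shell
# Hypothetical addresses: 192.0.2.10 is the service VIP; the real
# servers are zones hosted on two different physical boxes.

# Create a virtual HTTP service using round-robin scheduling.
ipvsadm -A -t 192.0.2.10:80 -s rr

# Add the zones as real servers, NAT-forwarded behind the director.
ipvsadm -a -t 192.0.2.10:80 -r 192.0.2.21:80 -m
ipvsadm -a -t 192.0.2.10:80 -r 192.0.2.22:80 -m

# If one box dies, drop its zone out of rotation and the surviving
# zone takes all the traffic.
ipvsadm -d -t 192.0.2.10:80 -r 192.0.2.21:80
```

This is only the manual form of the idea; in practice a health-check
daemon would add and remove the real servers automatically.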

Warm migration of zones would be a nice progression :P.

Just me,
Wire ...

On 11/26/06, Mike Gerdts <[EMAIL PROTECTED]> wrote:
On 11/25/06, Wee Yeh Tan <[EMAIL PROTECTED]> wrote:
> Hi Mike,
> Can you share why you want the NGZ to know about the GZ?

There is little technical reason that most people will need to know.
However, for a variety of reasons (integration with monitoring,
asset management, some notion that knowing the real box name will make
things better, etc.) many non-root users feel that they need to know.
Giving people this visibility is easy enough and is of little
consequence in my environment.

The key reason that I would need to know is if I am looking into a
performance problem on the machine and I need to do something from the
global zone (run dtrace, snoop, adjust resource allocations).  A quick
look at /etc/hardwarename saves me from consulting some other external
cross-reference that would likely be maintained manually (and
therefore likely to degrade over time).
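For what it's worth, /etc/hardwarename is just a local convention, not
a standard Solaris file.  A global-zone script along these lines (zone
paths taken from zoneadm's parsable output) could keep it current in
every running zone:

```shell
#!/bin/sh
# Run in the global zone.  For each running non-global zone, write the
# physical host's name into <zonepath>/root/etc/hardwarename so that
# users inside the zone can see which box they are really on.
HW=`uname -n`

# "zoneadm list -p" emits colon-delimited fields:
#   zoneid:zonename:state:zonepath:...
zoneadm list -p | while IFS=: read id name state zonepath rest; do
    [ "$name" = "global" ] && continue
    [ "$state" = "running" ] || continue
    echo "$HW" > "$zonepath/root/etc/hardwarename"
done
```

Hooking something like this into the boot of each zone (or a cron job)
keeps the file honest even after a zone moves to another box.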

The key reason that I want to provide it is so that our monitoring
group can track any migrations of zones between servers and correlate
that movement to performance or availability changes.  For example, if
a zone migrates from a V240 to a T2000, it would be really nice to
have people not get too excited about going from 80% CPU utilization
down to 15% utilization, or about suddenly having a few GB of RAM
free.  Absent a migration, and assuming anyone is watching for such a
situation, a drop like that would normally be indicative of a portion
of the application having crashed.

> The reason I ask is that we are already doing zones but we will be
> scaling up the effort quite tremendously and I want to get my bases
> covered.

The key thing that I am looking for is a way to handle lots of zones
efficiently as almost every server has somewhere between 1 and 30 of
them.  For example, I am looking at various clustering products to
provide "free failover" so long as a few basic rules are followed.  Of
course, my ulterior motive is that I am looking for a management
framework that will allow me to say "vacate that server - it needs to
go back to the lessor".   A cluster that can scale to hundreds of
machines and thousands of resources would be ideal.  If it can handle
this number of resources and the aspects of site failover in the event
of a disaster, I would be extremely happy.

> I currently use our network operations centre software to track which
> zone is which but the zone owners do not really know which hardware
> they are in.  I personally have not seen any issues arising from
> whether the zone owners are in the know, so I will let the zone
> owners know if they ask -- but so far, no one ever has.

As my users get more comfortable with zones, they tend to demand this
information less.  Keeping users within their "comfort zones" has been
a big part of introducing the new features that come with Solaris.


Mike Gerdts

zones-discuss mailing list
