Re: [zones-discuss] zones on zfs on san - shared pool or separate ?

2010-03-31 Thread Alexander J. Maidak
On Wed, 2010-03-31 at 18:01 -0700, Brett wrote:
> Hi Folks,
> 
> I would like to source opinions from the forum on whether it's better to share
> a zfs storage pool for all zones on a machine or to create one zpool per zone.
> The premise is that this zpool (zonepool) would be sitting on SAN storage.
> 
> I proposed that a consolidated pool (zonepool), with each zone's zfs dataset
> sitting under it, was good for ease of management and resource efficiency, i.e.:
> zonepool/zone1
> zonepool/zone2
> zonepool/zone3
> 
> However, a colleague suggests that keeping separate zpools, one for each zone,
> is better for reasons of portability (i.e. being able to detach a zone, export
> its zonepool, and move the SAN storage to another like machine if resources
> become constrained), i.e.:
> zonepool1/zone1
> zonepool2/zone2
> zonepool3/zone3
> 
> Your thoughts are very welcome.
> Regards Rep

We deploy a zpool for each zone to allow for easier mobility.  We also
use the delegated dataset option, so our config is basically:

zone1/zonepath  <- this is where the zoneroot is stored
zone1/zp00  <- this dataset is delegated to the zone for zone data
zone2/zonepath
zone2/zp00

All application data is stored in child datasets of zp00.  This allows
live upgrade to create clones of the zonepath dataset and keeps our
application data isolated from the zoneroot.
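
For reference, that delegation is just an "add dataset" resource in zonecfg.
A minimal sketch of the relevant part of the config (zone and dataset names
here are only illustrative):

  # zfs create zone1/zp00
  # zonecfg -z zone1
  zonecfg:zone1> add dataset
  zonecfg:zone1:dataset> set name=zone1/zp00
  zonecfg:zone1:dataset> end
  zonecfg:zone1> commit

Inside the zone, zone1/zp00 then shows up as a dataset the zone administrator
can manage (child filesystems, snapshots, etc.) without touching the zoneroot.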

Zone Migration consists of:

shutdown zone
detach zone
export zpool 
rezone LUNs to another machine
import zpool
attach zone
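
In command terms that works out to roughly the following (zone and pool names
are illustrative; on newer S10 updates "zoneadm attach -u" can update the zone
on attach if the target's patch level differs):

  source# zoneadm -z zone1 halt
  source# zoneadm -z zone1 detach
  source# zpool export zone1
  (rezone the LUNs to the target host)
  target# zpool import zone1
  target# zonecfg -z zone1 "create -a /zone1/zonepath"
  target# zoneadm -z zone1 attach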

Putting the zones in one pool will use storage more efficiently, but your
zone migration will require a zfs send -> zfs receive step, which, if you're
dealing with large zone datasets, could be time consuming.
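
If you do go that way, the copy step is roughly something like this (pool
and dataset names illustrative; zfs send -R needs a recursive snapshot and
a reasonably recent S10 update):

  # zfs snapshot -r zonepool/zone1@migrate
  # zfs send -R zonepool/zone1@migrate | ssh target zfs receive -d newpool

on top of the detach/attach steps above.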

If you're going to deploy lots of zones that don't use much disk I think
I'd go with option 1 (the single shared pool), because send/recv with a few
gigs of data is probably no big deal.  If your zones are going to hold many
gigs of data your life may be easier with SAN-based migration.

-Alex



Re: [zones-discuss] S8 containers and application compatibility

2010-02-17 Thread Alexander J. Maidak
On Mon, 2010-02-15 at 22:01 +, Peter Tribble wrote:
> I've been looking at the feasibility of using S8 containers to replace some
> truly antiquated hardware. However, evaluation has (apparently - I'm still
> trying to establish the details) uncovered a snag.
> 
> One of the applications in use is the old iPlanet suite. Directory names
> include things like ias6, and the problem is that it seems to fail to start
> and leaves the odd core dump around. So, while those involved are rubbishing
> containers and want to redeploy onto antiquated bare metal, I was wondering
> 
> (a) if anyone had used the old iPlanet app server in an S8 container, and
> (b) if there are compatibility problems, where do I need to start turning
> over rocks?
> 
> Thanks,
> 
> (and apologies for the vagueness)
> 

Have you tried just copying the binaries to Solaris 10 and running them
there?  I moved some old iPlanet Webserver 6.0 stuff from 8 -> 10 this
way a few months ago and we haven't had any issues, yet...

-Alex



[zones-discuss] Zone Memory/Cpu Utilization Reporting

2009-08-06 Thread Alexander J. Maidak
I have a number of systems running Solaris zones.  I'm looking for a
tool that will do the following:

1) Capture/store the cpu utilization of the global zone and all
non-global zones
2) Capture/store the memory utilization of the global zone and all
non-global zones
3) Post the graphs to a website that can display both historical and
real-time data.

What tools have people used to do this?  I'm looking at Cacti, but it
looks like I would have to figure out a good programmatic way to
generate (1) and (2) to feed into it.
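
One data source I'm considering is the per-zone summary that prstat -Z
prints.  A rough sketch of a data-input script (the awk field numbers assume
the ZONEID NPROC SWAP RSS MEMORY TIME CPU ZONE column order I see on my S10
boxes; prstat output isn't a stable interface, so check yours):

  #!/bin/sh
  # Print "zone cpu% rss" lines for each zone, suitable for feeding a poller
  # such as Cacti.  Adjust the awk field positions for your prstat output.
  prstat -Z 1 1 | awk '
    /^ZONEID/         { inzone = 1; next }  # start of per-zone summary block
    /^Total:/         { inzone = 0 }        # end of the summary block
    inzone && NF >= 8 { print $8, $7, $4 }  # zone name, CPU%, RSS
  '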

Does anyone have anything to share on the commercial products on the
market?

Thanks,

Alex



Re: [zones-discuss] Survey of networking feature use in native Solaris 10 zones

2009-07-23 Thread Alexander J. Maidak
On Thu, 2009-07-23 at 17:32 -0700, Jordan Vaughan wrote:
> Hello zones community members,
> 
> I'm one of the engineers working on Solaris 10 Containers (S10Cs) for 
> OpenSolaris (http://www.opensolaris.org/os/project/s10brand).  I'm 
> currently evaluating networking requirements for S10Cs.  Our ultimate 
> goal is to achieve networking feature parity with native Solaris 10 
> zones: we will want S10Cs to do everything that native Solaris 10 zones 
> can do.
> 
> I would appreciate any input you can provide regarding what you (or your 
> customers) currently do with your native Solaris 10 zones (both 
> exclusive- and shared-stack zones), especially the commands (arp, snoop, 
> traceroute, etc.), protocols, and other features/services (SMA, 
> Solstice, IPMP, NAT, IP Filter, DHCP client/server, IP tunnels, PPP, 
> IPsec, etc.) that you use most frequently.  Your input will help us 
> prioritize networking features and set realistic expectations for our 
> product.
> 
> Thanks,
> Jordan Vaughan
> Solaris Zones

At my site we rarely use exclusive-stack zones, because dedicating a
physical interface to each zone would quickly consume all of our NICs.  The
shared stack is limiting, though.  Not having bandwidth controls, etc.,
makes me nervous that someday I'll have a bandwidth utilization problem and
no great solutions.  I've also had non-global zone administrators ask to be
able to run snoop; while that is possible with a shared stack, it's not
secure.  So the reason I'd want the S10Cs to support exclusive IP is that
I'd want to be able to take advantage of Crossbow to solve some of the
limitations I have with shared-stack native zones now.  If exclusive IP for
S10Cs isn't an option, a workaround might be to set up a Crossbow VNIC for
each zone I want to run and attach that zone to that VNIC as its "shared"
interface, so the zone has the interface to itself.  Having only limited
experience with Crossbow I'm not exactly sure this would work, and it would
feel somewhat hackish.  I'd also hate to see what my global zone routing
table would look like - I suspect I could cause myself an interesting
network problem if I'm not careful.
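
For what it's worth, the workaround I have in mind would look roughly like
this (interface, VNIC, zone name, and address are only illustrative, and it
assumes a Crossbow-capable build where dladm create-vnic exists):

  # dladm create-vnic -l e1000g0 vnic1
  # zonecfg -z zone1
  zonecfg:zone1> add net
  zonecfg:zone1:net> set physical=vnic1
  zonecfg:zone1:net> set address=192.168.1.51/24
  zonecfg:zone1:net> end
  zonecfg:zone1> commit

I believe a maxbw property can also be set on the VNIC, which would cover
the bandwidth-control piece, but I haven't tried it.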

Thanks for the request for input.

-Alex

P.S.  Will the Solaris 10 Containers support delegated zfs datasets?  In
my case this is a more important feature to have.



Re: [zones-discuss] Zone Stuck in a shutting_down state

2009-04-28 Thread Alexander J. Maidak
If it's a hung NFS mount you should be able to see it still mounted in
the /etc/mnttab file in the global zone: grep nfs /etc/mnttab.  It will be
mounted under the zonepath.  You should then be able to do a umount -f of
that mount point from the global zone, and if you're really lucky the
zone will finish shutting down.
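
For example (the zonepath is the one from your zoneadm output; the NFS
mount point under it is just a made-up example):

  global# grep nfs /etc/mnttab
  global# umount -f /zone/zonetest-new/root/some/nfs/mountpoint

If umount -f doesn't clear it, fuser -c on the mount point from the global
zone may show what is still holding it open.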

-Alex

On Tue, 2009-04-28 at 16:19 -0500, Derek McEachern wrote:
> It's possible that it could be nfs mount related since the zone did
> have nfs mounted fs's but they should have been umounted prior to
> shutting down the zone.  In any event I can no longer get into the
> zone to check using zlogin and zlogin -C.
> 
> I tried Bryan's suggestion on looking for processes that might have
> open filehandles to files under the zone's filesystem tree but I don't
> see that there are any.
> 
> On Tue, Apr 28, 2009 at 3:40 PM, Bryan Allen wrote:
> 
> +--
> | On 2009-04-28 15:37:22, Derek McEachern wrote:
> |
> | We were trying to bring down a zone on a S10 U4 system and it ended up
> | stuck in the shutting_down state.
> |
> | ID NAME          STATUS         PATH                 BRAND    IP
> | 74 zonetest-new  shutting_down  /zone/zonetest-new   native   shared
> |
> | The only process I see running is the zoneadmd process
> |
> | dlet15:/home/derekm/ ps -efZ | grep zonetest-new
> |   global root 12680     1   0   Apr 24 ?    0:02 zoneadmd -z zonetest-new
> 
> Do any processes (notably shells in the global zones) have an open
> filehandle somewhere under the zone's filesystem tree? This can (at least
> on Sol10) cause zones to not shut down, since it can't close the FH
> (I assume, anyway).
> --
> bda
> cyberpunk is dead. long live cyberpunk.
