There are a number of ways to mount storage from the global zone to a
non-global zone. What are the differences, and how do I choose?
Thanks,
Anthony
___
zones-discuss mailing list
zones-discuss@opensolaris.org
Is there any way to create non-legacy, canmount=yes filesystems with
set mountpoints for a zone prior to zoneadm install?
I'm trying to do some zone-creation automation, and one of the things
I need is a per-zone, writable /usr/local (yes, it's not 'standard', but then I
can count on one hand the number of
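One approach is a delegated ZFS dataset; a minimal sketch, assuming a pool named tank and a zone named myzone (both hypothetical). Note that a delegated dataset's mountpoint is set from inside the zone, so the mountpoint step happens after install/boot rather than strictly before it:

```shell
# In the global zone, before zoneadm install: create the dataset and
# delegate it to the zone via zonecfg.
zfs create tank/zones/myzone-local
zonecfg -z myzone "add dataset; set name=tank/zones/myzone-local; end"

# Later, inside the booted zone, the zone admin sets the mountpoint:
#   zfs set mountpoint=/usr/local tank/zones/myzone-local
```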
it depends on your requirements.
since you haven't specified any, i'll just say that i prefer using zfs
over ufs. hence i'd just recommend adding a zfs dataset via zonecfg.
ed
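To sketch the suggestion above (dataset and zone names are hypothetical), delegating a dataset via zonecfg looks roughly like this:

```shell
# Create the dataset in the global zone, then hand it to the zone.
zfs create tank/mydata
zonecfg -z myzone "add dataset; set name=tank/mydata; end"
# After the next zone boot the dataset is visible inside the zone,
# where it can be managed with the usual zfs(1M) commands.
```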
On Thu, Jan 22, 2009 at 10:44:09PM +1100, Anthony Yeung wrote:
> There are a number of ways to mount storage from the global zone to a
> non-global zone.
We are in the process of setting up a service consisting of SAN based global
storage which will host a number of ZFS based zones, each running an
application that must be highly available. The zones are made highly
available using Solaris Cluster and failover. This is all rather standard and
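For the storage side of such a setup, one common pattern is to put the zpool under SUNW.HAStoragePlus control so it fails over with the resource group. A minimal sketch, with hypothetical resource and pool names:

```shell
# Create a failover resource group and register the zpool with
# HAStoragePlus so it imports on whichever node hosts the group.
clresourcegroup create zone-rg
clresource create -g zone-rg -t SUNW.HAStoragePlus \
    -p Zpools=zonepool zonepool-rs
clresourcegroup online -M zone-rg
```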
Hi,
Has anyone *actually* observed that you can communicate between zones
with the cable removed when the /dev/ip parameter
ip_restrict_interzone_loopback is set to 0?
Here's my setup, s10u5.
global: 192.168.1.60/24 e1000g0, cabled
zone1: 192.168.1.61/24 e1000g1, cabled
zone2: 192.168.1.62/24 e1000g2, not cabled
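For anyone reproducing this test, the parameter can be read and toggled from the global zone with ndd; note that ndd settings do not persist across reboots:

```shell
# 1 (the default) forces inter-zone traffic out onto the wire;
# 0 allows it to loop back internally, cable or no cable.
ndd -get /dev/ip ip_restrict_interzone_loopback
ndd -set /dev/ip ip_restrict_interzone_loopback 0
```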
Thanks.
I am just trying to understand the pros/cons. If zfs is the preferred
way, under what circumstances should we consider using the others (such
as raw devices or ufs)? I suppose software compatibility (such as with
Oracle) is one reason we might consider using ufs.
Thanks,
Anthony
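To make the trade-offs concrete, here is roughly what each of the three options looks like in zonecfg (zone name, dataset, and device paths are hypothetical):

```shell
# 1. ZFS dataset delegation -- the zone administers the dataset itself:
zonecfg -z myzone "add dataset; set name=tank/myzone-data; end"

# 2. UFS filesystem -- mounted into the zone by zoneadm at boot:
zonecfg -z myzone <<EOF
add fs
set dir=/data
set special=/dev/dsk/c1t0d0s6
set raw=/dev/rdsk/c1t0d0s6
set type=ufs
end
EOF

# 3. Raw device passed through (e.g. for Oracle on raw volumes):
zonecfg -z myzone "add device; set match=/dev/rdsk/c1t0d0s7; end"
```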
Edward Pilatowi
Hi Geoff,
As an introduction I work on the Sun Cluster development team.
Questions about how to support the existing Sun Cluster product
can be sent to
sunclus...@sun.com
There are people that answer questions about the existing product.
Sun Cluster today supports non-global zones in two ways.