> Hello, and thanks a lot for the quick answers.
> Looking at your answer, it seems we are going
> to go with the sparse-root model.
> So, on to my second question, which is really
> more of a design question, I think...
> 1. Let's say I have my main server running with an
> HBA card attached to a disk array (the same as mentioned
> in my earlier question). I create and mount a nice
> disk pool with ZFS and everything is working.
Create a ZFS filesystem in the global zone for the zone. Something like:
$ zfs create -p mypool/export/zones/zone1
$ zfs set mountpoint=/export/zones/zone1 mypool/export/zones/zone1
$ chmod 700 /export/zones/zone1
(zfs creates the mountpoint directory for you; the chmod is needed because zoneadm insists the zonepath is mode 700.) With zonecfg, set your zonepath=/export/zones/zone1.
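For reference, the zonecfg/zoneadm side might look like the sketch below. The zone name "zone1" is just an example to match the path above; run this as root in the global zone. The plain "create" gives you the default (sparse-root) template, which matches the model you said you are going for:

```shell
# Configure a sparse-root zone whose zonepath is the ZFS filesystem
# created above ("zone1" is a hypothetical name).
zonecfg -z zone1 <<'EOF'
create
set zonepath=/export/zones/zone1
set autoboot=true
commit
EOF

# Install and boot it.
zoneadm -z zone1 install
zoneadm -z zone1 boot
```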
> 2. I now create ZONE1. I would then like to mount a
> /export/home/ftp area on the disk array as
> well. Now, what is controlling my access to the HBA
> card and disk array from my non-global zone? How do I
> see my newly created /dev/dsk/... on the array? Do I
> have to install the same driver in the non-global
> zone as I did on the server itself, or?
The easiest way is:
$ touch /reconfigure
$ init 6
After the reboot, in theory, you should see your new device. From memory, you
should be able to use the luxadm command to see the failover paths. Are you
able to do a reboot? There are a few more commands you need if you cannot. I
am assuming the server is not currently in production :)
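If a reconfiguration reboot is not an option, a dynamic rescan usually does the job; the exact steps depend on your HBA driver, but roughly:

```shell
# List attachment points; if the controller shows as unconfigured,
# configure it (replace c2 with your controller).
cfgadm -al
cfgadm -c configure c2

# Rebuild the /dev links for newly discovered disks.
devfsadm -c disk

# The new LUNs should now appear in format's disk list.
format
```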
You should mount /export/home/ftp in your global zone (using ZFS), and then
define it as a mount point with zonecfg for each of your zones. The non-global
zone never talks to the HBA directly; the global zone owns the device, and the
zone just sees a loopback mount.
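As a sketch, the lofs mount via zonecfg looks like this (the dataset name assumes the same "mypool" layout as above; adjust to taste):

```shell
# In the global zone: create the filesystem that will back the ftp area.
zfs create -p mypool/export/home/ftp
zfs set mountpoint=/export/home/ftp mypool/export/home/ftp

# Loopback-mount it into the zone ("zone1" is a hypothetical name).
zonecfg -z zone1 <<'EOF'
add fs
set dir=/export/home/ftp
set special=/export/home/ftp
set type=lofs
end
commit
EOF
```

After a zone reboot (or `zoneadm -z zone1 reboot`), /export/home/ftp shows up inside the zone, backed by the global zone's ZFS filesystem.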
> I hope I have succeeded in making some sense here,
> since this is bugging me and my manager a little now.
It will all look easy and logical once you have done it once.
This message posted from opensolaris.org
zones-discuss mailing list