Re: [zones-discuss] netmask warning, misconfiguration

2007-11-30 Thread Antonello Cruz
Jordan,

How did you set up the IP address for that zone?

Did you use, in zonecfg:
zonecfg:int-sagent-1-z1:net set address=172.20.46.188/24
?
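For reference, a hypothetical zonecfg session (zone and interface names are just examples, matching the ones in your warning). If I recall correctly, when the prefix length is given as part of the address, the netmask is derived from it directly and the netmasks(4) lookup is sidestepped:

```shell
# Hypothetical session -- zone and interface names are examples.
# Giving the prefix length (/24) with the address means the netmask
# comes from the address itself, not from a netmasks(4) lookup:
zonecfg -z int-sagent-1-z1
zonecfg:int-sagent-1-z1> select net physical=bge0
zonecfg:int-sagent-1-z1:net> set address=172.20.46.188/24
zonecfg:int-sagent-1-z1:net> end
zonecfg:int-sagent-1-z1> commit
zonecfg:int-sagent-1-z1> exit
```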

Antonello

Jordan Brown (Sun) wrote:
 I get:
 
 zoneadm: zone 'int-sagent-1-z1': WARNING: bge0:1: no matching subnet 
 found in netmasks(4) for 172.20.46.188; using default of 255.255.0.0.
 
 but my /etc/netmasks (on both the global and local zone) looks good:
 
 172.20.46.0    255.255.255.0
 
 (I also tried 172.20.0.0 on the theory that maybe it wanted me to set 
 the netmask for the entire Class B, but no dice.)
 
 I see many instances of this message in BugTraq and Google searches, but 
 I don't immediately see any resolutions.
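One diagnostic worth running (a sketch; the network number is taken from the warning above) is to check whether the name-service switch actually resolves that netmask, since /etc/netmasks is only consulted if nsswitch.conf says so:

```shell
# Diagnostic sketch: verify the netmask is resolvable via name services.
# Which sources does nsswitch.conf consult for netmasks?
grep '^netmasks' /etc/nsswitch.conf
# Query the name-service switch directly, by network number:
getent netmasks 172.20.46.0
# The /etc/netmasks entry should be: network number, whitespace, mask:
# 172.20.46.0    255.255.255.0
```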
 ___
 zones-discuss mailing list
 zones-discuss@opensolaris.org


Re: [zones-discuss] [storage-discuss] Using LiveUpgrade with Zones

2007-12-04 Thread Antonello Cruz
I don't think live_upgrade(5) is ZFS friendly. I've tried to use 
live_upgrade(5) with my zones on a ZFS filesystem and it didn't work. 
You could try to detach the zone before the luupgrade and then attach 
it in the new BE, but I wouldn't hold my breath.
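The detach/attach route might look like the following sketch. The zone name is an example, and "attach -u" (update the zone's packages on attach) only exists on releases that support it, so treat this as an outline rather than a recipe:

```shell
# Hedged sketch of the detach/attach workaround; zone name is an example.
# In the current boot environment, halt and detach the zone:
zoneadm -z Z1 halt
zoneadm -z Z1 detach
# Create and upgrade the new BE without the zone, activate it, and reboot.
# Then, in the new BE, attach the zone; -u updates it to match the BE,
# if your release supports update-on-attach:
zoneadm -z Z1 attach -u
zoneadm -z Z1 boot
```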

Antonello

Jeff Cheeney wrote:
 Maybe someone on the install or zones discussion lists can help answer 
 this question.
 
 
 Giovanni Schmid wrote:
 I have read various articles/docs/posts about Solaris zones and 
 Live Upgrade issues; however, I still have some doubts about the right 
 way to deploy Live Upgrade boot environments in the following case.
 I have a system with two disks configured as mirrors (that is, the same 
 fdisk partition and VTOC).
 On the primary disk, I installed Solaris 10 8/07 with two sparse root zones, 
 say Z1 and Z2. Just two file systems were set up on the primary disk: a 
 UFS mounted on /, and a ZFS pool mirroring slice 4 of the two disks, 
 mounted on /zfspool on the 1st disk. The UFS is intended to contain all but 
 the users' homes. These are served through a ZFS filesystem, namely 
 /zfspool/users/home. Only zone Z2 inherits this ZFS, via the add fs setting.
 All that premised, my questions are:
 What is the correct way of using Live Upgrade in this case? Would 
 something like:

 # lucreate -c bootenv1 -m /:c2d0s0:ufs -n bootenv2

 be sufficient? That is, will Z2 in bootenv2 see /zfspool/users/home?

 Any help is appreciated !

 g.s
  
  
 This message posted from opensolaris.org
 ___
 storage-discuss mailing list
 [EMAIL PROTECTED]
 http://mail.opensolaris.org/mailman/listinfo/storage-discuss
   
 