>
> I am using nested ZFS filesystems: >zfs list -o name,zoned,mountpoint
>

  I used the following script to create a zone on a nested ZFS
filesystem, and it worked:

node default {
  zone { "nested":
      realhostname => "nested",
      autoboot     => "true",
      path         => "/nested/mount/zones/nested",  # zonepath on the nested dataset
      ip           => ["e1000g0:10.1.16.240"],
      sysidcfg     => "zones/sysidcfg",
  }
}

puppet ./zone-works.pp
notice: //Node[default]/Zone[nested]/ensure: created

zoneadm -z nested list -v
  ID NAME             STATUS     PATH                          BRAND    IP
  10 nested           running    /nested/mount/zones/nested    native   shared

zfs list |grep nested
rpool/nested                      3.87G  85.1G    23K  /nested
rpool/nested/mount                3.87G  85.1G    23K  /nested/mount
rpool/nested/mount/zones          3.87G  85.1G    23K  /nested/mount/zones
rpool/nested/mount/zones/nested   3.87G  85.1G  3.87G  /nested/mount/zones/nested
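
  (For reference, the nested layout above can be recreated with
something like the following. I'm assuming the top-level mountpoint was
set explicitly, since rpool/nested mounts at /nested rather than the
default /rpool/nested, and zoneadm wants the zonepath directory to be
mode 700:)

# create the nested datasets, then lock down the zonepath
zfs create -o mountpoint=/nested rpool/nested
zfs create rpool/nested/mount
zfs create rpool/nested/mount/zones
zfs create rpool/nested/mount/zones/nested
chmod 700 /nested/mount/zones/nested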

  The zoned setting appears to matter only for a dataset delegated to a
non-global zone.  I would suggest you try to spin up the same zone on a
non-nested ZFS filesystem to see if that works.  I've used zones on all
versions of Solaris 10 and have not encountered the error you're
hitting, but I've never used nested mounts, and I've only used Puppet
to spin up zones on update 8 nodes.  I'm thinking there may be
something related to the nested mounts and your Solaris patch level?
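
  If it helps, here's roughly what I'd try for both of those points.
I'm taking the pool name rootpool from your listing; the
rootpool/flatzones and rootpool/delegated dataset names are just
examples:

# 1. A non-nested zonepath: a single dataset directly under the pool.
zfs create -o mountpoint=/flatzones rootpool/flatzones
mkdir -m 700 /flatzones/test
zonecfg -z test "set zonepath=/flatzones/test"

# 2. For comparison, zoned only applies to a dataset you delegate in:
zfs create rootpool/delegated
zonecfg -z test "add dataset; set name=rootpool/delegated; end"
# ZFS should flip zoned=on on rootpool/delegated once the zone boots.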

   HTH. Derek.



> NAME                               ZONED  MOUNTPOINT
> rootpool                             off  /rootpool
> rootpool/ROOT                        off  legacy
> rootpool/ROOT/s10x_u7                off  /
> rootpool/ROOT/s10x_u7/var            off  /var
> rootpool/dump                          -  -
> rootpool/export                      off  /export
> rootpool/export/home                 off  /export/home
> rootpool/export/zones                off  /export/zones
> rootpool/export/zones/test           off  /export/zones/test
> rootpool/swap                          -  -
>
> Here is my zonecfg: >zonecfg -z test info
>
> zonename: test
> zonepath: /export/zones/test
> brand: native
> autoboot: true
> bootargs:
> pool:
> limitpriv:
> scheduling-class:
> ip-type: shared
> net:
>         address: 192.168.1.100
>         physical: aggr10001
>         defrouter not specified
>
> I don't get the error when the zonepath is on a UFS filesystem.
> Originally I was thinking that the error occurred only on newer
> releases of Solaris, but it probably has more to do with UFS vs. ZFS.
>
> John
