Re: [zones-discuss] Zones on shared storage - a warning

2010-02-23 Thread Frank Batschulat (Home)
update on this one: 

A workaround if you will, or rather the more appropriate way to do this, is apparently
to use lofiadm(1M) to create a pseudo block device backed by the file hosted on NFS
and to use the resulting lofi device (e.g. /dev/lofi/1) as the device for zpool create
and all subsequent I/O (this did not produce the strange CKSUM errors), e.g.:

osoldev.root./export/home/batschul.= mount -F nfs opteron:/pool/zones /nfszone
osoldev.root./export/home/batschul.= mount -v| grep nfs
opteron:/pool/zones on /nfszone type nfs remote/read/write/setuid/devices/xattr/dev=9080001 on Tue Feb  9 10:37:00 2010
osoldev.root./export/home/batschul.= nfsstat -m
/nfszone from opteron:/pool/zones
 Flags: vers=4,proto=tcp,sec=sys,hard,intr,link,symlink,acl,rsize=1048576,wsize=1048576,retrans=5,timeo=600
 Attr cache: acregmin=3,acregmax=60,acdirmin=30,acdirmax=60

osoldev.root./export/home/batschul.=  mkfile -n 7G /nfszone/remote.file
osoldev.root./export/home/batschul.=  ls -la /nfszone
total 28243534
drwxrwxrwx   2 nobody   nobody 6 Feb  9 09:36 .
drwxr-xr-x  30 batschul other 32 Feb  8 22:24 ..
-rw-------   1 nobody   nobody   7516192768 Feb  9 09:36 remote.file

osoldev.root./export/home/batschul.= lofiadm -a /nfszone/remote.file
/dev/lofi/1

osoldev.root./export/home/batschul.= lofiadm
Block Device             File                           Options
/dev/lofi/1              /nfszone/remote.file           -

osoldev.root./export/home/batschul.= zpool create -m /tank/zones/nfszone nfszone /dev/lofi/1

Feb  9 10:50:35 osoldev zfs: [ID 249136 kern.info] created version 22 pool nfszone using 22

osoldev.root./export/home/batschul.= zpool status -v nfszone
  pool: nfszone
 state: ONLINE
 scrub: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        nfszone        ONLINE       0     0     0
          /dev/lofi/1  ONLINE       0     0     0
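
For completeness, a hedged sketch of tearing the layering down again in reverse order (these commands are not from the original post):

osoldev.root./export/home/batschul.= zpool export nfszone
osoldev.root./export/home/batschul.= lofiadm -d /dev/lofi/1
osoldev.root./export/home/batschul.= umount /nfszone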

---
frankB


Re: [zones-discuss] Zones on shared storage - a warning

2010-01-09 Thread Frank Batschulat (Home)
On Fri, 08 Jan 2010 18:33:06 +0100, Mike Gerdts mger...@gmail.com wrote:

 I've written a dtrace script to get the checksums on Solaris 10.
 Here's what I see with NFSv3 on Solaris 10.

JFYI, I've reproduced it as well using a Solaris 10 Update 8 SB2000 SPARC client
and NFSv4.

Much like you, I also get READ errors along with the CKSUM errors, which
is different from my observation on an ONNV client.

Unfortunately your dtrace script did not work for me, i.e. it
did not spit out anything :(

cheers
frankB



Re: [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread Frank Batschulat (Home)
On Wed, 23 Dec 2009 03:02:47 +0100, Mike Gerdts mger...@gmail.com wrote:

 I've been playing around with zones on NFS a bit and have run into
 what looks to be a pretty bad snag - ZFS keeps seeing read and/or
 checksum errors.  This exists with S10u8 and OpenSolaris dev build
 snv_129.  This is likely a blocker for anyone thinking of
 implementing parts of Ed's Zones on Shared Storage:

 http://hub.opensolaris.org/bin/view/Community+Group+zones/zoss

 The OpenSolaris example appears below.  The order of events is:

 1) Create a file on NFS, turn it into a zpool
 2) Configure a zone with the pool as zonepath
 3) Install the zone, verify that the pool is healthy
 4) Boot the zone, observe that the pool is sick
[...]
 r...@soltrain19# zoneadm -z osol boot

 r...@soltrain19# zpool status osol
   pool: osol
  state: DEGRADED
 status: One or more devices has experienced an unrecoverable error.  An
 attempt was made to correct the error.  Applications are unaffected.
 action: Determine if the device needs to be replaced, and clear the errors
 using 'zpool clear' or replace the device with 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-9P
  scrub: none requested
 config:

 NAME                  STATE     READ WRITE CKSUM
 osol                  DEGRADED     0     0     0
   /mnt/osolzone/root  DEGRADED     0     0   117  too many errors

 errors: No known data errors

Hey Mike, you're not the only victim of these strange CKSUM errors; I hit
the same during my slightly different testing, where I'm NFS-mounting a
pre-existing remote file that lives in a zpool on the NFS server and using
that to create a zpool and install zones into it.

I've filed today:

6915265 zpools on files (over NFS) accumulate CKSUM errors with no apparent reason

Here's the relevant piece worth investigating out of it (leaving out the actual setup etc.).
As in your case, creating the zpool and installing the zone into it still gives
a healthy zpool, but immediately after booting the zone, the zpool served over NFS
accumulated CKSUM errors.

Of particular interest are the 'cksum_actual' values as reported by Mike for his
test case here:

http://www.mail-archive.com/zfs-disc...@opensolaris.org/msg33041.html

compared to the 'cksum_actual' values I got in the fmdump error output on
my test case/system.

Note: the NFS server's zpool that is serving and sharing the file we use is
healthy.

The zone is now halted on my test system; checking fmdump:

osoldev.batschul./export/home/batschul.= fmdump -eV | grep cksum_actual | sort | uniq -c | sort -n | tail
     2    cksum_actual = 0x4bea1a77300 0xf6decb1097980 0x217874c80a8d9100 0x7cd81ca72df5ccc0
     2    cksum_actual = 0x5c1c805253 0x26fa7270d8d2 0xda52e2079fd74 0x3d2827dd7ee4f21
     6    cksum_actual = 0x28e08467900 0x479d57f76fc80 0x53bca4db5209300 0x983ddbb8c4590e40
*A   6    cksum_actual = 0x348e6117700 0x765aa1a547b80 0xb1d6d98e59c3d00 0x89715e34fbf9cdc0
*B   7    cksum_actual = 0x0 0x0 0x0 0x0
*C  11    cksum_actual = 0x1184cb07d00 0xd2c5aab5fe80 0x69ef5922233f00 0x280934efa6d20f40
*D  14    cksum_actual = 0x175bb95fc00 0x1767673c6fe00 0xfa9df17c835400 0x7e0aef335f0c7f00
*E  17    cksum_actual = 0x2eb772bf800 0x5d8641385fc00 0x7cf15b214fea800 0xd4f1025a8e66fe00
*F  20    cksum_actual = 0xbaddcafe00 0x5dcc54647f00 0x1f82a459c2aa00 0x7f84b11b3fc7f80
*G  25    cksum_actual = 0x5d6ee57f00 0x178a70d27f80 0x3fc19c3a19500 0x82804bc6ebcfc0

osoldev.root./export/home/batschul.= zpool status -v
  pool: nfszone
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        nfszone     DEGRADED     0     0     0
          /nfszone  DEGRADED     0     0   462  too many errors

errors: No known data errors

==

now compare this with Mike's error output as posted here:

http://www.mail-archive.com/zfs-disc...@opensolaris.org/msg33041.html

# fmdump -eV | grep cksum_actual | sort | uniq -c | sort -n | tail

     2    cksum_actual = 0x14c538b06b6 0x2bb571a06ddb0 0x3e05a7c4ac90c62 0x290cbce13fc59dce
*D   3    cksum_actual = 0x175bb95fc00 0x1767673c6fe00 0xfa9df17c835400 0x7e0aef335f0c7f00
*E   3    cksum_actual = 0x2eb772bf800 0x5d8641385fc00 0x7cf15b214fea800 0xd4f1025a8e66fe00
*B   4    cksum_actual = 0x0 0x0 0x0 0x0
     4    cksum_actual = 0x1d32a7b7b00 0x248deaf977d80 0x1e8ea26c8a2e900 0x330107da7c4bcec0
     5    cksum_actual = 0x14b8f7afe6 0x915db8d7f87 0x205dc7979ad73 0x4e0b3a8747b8a8
*C   6    cksum_actual = 0x1184cb07d00 0xd2c5aab5fe80 0x69ef5922233f00 0x280934efa6d20f40
*A   6

Re: [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread James Carlson
Frank Batschulat (Home) wrote:
 This just can't be an accident; there must be some coincidence, and thus there's a good chance
 that these CKSUM errors have a common source, either in ZFS or in NFS?

One possible cause would be a lack of substantial exercise.  The man
page says:

 A regular file. The use of files as a backing  store  is
 strongly  discouraged.  It  is  designed  primarily  for
 experimental purposes, as the fault tolerance of a  file
 is  only  as  good  as  the file system of which it is a
 part. A file must be specified by a full path.

Could it be that "discouraged" and "experimental" mean "not tested as
thoroughly as you might like, and certainly not a good idea in any sort
of production environment"?

It sounds like a bug, sure, but the fix might be to remove the option.

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com


Re: [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread Mike Gerdts
On Fri, Jan 8, 2010 at 5:28 AM, Frank Batschulat (Home)
frank.batschu...@sun.com wrote:
[snip]
 Hey Mike, you're not the only victim of these strange CKSUM errors; I hit
 the same during my slightly different testing, where I'm NFS-mounting a
 pre-existing remote file that lives in a zpool on the NFS server and using
 that to create a zpool and install zones into it.

What does your overall setup look like?

Mine is:

T5220 + Sun System Firmware 7.2.4.f 2009/11/05 18:21
  Primary LDom
    Solaris 10u8
    Logical Domains Manager 1.2,REV=2009.06.25.09.48 + 142840-03
  Guest Domain: 4 vcpus + 15 GB memory
    OpenSolaris snv_130
    (this is where the problem is observed)

I've seen similar errors on Solaris 10 in the primary domain and on an
M4000.  Unfortunately Solaris 10 doesn't show the checksums in the
ereport.  There I noticed a mixture of read errors and checksum
errors - and lots more of them.  This could be because the S10 zone
was a full root SUNWCXall zone compared to the much smaller default ipkg
branded zone.  On the primary domain running Solaris 10...

(this command was run some time ago)
primary-domain# zpool status myzone
  pool: myzone
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        myzone      DEGRADED     0     0     0
          /foo/20g  DEGRADED  4.53K     0   671  too many errors

errors: No known data errors


(this was run today, many days after the previous command)
primary-domain# fmdump -eV | egrep zio_err | uniq -c | head
     1    zio_err = 5
     1    zio_err = 50
     1    zio_err = 5
     1    zio_err = 50
     1    zio_err = 5
     1    zio_err = 50
     2    zio_err = 5
     1    zio_err = 50
     3    zio_err = 5
     1    zio_err = 50


Note that even though I had thousands of read errors, the zone worked
just fine.  I would never have known (or even suspected) there was a problem
if I hadn't run zpool status or the various FMA commands.


 I've filed today:

 6915265 zpools on files (over NFS) accumulate CKSUM errors with no apparent 
 reason

Thanks.  I'll open a support call to help get some funding on it...


Re: [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread Mike Gerdts
On Fri, Jan 8, 2010 at 9:11 AM, Mike Gerdts mger...@gmail.com wrote:
 I've seen similar errors on Solaris 10 in the primary domain and on an
 M4000.  Unfortunately Solaris 10 doesn't show the checksums in the
 ereport.  There I noticed a mixture of read errors and checksum
 errors - and lots more of them.  This could be because the S10 zone
 was a full root SUNWCXall zone compared to the much smaller default ipkg
 branded zone.  On the primary domain running Solaris 10...

I've written a dtrace script to get the checksums on Solaris 10.
Here's what I see with NFSv3 on Solaris 10.

# zoneadm -z zone1 halt ; zpool export pool1 ; zpool import -d /mnt/pool1 pool1 ; zoneadm -z zone1 boot ; sleep 30 ; pkill dtrace

# ./zfs_bad_cksum.d
Tracing...
dtrace: error on enabled probe ID 9 (ID 43443:
fbt:zfs:zio_checksum_error:return): invalid address (0x301b363a000) in
action #4 at DIF offset 20
dtrace: error on enabled probe ID 9 (ID 43443:
fbt:zfs:zio_checksum_error:return): invalid address (0x3037f746000) in
action #4 at DIF offset 20
cccdtrace:
error on enabled probe ID 9 (ID 43443:
fbt:zfs:zio_checksum_error:return): invalid address (0x3026e7b) in
action #4 at DIF offset 20
cc
Checksum errors:
   3 : 0x130e01011103 0x20108 0x0 0x400 (fletcher_4_native)
   3 : 0x220125cd8000 0x62425980c08 0x16630c08296c490c 0x82b320c082aef0c (fletcher_4_native)
   3 : 0x2f2a0a202a20436f 0x7079726967687420 0x2863292032303031 0x2062792053756e20 (fletcher_4_native)
   3 : 0x3c21444f43545950 0x452048544d4c2050 0x55424c494320222d 0x2f2f5733432f2f44 (fletcher_4_native)
   3 : 0x6005a8389144 0xc2080e6405c200b6 0x960093d40800 0x9eea007b9800019c (fletcher_4_native)
   3 : 0xac044a6903d00163 0xa138c8003446 0x3f2cd1e100b10009 0xa37af9b5ef166104 (fletcher_4_native)
   3 : 0xbaddcafebaddcafe 0xc 0x0 0x0 (fletcher_4_native)
   3 : 0xc4025608801500ff 0x1018500704528210 0x190103e50066 0xc34b90001238f900 (fletcher_4_native)
   3 : 0xfe00fc01fc42fc42 0xfc42fc42fc42fc42 0xfffc42fc42fc42fc 0x42fc42fc42fc42fc (fletcher_4_native)
   4 : 0x4b2a460a 0x0 0x4b2a460a 0x0 (fletcher_4_native)
   4 : 0xc00589b159a00 0x543008a05b673 0x124b60078d5be 0xe3002b2a0b605fb3 (fletcher_4_native)
   4 : 0x130e010111 0x32000b301080034 0x10166cb34125410 0xb30c19ca9e0c0860 (fletcher_4_native)
   4 : 0x130e010111 0x3a201080038 0x104381285501102 0x418016996320408 (fletcher_4_native)
   4 : 0x130e010111 0x3a201080038 0x1043812c5501102 0x81802325c080864 (fletcher_4_native)
   4 : 0x130e010111 0x3a0001c01080038 0x1383812c550111c 0x818975698080864 (fletcher_4_native)
   4 : 0x1f81442e9241000 0x2002560880154c00 0xff10185007528210 0x19010003e566 (fletcher_4_native)
   5 : 0xbab10c 0xf 0x53ae 0xdd549ae39aa1ba20 (fletcher_4_native)
   5 : 0x130e010111 0x3ab01080038 0x1163812c550110b 0x8180a7793080864 (fletcher_4_native)
   5 : 0x61626300 0x0 0x0 0x0 (fletcher_4_native)
   5 : 0x8003 0x3df0d6a1 0x0 0x0 (fletcher_4_native)
   6 : 0xbab10c 0xf 0x5384 0xdd549ae39aa1ba20 (fletcher_4_native)
   7 : 0xbab10c 0xf 0x0 0x9af5e5f61ca2e28e (fletcher_4_native)
   7 : 0x130e010111 0x3a201080038 0x104381265501102 0xc18c7210c086006 (fletcher_4_native)
   7 : 0x275c222074650a2e 0x5c222020436f7079 0x7269676874203139 0x38392041540a2e5c (fletcher_4_native)
   8 : 0x130e010111 0x3a0003101080038 0x1623812c5501131 0x8187f66a4080864 (fletcher_4_native)
   9 : 0x8a000801010c0682 0x2eed0809c1640513 0x70200ff00026424 0x18001d16101f0059 (fletcher_4_native)
  12 : 0xbab10c 0xf 0x0 0x45a9e1fc57ca2aa8 (fletcher_4_native)
  30 : 0xbaddcafebaddcafe 0xbaddcafebaddcafe 0xbaddcafebaddcafe 0xbaddcafebaddcafe (fletcher_4_native)
  47 : 0x0 0x0 0x0 0x0 (fletcher_4_native)
  92 : 0x130e01011103 0x10108 0x0 0x200 (fletcher_4_native)

Since I had to guess at what the Solaris 10 source looks like, some
extra eyeballs on the dtrace script are in order.

Mike

-- 
Mike Gerdts
http://mgerdts.blogspot.com/


zfs_bad_cksum.d
Description: Binary data
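
The attached zfs_bad_cksum.d is not reproduced by the archive. As a rough illustration of the approach only (a minimal sketch, not Mike's script; it assumes the private zio_t/spa_t layout of that era and that a nonzero return from zio_checksum_error() signals a mismatch), something along these lines counts checksum failures per pool:

#!/usr/sbin/dtrace -qs

/* remember the zio handed to zio_checksum_error() */
fbt:zfs:zio_checksum_error:entry
{
        self->zio = (zio_t *)arg0;
}

/* a nonzero return value means the block failed checksum verification */
fbt:zfs:zio_checksum_error:return
/self->zio != NULL && arg1 != 0/
{
        /* io_spa->spa_name: assumes the private structure layout */
        @bad[stringof(self->zio->io_spa->spa_name)] = count();
}

fbt:zfs:zio_checksum_error:return
{
        self->zio = 0;
}

dtrace:::END
{
        printa("  %-20s %@u checksum mismatches\n", @bad);
}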

[zones-discuss] Zones on shared storage - a warning

2009-12-22 Thread Mike Gerdts
I've been playing around with zones on NFS a bit and have run into
what looks to be a pretty bad snag - ZFS keeps seeing read and/or
checksum errors.  This exists with S10u8 and OpenSolaris dev build
snv_129.  This is likely a blocker for anyone thinking of
implementing parts of Ed's Zones on Shared Storage:

http://hub.opensolaris.org/bin/view/Community+Group+zones/zoss

The OpenSolaris example appears below.  The order of events is:

1) Create a file on NFS, turn it into a zpool
2) Configure a zone with the pool as zonepath
3) Install the zone, verify that the pool is healthy
4) Boot the zone, observe that the pool is sick

r...@soltrain19# mount filer:/path /mnt
r...@soltrain19# cd /mnt
r...@soltrain19# mkdir osolzone
r...@soltrain19# mkfile -n 8g root
r...@soltrain19# zpool create -m /zones/osol osol /mnt/osolzone/root
r...@soltrain19# zonecfg -z osol
osol: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:osol> create
zonecfg:osol> info
zonename: osol
zonepath:
brand: ipkg
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
hostid:
zonecfg:osol> set zonepath=/zones/osol
zonecfg:osol> set autoboot=false
zonecfg:osol> verify
zonecfg:osol> commit
zonecfg:osol> exit

r...@soltrain19# chmod 700 /zones/osol

r...@soltrain19# zoneadm -z osol install
   Publisher: Using opensolaris.org (http://pkg.opensolaris.org/dev/
http://pkg-na-2.opensolaris.org/dev/).
   Publisher: Using contrib (http://pkg.opensolaris.org/contrib/).
   Image: Preparing at /zones/osol/root.
   Cache: Using /var/pkg/download.
Sanity Check: Looking for 'entire' incorporation.
  Installing: Core System (output follows)
DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                46/46 12334/12334    93.1/93.1

PHASE                                        ACTIONS
Install Phase                            18277/18277
No updates necessary for this image.
  Installing: Additional Packages (output follows)
DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                36/36   3339/3339    21.3/21.3

PHASE                                        ACTIONS
Install Phase                              4466/4466

Note: Man pages can be obtained by installing SUNWman
 Postinstall: Copying SMF seed repository ... done.
 Postinstall: Applying workarounds.
Done: Installation completed in 2139.186 seconds.

  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
  to complete the configuration process.
6.3 Boot the OpenSolaris zone
r...@soltrain19# zpool status osol
  pool: osol
 state: ONLINE
 scrub: none requested
config:

        NAME                  STATE     READ WRITE CKSUM
        osol                  ONLINE       0     0     0
          /mnt/osolzone/root  ONLINE       0     0     0

errors: No known data errors

r...@soltrain19# zoneadm -z osol boot

r...@soltrain19# zpool status osol
  pool: osol
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

        NAME                  STATE     READ WRITE CKSUM
        osol                  DEGRADED     0     0     0
          /mnt/osolzone/root  DEGRADED     0     0   117  too many errors

errors: No known data errors

r...@soltrain19# zlogin osol uptime
  5:31pm  up 1 min(s),  0 users,  load average: 0.69, 0.38, 0.52


-- 
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zones-discuss] Zones on shared storage - a warning

2009-12-22 Thread Mike Gerdts
On Tue, Dec 22, 2009 at 8:02 PM, Mike Gerdts mger...@gmail.com wrote:
 I've been playing around with zones on NFS a bit and have run into
 what looks to be a pretty bad snag - ZFS keeps seeing read and/or
 checksum errors.  This exists with S10u8 and OpenSolaris dev build
 snv_129.  This is likely a blocker for anyone thinking of
 implementing parts of Ed's Zones on Shared Storage:

 http://hub.opensolaris.org/bin/view/Community+Group+zones/zoss

 The OpenSolaris example appears below.  The order of events is:

 1) Create a file on NFS, turn it into a zpool
 2) Configure a zone with the pool as zonepath
 3) Install the zone, verify that the pool is healthy
 4) Boot the zone, observe that the pool is sick
[snip]

An off-list conversation and a bit of digging into other tests I have
done show that this is likely limited to NFSv3.  I cannot say that
this problem has been seen with NFSv4.
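
(For anyone who wants to compare the two directly: a hedged example of pinning the mount to a specific NFS version on Solaris, reusing the filer:/path export from the transcript above; nfsstat -m then confirms the negotiated version.)

# mount -F nfs -o vers=3 filer:/path /mnt
# mount -F nfs -o vers=4 filer:/path /mnt
# nfsstat -m /mnt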

-- 
Mike Gerdts
http://mgerdts.blogspot.com/