Maybe ZFS hasn't seen an error in long enough that it considers
the pool healthy? You could try clearing the pool and then observing.
On Wed, Jan 28, 2009 at 9:40 AM, Ben Miller mil...@eecis.udel.edu wrote:
# zpool status -xv
all pools are healthy
Ben
What does 'zpool status -xv' show?
On Tue, Jan 27, 2009 at 8:01 AM, Ben Miller
mil...@eecis.udel.edu wrote:
I forgot the pool that's having problems was recreated recently so it's already
at zfs version 3. I just did a 'zfs upgrade -a' for another pool, but some of
those filesystems failed since they are busy and couldn't be unmounted.
# zfs upgrade -a
cannot unmount '/var/mysql': Device busy
cannot ...
You can upgrade live. 'zfs upgrade' with no arguments shows the ZFS
version of each filesystem present without upgrading anything.
On Jan 24, 2009, at 10:19 AM, Ben Miller mil...@eecis.udel.edu wrote:
We haven't done 'zfs upgrade ...' at all. I'll give that a try the next time the
system can be taken down.
Ben
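As noted above, 'zfs upgrade' with no arguments only reports versions. A minimal sketch of scripting around that report, assuming the Solaris 10u6 wording ("This system is currently running ZFS filesystem version N.") — verify the exact string on your release:

```shell
# Hedged sketch: extract the filesystem version number from captured
# 'zfs upgrade' output. The message wording is an assumption from
# Solaris 10u6, not confirmed by this thread.
current_fs_version() {
  # $1 is the output of 'zfs upgrade' run with no arguments
  printf '%s\n' "$1" | sed -n 's/.*ZFS filesystem version \([0-9][0-9]*\)\..*/\1/p'
}

# On a live system you would call: current_fs_version "$(zfs upgrade)"
```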
Ben Miller wrote:
We haven't done 'zfs upgrade ...' at all. I'll give that a try the next time
the system can be taken down.
No need to take the system down, it can be done on the fly.
The only downside to the upgrade is that you may not be able
to import the pool or file system on an older release.
A little gotcha that I found in my 10u6 update process was that 'zpool
upgrade [poolname]' is not the same as 'zfs upgrade
[poolname]/[filesystem(s)]'
What does 'zfs upgrade' say? I'm not saying this is the source of
your problem, but it's a detail that seemed to affect stability for
me.
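The gotcha above — that 'zpool upgrade' and 'zfs upgrade' operate on different things — can be sketched as a tiny helper that builds the right command for a given target. The names here are illustrative, not from the thread, and the pool-vs-dataset heuristic is deliberately simplistic:

```shell
# Hedged sketch: 'zpool upgrade' takes a pool name, while 'zfs upgrade'
# takes a dataset (pool/filesystem). This helper only builds the command
# string; it does not run anything. Note it treats a bare name as a pool,
# which ignores that 'zfs upgrade pool1' (the top-level dataset) is also valid.
upgrade_cmd() {
  case "$1" in
    */*) echo "zfs upgrade $1" ;;    # dataset, e.g. pool1/home
    *)   echo "zpool upgrade $1" ;;  # whole pool, e.g. pool1
  esac
}
```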
The pools are upgraded to version 10. Also, this is on Solaris 10u6.
# zpool upgrade
This system is currently running ZFS pool version 10.
All pools are formatted using this version.
Ben
Bug ID is 6793967.
This problem just happened again.
% zpool status pool1
  pool: pool1
 state: DEGRADED
 scrub: resilver completed after 0h48m with 0 errors on Mon Jan  5 12:30:52 2009
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       DEGRADED     0     0     0
What's the output of 'zfs upgrade' and 'zpool upgrade'? (I'm just
curious - I had a similar situation which seems to be resolved now
that I've gone to Solaris 10u6 or OpenSolaris 2008.11).
On Wed, Jan 21, 2009 at 2:11 PM, Ben Miller mil...@eecis.udel.edu wrote:
I just put in a (low priority) bug report on this.
Ben
This post from close to a year ago never received a response. We just had this
same thing happen to another server that is running Solaris 10 U6. One of the
disks was marked as removed and the pool degraded, but 'zpool status -x' says
all pools are healthy. After doing a 'zpool online' on ...
We run a cron job that does a 'zpool status -x' to check for any degraded
pools. We just happened to find a degraded pool this morning by running 'zpool
status' by hand, and were surprised, as we didn't get a notice from the
cron job.
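A cron check like the one described could look roughly like this. The healthy-output string is taken from the 'zpool status -x' replies in this thread, and the mail address is a placeholder; note that the bug being discussed means this exact check can miss a degraded pool:

```shell
# Hedged sketch of the cron health check described above. The comparison
# string ("all pools are healthy") is what 'zpool status -x' printed in
# this thread; verify it on your release.
check_pool_health() {
  # $1 is the captured output of 'zpool status -x'
  if [ "$1" = "all pools are healthy" ]; then
    echo "OK"
  else
    echo "ALERT"
  fi
}

# In cron, something like (admin@example.com is a placeholder):
#   out=$(zpool status -x)
#   [ "$(check_pool_health "$out")" = "ALERT" ] \
#     && echo "$out" | mailx -s "zpool degraded" admin@example.com
```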
# uname -srvp
SunOS 5.11 snv_78 i386
#