[zfs-discuss] Disk keeps resilvering, was: Replacing a disk never completes

2010-09-30 Thread Ben Miller
On 09/22/10 04:27 PM, Ben Miller wrote: On 09/21/10 09:16 AM, Ben Miller wrote: I had tried a clear a few times with no luck. I just did a detach and that did remove the old disk and has now triggered another resilver which hopefully works. I had tried a remove rather than a detach before
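The recovery described above can be summarized as a command sequence (pool and device names follow the later messages in this thread; the identifier of the stuck old device is a placeholder, not from the original post):

```shell
# Sketch of the fix from this thread: after 'zpool replace' stalls and
# the old disk lingers in a 'replacing' vdev, detach (not remove) the
# old half to drop it and trigger a fresh resilver.
zpool replace pool1 c2t0d0               # in-place 500GB -> 2TB swap
zpool status -v pool1                    # old disk still listed under 'replacing'
zpool detach pool1 <old-device-or-guid>  # placeholder: old device name or GUID
zpool status -v pool1                    # a new resilver should now be running
```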

Re: [zfs-discuss] Replacing a disk never completes

2010-09-21 Thread Ben Miller
On 09/20/10 10:45 AM, Giovanni Tirloni wrote: On Thu, Sep 16, 2010 at 9:36 AM, Ben Miller bmil...@mail.eecis.udel.edu wrote: I have an X4540 running b134 where I'm replacing 500GB disks with 2TB disks (Seagate Constellation) and the pool seems sick now

[zfs-discuss] Replacing a disk never completes

2010-09-16 Thread Ben Miller
I have an X4540 running b134 where I'm replacing 500GB disks with 2TB disks (Seagate Constellation) and the pool seems sick now. The pool has four raidz2 vdevs (8+2) where the first set of 10 disks were replaced a few months ago. I replaced two disks in the second set (c2t0d0, c3t0d0) a

[zfs-discuss] Pool is wrong size in b134

2010-06-17 Thread Ben Miller
I upgraded a server today that has been running SXCE b111 to the OpenSolaris preview b134. It has three pools and two are fine, but one comes up with no space available in the pool (SCSI jbod of 300GB disks). The zpool version is at 14. I tried exporting the pool and re-importing and I get

Re: [zfs-discuss] Pool is wrong size in b134

2010-06-17 Thread Ben Miller
be in touch. Thanks, Cindy On 06/17/10 07:02, Ben Miller wrote: I upgraded a server today that has been running SXCE b111 to the OpenSolaris preview b134. It has three pools and two are fine, but one comes up with no space available in the pool (SCSI jbod of 300GB disks). The zpool version

Re: [zfs-discuss] zpool status -x strangeness

2009-01-28 Thread Ben Miller
# zpool status -xv all pools are healthy Ben What does 'zpool status -xv' show? On Tue, Jan 27, 2009 at 8:01 AM, Ben Miller mil...@eecis.udel.edu wrote: I forgot the pool that's having problems was recreated recently so it's already at zfs version 3. I just did a 'zfs upgrade

Re: [zfs-discuss] zpool status -x strangeness

2009-01-27 Thread Ben Miller
unmount '/var/postfix': Device busy 6 filesystems upgraded 821 filesystems already at this version Ben You can upgrade live. 'zfs upgrade' with no arguments shows you the zfs version status of filesystems present without upgrading. On Jan 24, 2009, at 10:19 AM, Ben Miller mil

Re: [zfs-discuss] zpool status -x strangeness

2009-01-24 Thread Ben Miller
? I'm not saying this is the source of your problem, but it's a detail that seemed to affect stability for me. On Thu, Jan 22, 2009 at 7:25 AM, Ben Miller The pools are upgraded to version 10. Also, this is on Solaris 10u6. -- This message posted from opensolaris.org

Re: [zfs-discuss] zpool status -x strangeness

2009-01-22 Thread Ben Miller
which seems to be resolved now that I've gone to Solaris 10u6 or OpenSolaris 2008.11). On Wed, Jan 21, 2009 at 2:11 PM, Ben Miller mil...@eecis.udel.edu wrote: Bug ID is 6793967. This problem just happened again. % zpool status pool1 pool: pool1 state: DEGRADED scrub

Re: [zfs-discuss] zpool status -x strangeness

2009-01-21 Thread Ben Miller
Bug ID is 6793967. This problem just happened again.

% zpool status pool1
  pool: pool1
 state: DEGRADED
 scrub: resilver completed after 0h48m with 0 errors on Mon Jan 5 12:30:52 2009
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       DEGRADED     0     0     0

Re: [zfs-discuss] zpool status -x strangeness

2009-01-12 Thread Ben Miller
I just put in a (low priority) bug report on this. Ben This post from close to a year ago never received a response. We just had this same thing happen to another server that is running Solaris 10 U6. One of the disks was marked as removed and the pool degraded, but 'zpool status -x' says

Re: [zfs-discuss] zpool status -x strangeness

2009-01-07 Thread Ben Miller
This post from close to a year ago never received a response. We just had this same thing happen to another server that is running Solaris 10 U6. One of the disks was marked as removed and the pool degraded, but 'zpool status -x' says all pools are healthy. After doing a 'zpool online' on

[zfs-discuss] zpool status -x strangeness on b78

2008-02-06 Thread Ben Miller
We run a cron job that does a 'zpool status -x' to check for any degraded pools. We just happened to find a pool degraded this morning by running 'zpool status' by hand and were surprised that it was degraded as we didn't get a notice from the cron job. # uname -srvp SunOS 5.11 snv_78 i386 #
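The failure mode in this thread is a degraded pool that 'zpool status -x' still reports as healthy, so the cron job never fires. One workaround is to parse the full 'zpool status' output for any unhealthy state instead of trusting '-x'. A minimal sketch (the function name and the alert mechanism in the comment are illustrative, not from the thread):

```shell
# check_pool_health: scan full 'zpool status' text (read on stdin) for
# any unhealthy pool state, as a workaround for 'zpool status -x'
# claiming "all pools are healthy" while a pool is DEGRADED.
check_pool_health() {
  grep -E '^[[:space:]]*state: (DEGRADED|FAULTED|UNAVAIL|OFFLINE|REMOVED)' |
    sed 's/.*state: //'
}

# Hypothetical cron usage: mail root if anything unhealthy turns up.
# bad=$(zpool status | check_pool_health)
# [ -n "$bad" ] && echo "pool state: $bad" | mailx -s "zpool alert" root
```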

Re: [zfs-discuss] System hang caused by a bad snapshot

2007-09-18 Thread Ben Miller
Hello Matthew, Tuesday, September 12, 2006, 7:57:45 PM, you wrote: MA Ben Miller wrote: I had a strange ZFS problem this morning. The entire system would hang when mounting the ZFS filesystems. After trial and error I determined that the problem was with one

[zfs-discuss] Re: Remove files when at quota limit

2007-05-15 Thread Ben Miller
Has anyone else run into this situation? Does anyone have any solutions other than removing snapshots or increasing the quota? I'd like to put in an RFE to reserve some space so files can be removed when users are at their quota. Any thoughts from the ZFS team? Ben We have around 1000

[zfs-discuss] Remove files when at quota limit

2007-05-10 Thread Ben Miller
We have around 1000 users all with quotas set on their ZFS filesystems on Solaris 10 U3. We take snapshots daily and rotate out the week old ones. The situation is that some users ignore the advice of keeping space used below 80% and keep creating large temporary files. They then try to
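The underlying problem is that snapshots pin the deleted blocks, so an over-quota user cannot free space by removing files. One mitigation, sketched here under the assumption that the pool runs a ZFS version with refquota support (dataset names and sizes are illustrative):

```shell
# refquota limits only the live data; blocks held by snapshots no
# longer count against the user, so deletes can proceed at the limit.
zfs set refquota=2G pool1/home/user1   # cap the live filesystem
zfs set quota=3G pool1/home/user1      # overall cap including snapshots
```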

[zfs-discuss] Re: Re: ZFS disables nfs/server on a host

2007-04-27 Thread Ben Miller
I just threw in a truss in the SMF script and rebooted the test system and it failed again. The truss output is at http://www.eecis.udel.edu/~bmiller/zfs.truss-Apr27-2007 thanks, Ben

[zfs-discuss] Re: ZFS disables nfs/server on a host

2007-04-26 Thread Ben Miller
I was able to duplicate this problem on a test Ultra 10. I put in a workaround by adding a service that depends on /milestone/multi-user-server which does a 'zfs share -a'. It's strange this hasn't happened on other systems, but maybe it's related to slower systems... Ben
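The workaround described above could look roughly like the following SMF manifest: a transient service that re-runs 'zfs share -a' once multi-user-server is reached. This is a hypothetical reconstruction; the FMRI and all names are illustrative, not taken from the original message.

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!-- Hypothetical manifest sketch for the 'zfs share -a' workaround. -->
<service_bundle type="manifest" name="site:zfs-share-late">
  <service name="site/zfs-share-late" type="service" version="1">
    <create_default_instance enabled="true"/>
    <single_instance/>
    <!-- Run only after the multi-user-server milestone. -->
    <dependency name="multi-user-server" grouping="require_all"
                restart_on="none" type="service">
      <service_fmri value="svc:/milestone/multi-user-server"/>
    </dependency>
    <exec_method type="method" name="start"
                 exec="/usr/sbin/zfs share -a" timeout_seconds="60"/>
    <exec_method type="method" name="stop" exec=":true" timeout_seconds="60"/>
    <!-- Transient: the start method runs once; no daemon to monitor. -->
    <property_group name="startd" type="framework">
      <propval name="duration" type="astring" value="transient"/>
    </property_group>
  </service>
</service_bundle>
```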

[zfs-discuss] Re: ZFS disables nfs/server on a host

2007-04-19 Thread Ben Miller
It does seem like an ordering problem, but nfs/server should be starting up late enough with SMF dependencies. I need to see if I can duplicate the problem on a test system...

[zfs-discuss] Re: Re[2]: System hang caused by a bad snapshot

2006-09-13 Thread Ben Miller
Hello Matthew, Tuesday, September 12, 2006, 7:57:45 PM, you wrote: MA Ben Miller wrote: I had a strange ZFS problem this morning. The entire system would hang when mounting the ZFS filesystems. After trial and error I determined that the problem was with one of the 2500 ZFS

[zfs-discuss] System hang caused by a bad snapshot

2006-09-12 Thread Ben Miller
I had a strange ZFS problem this morning. The entire system would hang when mounting the ZFS filesystems. After trial and error I determined that the problem was with one of the 2500 ZFS filesystems. When mounting that user's home the system would hang and need to be rebooted. After I