On 09/22/10 04:27 PM, Ben Miller wrote:
On 09/21/10 09:16 AM, Ben Miller wrote:
I had tried a clear a few times with no luck. I just did a detach and that did remove the old disk and has now triggered another resilver, which hopefully works. I had tried a remove rather than a detach before.
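For the archives, the distinction that bit here can be sketched as follows (hypothetical device names; the thread only names c2t0d0 and c3t0d0):

```shell
# Swapping a 500GB disk for a 2TB one in a raidz2 vdev is done with
# 'zpool replace', which resilvers onto the new disk and then drops
# the old one automatically:
zpool replace pool1 c2t0d0

# 'zpool remove' is not for raidz members; at b134 it only removes hot
# spares, cache, and log devices. If a stale entry from a replace
# lingers after the resilver, detaching the old half clears it
# (ZFS names it with an "/old" suffix):
zpool detach pool1 c2t0d0/old
```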
On 09/20/10 10:45 AM, Giovanni Tirloni wrote:
On Thu, Sep 16, 2010 at 9:36 AM, Ben Miller <bmil...@mail.eecis.udel.edu> wrote:
I have an X4540 running b134 where I'm replacing 500GB disks with 2TB disks (Seagate Constellation) and the pool seems sick now. The pool has four raidz2 vdevs (8+2) where the first set of 10 disks were replaced a few months ago. I replaced two disks in the second set (c2t0d0, c3t0d0) a…
I upgraded a server today that has been running SXCE b111 to the OpenSolaris preview b134. It has three pools and two are fine, but one comes up with no space available in the pool (SCSI jbod of 300GB disks). The zpool version is at 14.
I tried exporting the pool and re-importing and I get…
be in touch.
Thanks,
Cindy
On 06/17/10 07:02, Ben Miller wrote:
I upgraded a server today that has been running SXCE b111 to the OpenSolaris preview b134. It has three pools and two are fine, but one comes up with no space available in the pool (SCSI jbod of 300GB disks). The zpool version…
# zpool status -xv
all pools are healthy
Ben
What does 'zpool status -xv' show?
On Tue, Jan 27, 2009 at 8:01 AM, Ben Miller mil...@eecis.udel.edu wrote:
I forgot the pool that's having problems was recreated recently, so it's already at zfs version 3.
I just did a 'zfs upgrade
unmount '/var/postfix': Device busy
6 filesystems upgraded
821 filesystems already at this version
Ben
You can upgrade live. 'zfs upgrade' with no arguments shows you the zfs version status of filesystems present without upgrading.
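A short sketch of the two modes discussed above (the exact arguments Ben used are elided in the archive):

```shell
# With no arguments, 'zfs upgrade' is read-only: it just reports the
# on-disk version of each filesystem and changes nothing.
zfs upgrade

# With -a it upgrades every filesystem to the current version. This is
# safe on a live system, though a busy mount (like the /var/postfix
# above) can fail its remount with "Device busy".
zfs upgrade -a
```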
On Jan 24, 2009, at 10:19 AM, Ben Miller mil…
…? I'm not saying this is the source of your problem, but it's a detail that seemed to affect stability for me.
On Thu, Jan 22, 2009 at 7:25 AM, Ben Miller
The pools are upgraded to version 10. Also, this is on Solaris 10u6.
--
This message posted from opensolaris.org
…which seems to be resolved now that I've gone to Solaris 10u6 or OpenSolaris 2008.11).
On Wed, Jan 21, 2009 at 2:11 PM, Ben Miller mil...@eecis.udel.edu wrote:
Bug ID is 6793967.
This problem just happened again.
% zpool status pool1
  pool: pool1
 state: DEGRADED
 scrub: resilver completed after 0h48m with 0 errors on Mon Jan 5 12:30:52 2009
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       DEGRADED     0     0     0
I just put in a (low priority) bug report on this.
Ben
This post from close to a year ago never received a response. We just had this same thing happen to another server that is running Solaris 10 U6. One of the disks was marked as removed and the pool degraded, but 'zpool status -x' says all pools are healthy. After doing a 'zpool online' on…
We run a cron job that does a 'zpool status -x' to check for any degraded pools. We just happened to find a pool degraded this morning by running 'zpool status' by hand and were surprised that it was degraded, as we didn't get a notice from the cron job.
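A minimal sketch of such a cron check. `check_pools` is a hypothetical helper name; note the caveat that drives this whole thread: a pool can be DEGRADED (e.g. a disk marked "removed") while 'zpool status -x' still claims all pools are healthy, so this check alone is not sufficient.

```shell
#!/bin/sh
# check_pools takes the text printed by 'zpool status -x' and reports
# OK only when it is the exact healthy message.
check_pools() {
    if [ "$1" = "all pools are healthy" ]; then
        echo "OK"
    else
        echo "ATTENTION"
    fi
}

# In the cron job itself (commented out; needs a live system):
# [ "$(check_pools "$(zpool status -x)")" = "OK" ] || \
#     zpool status -x | mailx -s "zpool problem" root
```

Given the bug described here, pairing this with a check of full 'zpool status' output for the string DEGRADED would be more robust.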
# uname -srvp
SunOS 5.11 snv_78 i386
#
Has anyone else run into this situation? Does anyone have any solutions other than removing snapshots or increasing the quota? I'd like to put in an RFE to reserve some space so files can be removed when users are at their quota. Any thoughts from the ZFS team?
Ben
We have around 1000 users, all with quotas set on their ZFS filesystems on Solaris 10 U3. We take snapshots daily and rotate out the week-old ones. The situation is that some users ignore the advice of keeping space used below 80% and keep creating large temporary files. They then try to…
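The trap described here (snapshots pinning space so a user at quota cannot even delete files) is commonly worked around on later releases with 'refquota', which counts only the data a dataset directly references, not its snapshots. A hedged sketch with a hypothetical dataset name:

```shell
# 'refquota' (not available on the S10U3 release above) limits only
# live data, so snapshot-held space no longer blocks deletes:
zfs set refquota=2G pool1/home/user1

# Alternative on older releases: keep the hard quota a little above
# the advertised limit, leaving headroom so a copy-on-write unlink
# can still proceed when the user hits "their" limit:
zfs set quota=2200m pool1/home/user1
```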
I just added a truss call to the SMF script and rebooted the test system, and it failed again. The truss output is at http://www.eecis.udel.edu/~bmiller/zfs.truss-Apr27-2007
thanks,
Ben
I was able to duplicate this problem on a test Ultra 10. I put in a workaround by adding a service that depends on /milestone/multi-user-server which does a 'zfs share -a'. It's strange this hasn't happened on other systems, but maybe it's related to slower systems...
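The workaround above could look roughly like this. The manifest file path and the service name (site/zfs-share-late) are hypothetical; the thread only says the service depends on milestone/multi-user-server and runs 'zfs share -a'.

```shell
# Write a transient SMF service that re-shares ZFS filesystems once
# the multi-user-server milestone is reached, then import it.
cat > /var/tmp/zfs-share-late.xml <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="manifest" name="zfs-share-late">
  <service name="site/zfs-share-late" type="service" version="1">
    <create_default_instance enabled="true"/>
    <single_instance/>
    <!-- Do not start until multi-user-server is up. -->
    <dependency name="multi-user" grouping="require_all"
                restart_on="none" type="service">
      <service_fmri value="svc:/milestone/multi-user-server"/>
    </dependency>
    <!-- Re-share all ZFS filesystems once at boot. -->
    <exec_method type="method" name="start"
                 exec="/usr/sbin/zfs share -a" timeout_seconds="600"/>
    <exec_method type="method" name="stop" exec=":true"
                 timeout_seconds="60"/>
    <property_group name="startd" type="framework">
      <propval name="duration" type="astring" value="transient"/>
    </property_group>
  </service>
</service_bundle>
EOF
svccfg import /var/tmp/zfs-share-late.xml
```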
Ben
It does seem like an ordering problem, but nfs/server should be starting up late enough with SMF dependencies. I need to see if I can duplicate the problem on a test system...
Hello Matthew,
Tuesday, September 12, 2006, 7:57:45 PM, you wrote:
MA Ben Miller wrote:
I had a strange ZFS problem this morning. The entire system would hang when mounting the ZFS filesystems. After trial and error I determined that the problem was with one of the 2500 ZFS…
I had a strange ZFS problem this morning. The entire system would hang when mounting the ZFS filesystems. After trial and error I determined that the problem was with one of the 2500 ZFS filesystems. When mounting that user's home, the system would hang and need to be rebooted. After I…