I've read various articles along those lines. My understanding is that a raid-z
/ raid-5 array built from 500GB-odd drives has around a 1 in 10 chance of
losing at least some data during a rebuild.
I've had raid-5 arrays fail at least 4 times, twice during a rebuild. In most
cases I've been able to recover
Just re-read that and it's badly phrased. What I meant to say is that a raid-z
/ raid-5 array based on 500GB drives seems to have around a 1 in 10 chance of
losing some data during a full rebuild.
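For what it's worth, that 1-in-10 figure is consistent with a back-of-envelope calculation. The sketch below assumes a typical consumer-drive unrecoverable read error (URE) rate of 1e-14 per bit and three surviving 500GB drives read in full during the rebuild; both numbers are illustrative assumptions, not figures from this thread.

```python
# Back-of-envelope estimate of hitting at least one unrecoverable
# read error (URE) during a full rebuild. The URE rate and array
# geometry are assumptions for illustration only.
URE_PER_BIT = 1e-14          # commonly quoted consumer-drive spec
DRIVE_BYTES = 500e9          # one 500GB drive
SURVIVING_DRIVES = 3         # e.g. a 4-disk raid-z rebuilding one disk

bits_read = DRIVE_BYTES * 8 * SURVIVING_DRIVES
# Probability that at least one of bits_read reads fails:
p_loss = 1 - (1 - URE_PER_BIT) ** bits_read
print(f"P(some data loss during rebuild) ~ {p_loss:.2f}")
```

Under those assumptions it comes out around 0.11, i.e. roughly 1 in 10.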
This message posted from opensolaris.org
Ross wrote:
Just re-read that and it's badly phrased. What I meant to say is that a
raid-z / raid-5 array based on 500GB drives seems to have around a 1 in 10
chance of losing some data during a full rebuild.
Actually, I think it's been explained already why this is actually one
Hi--
Here's the scoop, in probably too much detail:
I'm a sucker for new filesystems and new tech in general. For you old-
time Mac people, I installed Sequoia when it was first seeded, and had
to reformat my drive several times as it grew to the final release. I
flipped the journaled flag
Hi gurus,
I like zpool iostat and I like system monitoring, so I setup a script
within sma to let me get the zpool iostat figures through snmp.
The problem is that as zpool iostat is only run once for each snmp
query, it always reports a static set of figures, like so:
[EMAIL PROTECTED]:snmp
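One way around the static figures (a sketch, not tested against sma): since running `zpool iostat` once per query only reports a since-boot average, the script can instead keep the previous cumulative counters between SNMP queries and report the rate of change. The helper and counter names below are illustrative.

```python
# Sketch: compute per-second rates from successive cumulative counter
# samples, so each SNMP query reflects current activity rather than
# the same since-boot average. Counter names are illustrative.
import time

_last = {}  # counter name -> (timestamp, cumulative value)

def per_second(name, cumulative):
    """Rate of change of a cumulative counter since its last sample."""
    now = time.time()
    prev = _last.get(name)
    _last[name] = (now, cumulative)
    if prev is None:
        return 0.0           # no baseline yet on the first query
    dt = now - prev[0]
    return (cumulative - prev[1]) / dt if dt > 0 else 0.0
```

Each SNMP poll would feed the current cumulative figures through per_second() and report the result instead of the raw counter.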
About a month ago (Jun 2008), I received information indicating that a putback
fixing this problem was in the works and might appear as soon as b92.
Apparently this estimate was overly optimistic. Does anyone know anything about
progress on this issue or have a revised estimate for the
On Sat, Jul 5, 2008 at 9:34 PM, Robert Lawhead
[EMAIL PROTECTED] wrote:
About a month ago (Jun 2008), I received information indicating that a
putback fixing this problem was in the works and might appear as soon as
b92. Apparently this estimate was overly optimistic. Does anyone know
On Sat, Jul 5, 2008 at 2:33 PM, Matt Harrison
[EMAIL PROTECTED] wrote:
Alternatively is there a better way to get read/write ops etc from my
pool for monitoring applications?
I would really love it if monitoring zfs pools from snmp were better all
round, but I'm not going to reel off my wish list
If it ever does get released I'd love to hear about it. That bug, and the fact
that it appears to have been outstanding for three years, was one of the major
reasons we didn't purchase a bunch of x4500's.
Mike Gerdts wrote:
$ kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
unix:0:vopstats_zfs:nread 418787
unix:0:vopstats_zfs:read_bytes  612076305
unix:0:vopstats_zfs:nwrite  163544
unix:0:vopstats_zfs:write_bytes  255725992
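For anyone scripting around this, `kstat -p` output is easy to consume: each line is a colon-separated statistic name, whitespace, then the value. A minimal parser sketch, using the sample values quoted above:

```python
# Parse `kstat -p` output ("module:instance:name:statistic<ws>value")
# into a dict of integer counters.
def parse_kstat(text):
    stats = {}
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1].isdigit():
            stats[parts[0]] = int(parts[1])
    return stats

sample = (
    "unix:0:vopstats_zfs:nread\t418787\n"
    "unix:0:vopstats_zfs:read_bytes\t612076305\n"
)
counters = parse_kstat(sample)
print(counters["unix:0:vopstats_zfs:nread"])  # prints 418787
```

In practice you would feed it the output of `kstat -p ::vopstats_zfs:...` captured from a subprocess.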
Thanks Mike, that's exactly what I was
Booted from 2008.05, and the error was the same as before: corrupted data for
the last two disks. zdb -l gave the same result as before: it read the label
from disk 1 but not from disks 2 and 3.
zfs-discuss mailing list
FYI, we are literally just days from having this fixed.
Matt: after putback you really should blog about this one --
both to let people know that this long-standing bug has been
fixed, and to describe your approach to it.
It's a surprisingly tricky and interesting problem.
Jeff
On Sat, Jul 05, 2008 at 03:03:34PM -0500, Mike Gerdts wrote:
You can access the kstats directly to get the counter values.
First off, let me say that: kstat++
That's too cool.
$ kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
unix:0:vopstats_zfs:nread 418787
On Sat, Jul 5, 2008 at 9:48 PM, Brian Hechinger [EMAIL PROTECTED] wrote:
On Sat, Jul 05, 2008 at 03:03:34PM -0500, Mike Gerdts wrote:
$ kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
unix:0:vopstats_zfs:nread 418787
unix:0:vopstats_zfs:read_bytes  612076305