I know I'm a bit late to contribute to this thread, but I'd still like to
add my $0.02. My gut feel is that we (generally) don't yet understand the
subtleties of disk drive failure modes as they relate to 1.5 TB or 2 TB+ drives.
Why? Because those large drives have not been widely available until recently.
Applying classic RAID terms to zfs is just plain wrong and misleading, since zfs does not directly implement these classic RAID approaches, even though it re-uses some of the algorithms for data recovery.
Details do matter.
On Tue, 22 Dec 2009, Marty Scholes wrote:
If there is a RAIDZ write penalty over mirroring, I am unaware of
it. In fact, sequential writes are faster under RAIDZ.
There is always an IOPS penalty for raidz when writing or reading, given a particular zfs block size. There may be a write …
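To make the IOPS rule of thumb concrete, here is a rough Python sketch (an illustrative model, not ZFS internals): for small random I/O a raidz vdev behaves roughly like a single disk, because every block spans the full stripe, while a mirror vdev can serve reads from either side. The 150 IOPS-per-drive figure and the pool shapes are assumptions.

    DISK_IOPS = 150  # assumed random IOPS of one 7200 rpm drive (illustrative)

    def pool_iops(n_vdevs, kind):
        """Rule-of-thumb random IOPS for a pool of identical top-level vdevs."""
        if kind == "raidz":
            # every block spans the whole stripe, so a raidz vdev delivers
            # roughly one disk's worth of small random reads and writes
            return {"read": n_vdevs * DISK_IOPS, "write": n_vdevs * DISK_IOPS}
        if kind == "mirror":
            return {"read": n_vdevs * 2 * DISK_IOPS,  # either side can serve a read
                    "write": n_vdevs * DISK_IOPS}     # a write must hit both sides
        raise ValueError(kind)

    # the same 8 disks, laid out two ways:
    print(pool_iops(4, "mirror"))  # four 2-way mirrors -> {'read': 1200, 'write': 600}
    print(pool_iops(1, "raidz"))   # one 8-disk raidz2  -> {'read': 150, 'write': 150}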
On Dec 21, 2009, at 11:56 PM, Roman Naumenko ro...@naumenko.ca wrote:
On Dec 21, 2009, at 4:09 PM, Michael Herf mbh...@gmail.com wrote:
Anyone who's lost data this way: were you doing weekly scrubs, or did you find out about the simultaneous failures after not touching the bits for months?
Hi Ross,
What about good old raid10? It's a pretty reasonable choice for heavily loaded storage, isn't it?
I remember when I migrated a raidz2 to an 8-drive raid10, the application administrators were just really happy with the new access speed. (We didn't use striped raidz2 …
On Tue, 22 Dec 2009, Ross Walker wrote:
Raid10 provides excellent performance and if performance is a priority then I
recommend it, but I was under the impression that resiliency was the
priority, as raidz2/raidz3 provide greater resiliency for a sacrifice in
performance.
Bob Friesenhahn wrote:
Why are people talking about RAID-5, RAID-6, and RAID-10 on this list? This is the zfs-discuss list and zfs does not do RAID-5, RAID-6, or RAID-10.
Applying classic RAID terms to zfs is just plain wrong and misleading, since zfs does not directly implement these classic RAID approaches, even though it re-uses some of the algorithms for data recovery.
On Tue, 22 Dec 2009, Ross Walker wrote:
Applying classic RAID terms to zfs is just plain wrong and misleading, since zfs does not directly implement these classic RAID approaches, even though it re-uses some of the algorithms for data recovery. Details do matter.
Bob
On Tue, 22 Dec 2009, Marty Scholes wrote:
That's not entirely true, is it?
* RAIDZ is RAID5 + checksum + COW
* RAIDZ2 is RAID6 + checksum + COW
* A stack of mirror vdevs is RAID10 + checksum + COW
These are layman's simplifications that no one here should be comfortable with. Zfs borrows …
On Tue, 22 Dec 2009, Roman Naumenko wrote:
raid6 is raid6, no matter how you name it: raidz2, raid-dp, raid-ADG, or something else. Sounds nice, but it's just buzzwords.
It is true that many vendors like to make their storage array seem special, but references to RAID6 when describing …
Interesting discussion. I know the bias here is generally toward enterprise
users. I was wondering if the same recommendations hold for home users that are
generally more price sensitive. I'm currently running OpenSolaris on a system
with 12 drives. I had split them into 3 sets of 4-drive raidz1 …
ttabbal:
If I understand correctly, raidz{1} is 1 drive protection and space is
(drives - 1) available. Raidz2 is 2 drive protection and space is (drives - 2)
etc. Same for raidz3 being 3 drive protection.
Everything I've seen says you should stay around 6-9 drives for raidz, so don't do a raidz3 with 12 drives.
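The space rule above is easy to check mechanically. A minimal Python sketch of the (drives - parity) arithmetic; it ignores the small extra metadata and allocation overhead a real pool loses:

    def raidz_usable_tb(n_drives, parity, drive_tb):
        """Usable space of one raidz vdev: (drives - parity) * drive size."""
        if n_drives <= parity:
            raise ValueError("need more drives than parity disks")
        return (n_drives - parity) * drive_tb

    print(raidz_usable_tb(4, 1, 1.5))   # raidz1, four 1.5 TB drives -> 4.5
    print(raidz_usable_tb(8, 2, 1.5))   # raidz2, eight drives       -> 9.0
    print(raidz_usable_tb(6, 3, 1.5))   # raidz3, six drives         -> 4.5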
On Tue, 22 Dec 2009, James Risner wrote:
I do consider RAID5 as 'Stripeset with an interleaved Parity', so I don't agree with the strong objection by many in this thread about the use of RAID5 to describe what raidz does. I don't think many particularly care about the nuanced differences …
risner wrote:
If I understand correctly, raidz{1} is 1 drive protection and space is (drives - 1) available. Raidz2 is 2 drive protection and space is (drives - 2), etc. Same for raidz3 being 3 drive protection.
Yes.
Everything I've seen says you should stay around 6-9 drives for raidz, so don't do a raidz3 with 12 drives. Instead make two raidz3 vdevs with 6 drives each (which gives (6-3) * 1.5 * 2 = 9 TB usable).
So the question becomes, why? If it's performance, I can live with lower IOPS and max throughput. If it's …
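The trade behind that "why?" shows up in a quick back-of-the-envelope comparison of the two layouts (the 1.5 TB drive size and the one-vdev-of-IOPS rule of thumb are assumptions; real numbers vary):

    DRIVE_TB = 1.5

    def usable_tb(n_drives, parity):
        return (n_drives - parity) * DRIVE_TB  # the (drives - parity) rule

    # two 6-drive raidz3 vdevs, as suggested above
    two_narrow = 2 * usable_tb(6, 3)  # (6-3) * 1.5 * 2 = 9.0 TB, two vdevs of IOPS
    # one 12-drive raidz3 vdev, the alternative being asked about
    one_wide = usable_tb(12, 3)       # (12-3) * 1.5 = 13.5 TB, one vdev of IOPS
    print(two_narrow, one_wide)       # 9.0 13.5

The narrow layout gives up 4.5 TB but doubles the number of top-level vdevs, which roughly doubles random IOPS and shrinks each resilver domain.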
On December 21, 2009 10:45:29 PM -0500 Ross Walker rswwal...@gmail.com
wrote:
Scrubbing on a routine basis is good for detecting problems early, but it doesn't solve the problem of a double failure during resilver. As the size of disks becomes huge, the chance of a double failure during resilver increases.
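That risk can be sketched with a toy model: assume independent, exponentially distributed drive failures (a generous assumption, since real failures are correlated and unrecoverable read errors are not modeled) and ask how likely a second failure is while the first one rebuilds. The MTBF and resilver times below are made up but plausible:

    import math

    def p_second_failure(n_survivors, mtbf_hours, resilver_hours):
        """P(at least one surviving drive fails during the resilver window),
        assuming independent exponential failures."""
        p_one = 1 - math.exp(-resilver_hours / mtbf_hours)
        return 1 - (1 - p_one) ** n_survivors

    MTBF = 500_000  # hours; an optimistic datasheet figure
    for size_tb, resilver_h in [(0.5, 8), (1.5, 24), (2.0, 36)]:
        p = p_second_failure(7, MTBF, resilver_h)  # 8-drive raidz1, one drive dead
        print(f"{size_tb} TB drives, ~{resilver_h} h resilver: {p:.4%}")

The per-incident probability looks small, but it grows roughly linearly with resilver time, i.e. with drive size, and correlated failures make it worse in practice.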
If you are asking whether anyone has experienced two drive failures simultaneously, the answer is yes.
It has happened to me (at home) and to one client, at least that I can remember. In both cases, I was able to dd off one of the failed disks (the one with just bad sectors, or fewer bad sectors) and …
Yes, a coworker lost a second disk during a rebuild of a raid5 and lost all data. I have not had a failure myself; however, when migrating EqualLogic arrays in and out of pools, I lost a disk on an array. No data loss, but it concerns me, because during the moves you are essentially reading and writing …
Anyone who's lost data this way: were you doing weekly scrubs, or did you
find out about the simultaneous failures after not touching the bits for
months?
mike
Hey James,
Personally, I think mirroring (and 3-way mirroring) is safer than raidz/z2/5. All my boot-from-zfs systems have 3-way mirrored root/usr/var disks (using 9 disks), but all my data partitions are 2-way mirrors (usually 8 disks or more, plus a spare).
Double-parity (or triple-parity) …
On Dec 21, 2009, at 4:09 PM, Michael Herf mbh...@gmail.com wrote:
Anyone who's lost data this way: were you doing weekly scrubs, or
did you find out about the simultaneous failures after not touching
the bits for months?
Scrubbing on a routine basis is good for detecting problems early, …
The zfs best practices page (and the experts in general) talk about MTTDL, how raidz2 is better than raidz, and so on.
Has anyone here ever actually experienced data loss in a raidz that
has a hot spare? Of course, I mean from disk failure, not from bugs
or admin error, etc.
-frank
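For reference, the textbook MTTDL approximations behind that advice, as a small Python sketch. The MTBF and repair-time inputs are assumptions, and the model ignores unrecoverable read errors and correlated failures, which is exactly why the paper numbers look so much rosier than the anecdotes in this thread:

    def mttdl_raidz1(n, mtbf_h, mttr_h):
        """Single parity: data loss needs a 2nd failure among the n-1
        survivors within the repair window (classic approximation)."""
        return mtbf_h ** 2 / (n * (n - 1) * mttr_h)

    def mttdl_raidz2(n, mtbf_h, mttr_h):
        """Double parity: data loss needs three overlapping failures."""
        return mtbf_h ** 3 / (n * (n - 1) * (n - 2) * mttr_h ** 2)

    HOURS_PER_YEAR = 8766
    n, mtbf, mttr = 8, 500_000, 24  # 8-drive vdev; MTBF and repair in hours
    print(mttdl_raidz1(n, mtbf, mttr) / HOURS_PER_YEAR)  # ~2.1e4 years
    print(mttdl_raidz2(n, mtbf, mttr) / HOURS_PER_YEAR)  # ~7.4e7 years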