Hello Philip,
Thursday, June 29, 2006, 2:58:41 AM, you wrote:
PB Erik Trimble wrote:
Since the best way to get this is to use a Mirror or RAIDZ vdev, I'm
assuming that the proper way to get benefits from both ZFS and HW RAID
is the following:
(1) ZFS mirror of HW stripes, i.e. zpool
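The quoted zpool command is cut off in the archive; a minimal sketch of that layout, with placeholder device names standing in for the LUNs the hardware controller exports as stripes:

```shell
# c1t0d0 and c2t0d0 are hypothetical LUNs, each a hardware RAID-0
# stripe exported by the array controller; ZFS mirrors the two stripes,
# so it has the redundancy needed to self-heal checksum errors.
zpool create tank mirror c1t0d0 c2t0d0
zpool status tank
```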
Hello David,
Wednesday, June 28, 2006, 12:30:54 AM, you wrote:
DV If ZFS is providing better data integrity than the current storage
DV arrays, that sounds to me like an opportunity for the next generation
DV of intelligent arrays to become better.
Actually they can't.
If you want end-to-end
Hello Peter,
Wednesday, June 28, 2006, 1:11:29 AM, you wrote:
PT On Tue, 2006-06-27 at 17:50, Erik Trimble wrote:
PT You really need some level of redundancy if you're using HW raid.
PT Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT that. Seems to me that the simplest way to
Robert Milkowski wrote:
Hello David,
Wednesday, June 28, 2006, 12:30:54 AM, you wrote:
DV If ZFS is providing better data integrity than the current storage
DV arrays, that sounds to me like an opportunity for the next generation
DV of intelligent arrays to become better.
Actually they
Robert Milkowski wrote:
Hello Peter,
Wednesday, June 28, 2006, 1:11:29 AM, you wrote:
PT On Tue, 2006-06-27 at 17:50, Erik Trimble wrote:
PT You really need some level of redundancy if you're using HW raid.
PT Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT that. Seems to me
On Jun 28, 2006, at 12:32, Erik Trimble wrote:
The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
ZFS mirror / RAID5: capacity = (N / 2) - 1
speed = (N / 2) - 1
minimum # disks to lose before loss of data:
On Wed, 2006-06-28 at 17:32, Erik Trimble wrote:
The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
ZFS mirror / RAID5: capacity = (N / 2) - 1
speed = (N / 2) - 1
minimum # disks to lose before loss
Robert,
PT You really need some level of redundancy if you're using HW raid.
PT Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT that. Seems to me that the simplest way to go is to use zfs to mirror
PT HW raid5, preferably with the HW raid5 LUNs being completely
PT
Which is better -
zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5?
The latter. With a mirror of RAID-5 arrays, you get:
(1) Self-healing data.
(2) Tolerance of whole-array failure.
(3) Tolerance of *at least* three disk failures.
(4) More IOPs than raidz of hardware mirrors
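A sketch of the recommended layout, assuming each hardware array exports its RAID-5 set as a single LUN (the device names are placeholders, not from the original thread):

```shell
# c3t0d0 and c4t0d0 stand in for RAID-5 LUNs from two separate arrays.
# ZFS mirrors across the arrays: a block that fails its checksum on one
# side is repaired from the other, and an entire array can fail
# without losing the pool.
zpool create tank mirror c3t0d0 c4t0d0
```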
Hello Peter,
Wednesday, June 28, 2006, 11:24:32 PM, you wrote:
PT Robert,
PT You really need some level of redundancy if you're using HW raid.
PT Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT that. Seems to me that the simplest way to go is to use zfs to mirror
PT HW
Hello Erik,
Wednesday, June 28, 2006, 6:32:38 PM, you wrote:
ET Robert -
ET I would definitely like to see the difference between read on HW RAID5
ET vs read on RAIDZ. Naturally, one of the big concerns I would have is
ET how much RAM is needed to avoid any cache starvation on the ZFS
ET
On Wed, 2006-06-28 at 22:13 +0100, Peter Tribble wrote:
On Wed, 2006-06-28 at 17:32, Erik Trimble wrote:
Given a reasonable number of hot-spares, I simply can't see the (very)
marginal increase in safety given by using HW RAID5 as outweighing the
considerable speed hit using RAID5 takes.
On Wed, 2006-06-28 at 14:55 -0700, Jeff Bonwick wrote:
Which is better -
zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5?
The latter. With a mirror of RAID-5 arrays, you get:
(1) Self-healing data.
(2) Tolerance of whole-array failure.
(3) Tolerance of *at least*
I'm a little confused by the first poster's message as well, but you
lose some benefits of ZFS if you don't create your pools with either
RAID1 or RAIDZ, such as data corruption detection. The array isn't
going to detect that because all it knows about are blocks.
That's the dilemma, the array
That's the dilemma, the array provides nice features like RAID1 and
RAID5, but those are of no real use when using ZFS.
RAID5 is not a nice feature when it breaks.
A RAID controller cannot guarantee that all bits of a RAID5 stripe
are written when power fails; then you have data corruption
Mika Borner writes:
RAID5 is not a nice feature when it breaks.
Let me correct myself... RAID5 is a nice feature for systems without
ZFS...
Are huge write caches really an advantage? Or are you talking about
huge write caches with non-volatile storage?
Yes, you are right.
Hello Nathanael,
NB I'm a little confused by the first poster's message as well, but
NB you lose some benefits of ZFS if you don't create your pools with
NB either RAID1 or RAIDZ, such as data corruption detection. The
NB array isn't going to detect that because all it knows about are blocks.
Does it make sense to solve these problems piece-meal:
* Performance: ZFS algorithms and NVRAM
* Error detection: ZFS checksums
* Error correction: ZFS RAID1 or RAIDZ
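Those three pieces come together in an ordinary raidz pool; a minimal sketch with hypothetical disk names:

```shell
# Error correction: raidz parity redundancy across the member disks.
# Error detection: block checksums are on by default; a scrub walks
# the pool, verifies every block, and repairs from parity on mismatch.
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
zpool scrub tank
```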
Nathanael Burton wrote:
If you've got hardware raid-5, why not just run regular (non-raid) pools on
top of the raid-5?
I
Yes, but the idea of using software raid on a large server doesn't
make sense in modern systems. If you've got a large database server
that runs a large oracle instance, using CPU cycles for RAID is
counter productive. Add to that the need to manage the hardware
directly (drive
Peter Rival wrote:
storage arrays with the same arguments over and over without providing
an answer to the customer problem doesn't do anyone any good. So. I'll
restate the question. I have a 10TB database that's spread over 20
storage arrays that I'd like to migrate to ZFS. How should I
Peter Rival wrote:
See, telling folks you should just use JBOD when they don't have JBOD
and have invested millions to get to the state they're in, where they're
efficiently utilizing their storage via a SAN infrastructure, is just
plain one big waste of everyone's time. Shouting down the
On 6/27/06, Erik Trimble [EMAIL PROTECTED] wrote:
Darren J Moffat wrote:
Peter Rival wrote:
storage arrays with the same arguments over and over without
providing an answer to the customer problem doesn't do anyone any
good. So. I'll restate the question. I have a 10TB database that's
[EMAIL PROTECTED] wrote:
That's the dilemma, the array provides nice features like RAID1 and
RAID5, but those are of no real use when using ZFS.
RAID5 is not a nice feature when it breaks.
A RAID controller cannot guarantee that all bits of a RAID5 stripe
are written when power
Your example would prove more effective if you added, I've got ten
databases. Five on AIX, Five on Solaris 8
Peter Rival wrote:
I don't like to top-post, but there's no better way right now. This
issue has recurred several times and there have been no answers to it
that cover the bases.
On Tue, 27 Jun 2006, Gregory Shaw wrote:
Yes, but the idea of using software raid on a large server doesn't
make sense in modern systems. If you've got a large database server
that runs a large oracle instance, using CPU cycles for RAID is
counter productive. Add to that the need to manage
Al Hopper wrote:
On Tue, 27 Jun 2006, Gregory Shaw wrote:
Yes, but the idea of using software raid on a large server doesn't
make sense in modern systems. If you've got a large database server
that runs a large oracle instance, using CPU cycles for RAID is
counter productive. Add to that
On Jun 27, 2006, at 3:30 PM, Al Hopper wrote:
On Tue, 27 Jun 2006, Gregory Shaw wrote:
Yes, but the idea of using software raid on a large server doesn't
make sense in modern systems. If you've got a large database server
that runs a large oracle instance, using CPU cycles for RAID is
counter
If you've got hardware raid-5, why not just run
regular (non-raid)
pools on top of the raid-5?
I wouldn't go back to JBOD. Hardware arrays offer a
number of
advantages to JBOD:
- disk microcode management
- optimized access to storage
- large write caches
- RAID