On Sun, Dec 16, 2007 at 11:23:52AM -0800, Bob La Quey wrote:
> On Dec 16, 2007 10:10 AM, David Brown <[EMAIL PROTECTED]> wrote:

>> I don't think it provides redundancy, at least not on a full-system level
>> like RAID does.

> Let's not let this misperception propagate. ZFS does provide redundancy.
> See my previous reply to this thread.

Yes.  I did correct this.

> ZFS provides a functionality similar to RAID-5.  Other parity flavors were
> declared as "in the works" in Jeff's blog.  I don't know what the state of
> implementation was.
>
> The redundancy is done within the filesystem, rather than below, hence
> Andrew Morton's layering violation comments.
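As an aside, the single-parity idea behind RAID-5 (and ZFS's RAID-Z
variant) is easy to sketch.  This is a toy illustration only, not ZFS's
actual on-disk layout (RAID-Z uses variable-width stripes, among other
differences); it just shows why one lost block is recoverable:

```python
# Single-parity striping in the RAID-5 style: the parity block is the
# XOR of the data blocks, so any one missing block can be rebuilt by
# XOR-ing the survivors with the parity.
from functools import reduce

def parity(blocks):
    """XOR equal-sized blocks together column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks in one stripe
p = parity(data)                     # parity block written alongside them

# Lose one data block, then reconstruct it from the survivors + parity.
lost = data[1]
rebuilt = parity([data[0], data[2], p])
assert rebuilt == lost               # b"BBBB" comes back intact
```

The same XOR trick works regardless of which single block (data or
parity) goes missing.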

I'm beginning to wonder if this "layering violation" they've implemented is
actually a good idea.  There seems to be almost a battle going on between
MD/LVM and filesystems over write barriers.  The filesystems want write
barriers so they can be robust but still fast.  But, without knowing where
the data is really going, they can't group writes in a way that lets
MD/LVM implement barriers efficiently.  Currently Linux doesn't pass write
barriers through MD/LVM, and the only real way to implement them would be
to do a full write synchronization across all of the devices involved in
the array, which defeats the benefit of having the write barrier in the
first place.
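To make that fallback concrete, here's a minimal sketch (plain files
standing in for the array members, and a hypothetical flush_all helper
standing in for what the kernel would have to do):

```python
# Sketch of the brute-force fallback: with no barrier support through
# MD/LVM, write ordering can only be enforced by fully flushing every
# member device before the dependent write is issued.
import os
import tempfile

def flush_all(fds):
    """Drain and flush every member -- the substitute for a barrier."""
    for fd in fds:
        os.fsync(fd)        # blocks until the data is on stable storage

dirname = tempfile.mkdtemp()
members = [os.open(os.path.join(dirname, f"dev{i}"),
                   os.O_CREAT | os.O_WRONLY) for i in range(3)]

for fd in members:
    os.write(fd, b"journal data")   # writes that must land first
flush_all(members)                  # "barrier": all of the above is durable
os.write(members[0], b" commit")    # only now write the commit record
flush_all(members)

for fd in members:
    os.close(fd)
```

The cost is obvious: every "barrier" stalls until the slowest device in
the array has flushed, even if only one of them holds the ordered data.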

However, I'm not sure this is solvable anyway, unless drive manufacturers
could come up with a way of getting write barriers to work between drives.

The NVRAM solution HW RAID controllers use is probably the best current
approach.  The blog says that ZFS issues a cache synchronize command,
which has the potential to really hurt performance.

Dave


--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list
