Rich Freeman <ri...@gentoo.org> writes:

> On Mon, Dec 29, 2014 at 8:55 AM, lee <l...@yagibdah.de> wrote:
>>
>> Just why can't you?  ZFS apparently can do such things --- yet what's
>> the difference in performance of ZFS compared to hardware raid?
>> Software raid with MD makes for quite a slowdown.
>>
>
> Well, there is certainly no reason that you couldn't serialize a
> logical volume as far as design goes.  It just isn't implemented (as
> far as I'm aware), though you certainly can just dd the contents of a
> logical volume.

You can use dd to make a copy.  But then what do you do with this copy?
I suppose you can't just dd it onto another volume group's device and
have it show up as a logical volume; you'd overwrite the LVM metadata
and destroy that volume group instead ...
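For what it's worth, the usual way around that is to create a target LV
of (at least) the same size in the other volume group first and dd into
*that*, never onto the VG's device itself.  A sketch, with hypothetical
VG/LV names (vg0, vg1, data) and size:

    # check the source LV's exact size first
    lvs --units b -o lv_name,lv_size vg0/data

    # create a target LV of at least that size in the other VG
    lvcreate -L 10G -n data vg1

    # copy block-for-block; the source should be unmounted
    # (or be a snapshot) while this runs
    dd if=/dev/vg0/data of=/dev/vg1/data bs=4M conv=fsync status=progress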

> ZFS performs far better in such situations because you're usually just
> snapshotting and not copying data at all (though ZFS DOES support
> serialization which of course requires copying data, though it can be
> done very efficiently if you're snapshotting since the filesystem can
> detect changes without having to read everything).

How's the performance of software raid vs. hardware raid vs. ZFS raid
(which is also software raid)?

> Incidentally, other than lacking maturity btrfs has the same
> capabilities.

IIRC, there are things btrfs can't do that ZFS can, like sending a FS
over the network.
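With ZFS that's just zfs send piped over ssh, and incrementals between
two snapshots only ship the changed blocks.  A sketch, with hypothetical
pool/host names (tank, backup, backuphost):

    # full send of a snapshot to a remote pool
    zfs snapshot tank/home@monday
    zfs send tank/home@monday | ssh backuphost zfs receive backup/home

    # later: send only the blocks changed since @monday
    zfs snapshot tank/home@tuesday
    zfs send -i tank/home@monday tank/home@tuesday | \
        ssh backuphost zfs receive backup/home

(For what it's worth, btrfs has grown a similar send/receive pair, so
this particular gap may be smaller than I remember.)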

> The reason ZFS (and btrfs) are able to perform better is that they
> dictate the filesystem, volume management, and RAID layers.  md has to
> support arbitrary data being stored on top of it - it is just a big
> block device which is just a gigantic array.  ZFS actually knows what
> is in all those blocks, and it doesn't need to copy data that it knows
> hasn't changed, protect blocks when it knows they don't contain data,
> and so on.  You could probably improve on mdadm by implementing
> additional TRIM-like capabilities for it so that filesystems could
> inform it better about the state of blocks, which of course would have
> to be supported by the filesystem.  However, I doubt it will ever work
> as well as something like ZFS where all this stuff is baked into every
> level of the design.

Well, I'm planning to run some tests with ZFS.  In particular, I want to
see how it performs when NFS clients write to an exported ZFS file
system.
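ZFS can manage the export itself through the sharenfs dataset property,
which saves editing /etc/exports.  A sketch, with a hypothetical pool
name (tank):

    zfs create tank/export
    zfs set sharenfs=on tank/export
    # or restrict it, e.g. to a local subnet:
    zfs set sharenfs=rw=@192.168.1.0/24 tank/export

One thing to keep in mind for the test: NFS clients issue synchronous
writes, so the numbers will depend heavily on how ZFS handles the intent
log; comparing runs with and without a separate log device would be
informative.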

How about ZFS as root file system?  I'd rather create a pool over all
the disks and create file systems within the pool than use something
like ext4 to get the system to boot.
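From what I've read, root-on-ZFS still wants a small boot partition the
bootloader can read (or a GRUB build with ZFS support); the rest can be
one pool with a dataset per mount point.  A sketch, with hypothetical
pool/partition names (rpool, sda2, sdb2):

    zpool create -o ashift=12 rpool mirror /dev/sda2 /dev/sdb2

    # one dataset per mount point instead of separate ext4 partitions
    zfs create -o mountpoint=/ -o canmount=noauto rpool/ROOT
    zfs create -o mountpoint=/home rpool/home
    zfs create -o mountpoint=/var  rpool/var

The kernel (or initramfs) then needs the ZFS modules available early
enough to import the pool and mount the root dataset.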

And how do I convert a system installed on an ext4 FS (on a hardware
raid-1) to ZFS?  I can plug in another two disks, create a ZFS pool from
them, make file systems (like for /tmp, /var, /usr ...) and copy
everything over.  But how do I make it bootable?
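The copy-and-reboot part might be sketched like this, assuming the new
pool's root dataset is mounted at /mnt/newroot and the new disks are
sdc/sdd (all names hypothetical):

    # copy the running system across, skipping pseudo-filesystems
    rsync -aAXH --exclude={/proc/*,/sys/*,/dev/*,/tmp/*} / /mnt/newroot/

    # chroot and reinstall the bootloader; GRUB2 needs to be built
    # with ZFS support, and the pool created with features GRUB
    # understands
    mount --rbind /dev  /mnt/newroot/dev
    mount --rbind /proc /mnt/newroot/proc
    mount --rbind /sys  /mnt/newroot/sys
    chroot /mnt/newroot grub-install /dev/sdc
    chroot /mnt/newroot grub-mkconfig -o /boot/grub/grub.cfg

/etc/fstab would also need updating (ZFS datasets mount themselves via
their mountpoint property), and the initramfs needs ZFS support so it
can import the pool at boot.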


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.
