> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
> 
> A while back I saw proposals on Linux kernel mailing lists to create
> RAID firmware based on mdadm, and apparently some hardware vendors
> took to that.  An added benefit for users was that RAID disks could be
> migrated between software and hardware RAID running the same code,
> allowing for easier repairs, migrations, and upgrades/downgrades.
> 
> Now, it is just a thought.  But I wonder if it's possible... Or useful? :)
> Or if anyone has already done that?

Take, for example, the new AES instruction set shipping in certain modern
CPUs.  The hardware was adapted to perform the core tasks of AES
encryption/decryption more efficiently - tasks that were formerly done in
software - and hence gained significant performance improvements.  Take
also, for example, the TCP Offload Engine (TOE): some of the core network
processing (the TCP stack) was moved onto the NIC processor to get it away
from the CPU and gain performance on high-performance networks.  I would
venture a guess that there are some core ZFS operations that could be
offloaded onto an HBA processor.  The question is what, and why?  I
personally never see any ZFS performance bottleneck other than disk speeds
and bus speeds.  (Neglecting dedup - dedup performance is still bad right
now.)  Even with checksum=sha256 and compression enabled, I never see
anything that I would call significant processor utilization.  Surely it's
possible, on a really high-end system, to eventually saturate the CPU cores
with checksum or compression operations or something...
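For what it's worth, it's easy to get a ballpark figure for how expensive
sha256 checksumming actually is compared to disk and bus bandwidth.  Here's
a minimal sketch (Python, purely for illustration; the 128 KiB block size is
an assumption matching the default ZFS recordsize, and the numbers obviously
won't match what the in-kernel code achieves):

    # Rough single-core SHA-256 throughput, to compare against typical
    # disk/bus bandwidth.  The 128 KiB block size is an assumption
    # matching the default ZFS recordsize.
    import hashlib
    import os
    import time

    BLOCK = os.urandom(128 * 1024)   # one "record" of random data
    ITERATIONS = 8192                # ~1 GiB hashed in total

    start = time.perf_counter()
    for _ in range(ITERATIONS):
        hashlib.sha256(BLOCK).digest()
    elapsed = time.perf_counter() - start

    total_mb = ITERATIONS * len(BLOCK) / (1024 * 1024)
    print("SHA-256: %.0f MB/s on one core" % (total_mb / elapsed))

On typical current hardware that lands somewhere in the hundreds of MB/s per
core, which is in the same ballpark as a few spindles - consistent with the
point above that you'd only saturate the cores on a really high-end system.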

There is one use I can imagine which would be awesome.  If you were able to
offload COW and isolate it entirely standalone inside the HBA, then you
might be able to present the OS with what is basically just a zvol.  So then
you could run Linux, VMware, Windows, or whatever...  With snapshots, data
integrity, and "zfs send" under the hood at the hardware level, which the OS
doesn't need to know or care about.  This is very similar to running Solaris
(or whatever) as a hypervisor and then running Windows or Linux or whatever
as a guest OS.  It is also very similar to running iSCSI targets on ZFS
while letting some other servers use iSCSI to connect to the ZFS server.
It's a cool way to inject a ZFS layer beneath an OS that doesn't support
ZFS.  The purpose would not be a performance gain, but a functionality gain.
(You might be able to gain some performance, but I think it would be roughly
on par with traditional hardware RAID HBAs.)
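For comparison, you can already get most of that layering today with a zvol
exported over iSCSI.  Here's a rough sketch (Python wrapping the OpenSolaris
COMSTAR tools; the pool name "tank", the 100G size, and the assumption that
the stmf/iSCSI target services are already enabled are all placeholders, and
the exact commands vary a bit by release):

    # Sketch: carve a zvol out of a pool and export it over iSCSI, so a
    # non-ZFS OS sees a plain block device while snapshots, checksums and
    # "zfs send" happen underneath.  Pool/volume names are placeholders.
    import subprocess

    def run(cmd):
        print("#", cmd)
        out = subprocess.run(cmd, shell=True, check=True,
                             capture_output=True, text=True).stdout
        print(out)
        return out

    run("zfs create -V 100G tank/guestvol")               # backing zvol
    run("sbdadm create-lu /dev/zvol/rdsk/tank/guestvol")  # SCSI LU (prints its GUID)

    LU_GUID = "600144f0..."    # placeholder: paste the GUID printed above
    run("stmfadm add-view " + LU_GUID)   # make the LU visible to initiators
    run("itadm create-target")           # iSCSI target for the guest OS

The HBA idea would essentially push that same stack down into firmware, so
the host never has to know there's a ZFS pool underneath at all.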

I can't think of many other reasons to do it...

Bear in mind, if doing something like this, anyone other than Oracle would
need to assess the possible legal risk of distributing ZFS (a potential
NetApp lawsuit).  You wouldn't necessarily need to do something like this
with ZFS; it is possible that btrfs or some other option would actually be
more attractive for such an embedded application.  Also, finding engineers
to develop embedded Linux is probably easier than finding engineers to
develop embedded ... whatever kernel you want to run ZFS on.
