Jeremy Chadwick wrote:
On Mon, Feb 23, 2009 at 04:53:35AM +0800, Bill Hacker wrote:
Jeremy Chadwick wrote:

*snip*

The problem I was attempting to describe: all pool members must be the
same size, otherwise all members are considered to be equal to the size
of the smallest.  In English: you cannot "mix-and-match" different sized
disks.
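
To put numbers on it (device names and sizes hypothetical): stick a 500GB
disk in a raidz vdev alongside two 1TB disks and every member gets sized to
the smallest --

   # hypothetical: da0 is 500GB, da1 and da2 are 1TB each
   # (zpool warns about the size mismatch; -f overrides the warning)
   zpool create -f tank raidz da0 da1 da2
   zpool list tank
   # the vdev treats all three members as 500GB, so you get roughly
   # 2 x 500GB usable -- the extra half-TB on da1/da2 is simply ignored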

*TILT*

C'mon guys - that has nuthin to do with ZFS or any other RAID [1].

{snip}

Circling back to the near-start of the thread (specifically Dmitri's
comment): the point was that Linux has btrfs and a few other filesystems
that offer some really fantastic features (including what I've
described).  Commercial filers (see: Network Appliance, Pillar Axiom)
also offer mix-matched disk sizes and grow/shrink capability.  (NetApp
is actually based on BSD, but naturally all the FS stuff is proprietary)

How/why does this matter to us?

Because users are commonly using *BSD as a form of inexpensive filer for
their servers (not everyone can afford a NetApp or Axiom), or as an OS
on their home NAS (which includes pfSense and m0n0wall).  In both of
these cases, expanding/growing the array isn't possible, which greatly
limits the user-base scope -- and sadly, users usually don't find this
out until they've already made their choice, swap/upgrade a disk, then
post "WTF!" on a mailing list or forum somewhere.

ZFS happens to be incredibly easy to manage (from an administrative POV)
and solves many shortcomings.  It's significantly easier to understand
and use than Linux LVM (Linux md/mdadm is simple, it's the LVM part that
adds excessive complexity).  HAMMER also appears to be pretty easy to
manage and also solves many shortcomings, in a significantly different
way than ZFS (obviously).  These are excellent improvements in the BSD
world, but there's still a few things folks really want which will
ultimately improve on what BSD is being used for today.  That's all I'm
trying to say.  :-)
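
Side by side (device names hypothetical, and the Linux half is only a
sketch), the "build a redundant volume and mount it" exercise looks
roughly like:

   # ZFS: one command creates the pool and mounts it at /tank
   zpool create tank raidz da0 da1 da2

   # Linux md + LVM: the same end result takes several layers
   mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
   pvcreate /dev/md0
   vgcreate vg0 /dev/md0
   lvcreate -L 500G -n data vg0
   mkfs.ext3 /dev/vg0/data
   mount /dev/vg0/data /data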


Well said ..

With the improvements in HDD reliability, redundancy among the drives within any one given box is no longer our 'hot spot' - and may never be again.

A SAN or NAS doesn't improve that - it potentially makes it worse w/r/t single points of failure.

Going forward, it makes more sense for *us* to drop each of our traditionally RAID1 or RAID5 boxes down to multiple, but non-RAID, HDD, connect the boxen to each other with Gig-E or better, and let each box carry the redundancy for one or more of the others.

Lower risk of outage from CPU or PSU fan failures... or 'fat fingers'.
Potential for IP-failover HA configuration.
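
Per pair of boxes, the HAMMER way of wiring that up looks roughly like this
(paths and hostnames made up, and this is from memory, so check hammer(8)
before trusting it):

   # on boxA: a master PFS holding boxA's own data
   hammer pfs-master /data/pfs/boxA-master

   # on boxB: a slave PFS to receive it -- the slave has to be created
   # with the master's shared-uuid (hammer pfs-status on boxA shows it)
   hammer pfs-slave /data/pfs/boxA-slave shared-uuid=<uuid-from-boxA>

   # then, from boxA, keep the slave in sync continuously over the wire
   hammer mirror-stream /data/pfs/boxA-master boxB:/data/pfs/boxA-slave

   # ...and the mirror image of the above for boxB's data, so each box
   # carries the other's redundancy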

Hence my testing as to how happy HAMMER is with the lowly VIA C3 (fair), C7 (quite decent), and C9/Nano (still 'to do', but it should be as good as it needs to be... 'gamers' or GUI-centric we are not).

... and the research on Gfarm, Gluster, Dfarm, Chiron, Ceph... yada, yada... But Linux - where most of these distributed tools presently perch - is just not an option for *our* use. 'Wetware' cost is too high.

FreeBSD's GEOM/gmirror has had the counterpart to hammer mirror-stream over the link (via ggate) for quite some time. But it is UFS(2)-only, and while that fs has never done me any harm, 'snapshots' on UFS are an add-on, not inherent.
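
For anyone who hasn't wired that up, it goes roughly like this (from memory,
device names made up -- see ggated(8), ggatec(8), gmirror(8), mksnap_ffs(8)):

   # on the remote box: export the bare disk over the network
   echo "10.0.0.2 RW /dev/ad1" >> /etc/gg.exports
   ggated

   # on the local box: attach the remote disk as a ggate device and
   # mirror the local disk against it
   ggatec create -o rw remotebox /dev/ad1      # comes back as ggate0
   gmirror label -v gm0 /dev/ad1 /dev/ggate0
   newfs -U /dev/mirror/gm0

   # and UFS snapshots are the bolted-on part, e.g.:
   mksnap_ffs /usr /usr/.snap/today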

Enter HAMMER... not because it is necessarily any better (yet) than some form of 'grid farm' w/r/t distributed storage...

... but because HAMMER was optimized from the ground up for inherent ease of snapshot management, to a higher degree than anything else since Fossil/Venti - which did not scale well... and *did* break now and then.
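
The day-to-day snapshot housekeeping really is about this much (paths made up):

   # take a snapshot -- the softlink ends up pointing at the filesystem
   # as of that transaction id
   hammer snapshot /data /data/snaps/before-upgrade

   # run the periodic prune/reblock/expire pass per the retention config
   hammer cleanup /data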

Likewise, hammer mirror-stream (so far) looks to be very good at steady, unobtrusive work - not likely to overload the primary or the bandwidth.

JM2CW, but I suspect we are not alone in wanting to stop sending to the landfill so many RAID HDD that haven't actually failed.

Or paying to heat a CPU, head-positioner, and NIC with constant rsync calculations.

JM2CW

Bill
