On Sat, Jul 31, 2021 at 8:59 AM William Kenworthy <bi...@iinet.net.au> wrote:
>
> I tried using moosefs with a rpi3B in the
> mix and it didn't go well once I started adding data - rpi 4's were not
> available when I set it up.

Pi2/3s only have USB2 as far as I'm aware, and they stick the ethernet
port on that USB bus besides.  So, they're terrible for anything that
involves IO of any kind.

The Pi4 moves the ethernet off of USB, upgrades it to gigabit, and has
two USB3 hosts, so it is just an all-around massive improvement.
Obviously it isn't going to outclass some server-grade system with a
gazillion PCIe v4 lanes, but it is very good for an SBC at that price.

I'd love server-grade ARM hardware, but it is just so expensive unless
there is some source out there I'm not aware of.  It is crazy that you
can't get more than 4-8GiB of RAM on an affordable ARM system.

> I think that SMR disks will work quite well
> on moosefs or lizardfs - I don't see long continuous writes to one disk
> but a random distribution of writes across the cluster with gaps between
> on each disk (1G network).

So, the distributed filesystems divide all IO (including writes)
across all the drives in the cluster.  When you have a number of
drives, that obviously increases the total amount of IO you can handle
before the SMR drives start hitting the wall.  Writing 25GB of data to
a single SMR drive will probably overrun its CMR cache, but if you
split it across 10 drives and write 2.5GB to each, there is a decent
chance they'll all have room in the cache, take the write quickly, and
then clear the buffer afterwards as long as your writes aren't
sustained.
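
To put rough numbers on it, here is a quick back-of-the-envelope
sketch in Python.  The per-drive CMR cache size is just an assumption
for illustration; it varies quite a bit by drive model:

    # Per-drive share of a write burst vs. an assumed CMR cache size.
    burst_gb = 25        # total incoming write burst
    drives = 10          # drives sharing the writes in the cluster
    cmr_cache_gb = 20    # assumed per-drive CMR cache (hypothetical)

    per_drive_gb = burst_gb / drives
    print(f"each drive takes ~{per_drive_gb:.1f}GB of {burst_gb}GB")
    if per_drive_gb <= cmr_cache_gb:
        print("likely fits in the CMR cache and is absorbed quickly")
    else:
        print("likely overruns the cache and drops to SMR rewrite speed")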

I think you're still going to have an issue in a rebalancing scenario
unless you're adding many drives at once so that the network becomes
rate-limiting instead of the disks.  Having unreplicated data sitting
around for days or weeks due to slow replication performance is
setting yourself up for multiple failures.  So, I'd still stay away
from them.

If you have 10GbE then your ability to overrun those disks goes way
up.  Ditto if you're running something like Ceph, which can achieve
higher performance.  I'm just doing bulk storage where I care a lot
more about capacity than performance.  If I were trying to run a k8s
cluster or something I'd be on Ceph on SSD or whatever.

> With a good adaptor, USB3 is great ... otherwise its been quite
> frustrating :(  I do suspect linux and its pedantic correctness trying
> to deal with hardware that isn't truly standardised (as in the
> manufacturer probably supplies a windows driver that covers it up) is
> part of the problem.  These adaptors are quite common and I needed to
> apply the ATA command filter and turn off UAS using the usb tweaks
> mechanism to stop the crashes and data corruption.  The comments in the
> kernel driver code for these adaptors are illuminating!
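
For reference, I believe the mechanism you're describing is the
usb-storage.quirks kernel parameter.  Something along these lines
should do both, with aaaa:bbbb replaced by the adapter's actual
vendor:product ID from lsusb (that's just a placeholder, not a real
ID):

    usb-storage.quirks=aaaa:bbbb:t,u

where t is NO_ATA_1X (filters the ATA(12)/ATA(16) pass-through
commands) and u is IGNORE_UAS (keeps the device off the uas driver).
IIRC the same string can also be written at runtime to
/sys/module/usb_storage/parameters/quirks.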

Sometimes I wonder.  I occasionally get errors in dmesg about
unaligned writes when using zfs, and others have seen these too.  The
zfs developers seem convinced that zfs isn't the cause and is merely
reporting the problem, or maybe it just shows up under the kind of
load you're more likely to get with zfs scrubbing (which IMO performs
far worse than a btrfs scrub - I'm guessing it isn't optimized to scan
each disk in physically sequential order but instead works in a more
logical order, synchronously between mirror pairs).  Sometimes I
wonder if there is just some sort of bug in the HBA drivers, or maybe
in the hardware on the motherboard.  Consumer PC hardware (like all PC
hardware) is basically a black box unless you have pretty
sophisticated testing equipment and knowledge, so if your SATA host is
messing things up, how would you know?

-- 
Rich
