On Sun, Mar 1, 2020 at 8:52 PM William Kenworthy <[email protected]> wrote:
>
> For those wanting to run a lot of drives on a single host - that defeats
> the main advantage of using a chunkserver-based filesystem -
> redundancy.  It's far more common to have a host fail than a disk drive.
> Losing the major part of your storage in one go means the cluster is
> effectively dead - hence having a lot of completely separate systems is
> much more reliable.

Of course.  You should have multiple hosts before you start putting
multiple drives on a single host.

However, once you have a few hosts, adding more still improves
performance, but you're not really getting THAT much additional
redundancy.  You would get faster rebuild times from having more hosts,
since there would be less data to transfer when one fails and more
hosts doing the work.
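
To put some very rough numbers on the rebuild point, here is a
back-of-the-envelope sketch (the per-host data size and network speed
are made-up assumptions, and it ignores replication overhead and disk
throughput entirely):

    # Back-of-the-envelope rebuild time: when a host dies, its data gets
    # re-replicated across the survivors, so more hosts means less time.
    # All of the numbers below are made-up assumptions, not measurements.
    DATA_PER_HOST_TB = 16    # data stored on the failed host
    NIC_GBPS = 1.0           # usable network throughput per host, Gbit/s

    for hosts in (3, 6, 12):
        survivors = hosts - 1
        bits_to_move = DATA_PER_HOST_TB * 8e12
        hours = bits_to_move / (survivors * NIC_GBPS * 1e9) / 3600
        print(f"{hosts} hosts: roughly {hours:.0f} hours to re-replicate")

With those (arbitrary) numbers a rebuild drops from about 18 hours with
3 hosts to about 3 hours with 12 - that is really the benefit you keep
getting as you add hosts.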

So, it is about finding a balance.  You probably don't want 30 drives
on 2 hosts.  However, you probably also don't need 15-30 hosts for
that many drives either.  I wouldn't be putting 16 drives onto a
single host until I had a fair number of hosts.

As for the status of lizardfs: as far as I can tell it is mostly
developed by a company, and they've wavered a bit on support over the
last year.  I share your observation that they seem to be picking up
again.  In any case, I'm running the latest stable release and it works
just fine, but it lacks the high availability features.  I can have
shadow masters, but they won't automatically fail over, so maintenance
on the master is still a pain.  Recovery from a master failure should
be pretty quick even if it is manual: run a command on each shadow to
determine which has the most recent metadata, adjust DNS so my master
CNAME points to the new master, edit the config on the new master to
tell it that it is now the master and no longer a shadow, and restart
the daemon - at that point the cluster should be back online.
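
For what it's worth, the "figure out which shadow is most current" step
could be scripted along these lines - a minimal sketch, assuming the
lizardfs-admin metadataserver-status subcommand reports a metadata
version (the hostnames, port, and output parsing here are my own
assumptions, so check them against your install):

    #!/usr/bin/env python3
    # Rough sketch: ask each shadow master for its metadata version and
    # report the most up-to-date one.  The hostnames, the 9421 port, and
    # the output parsing are assumptions for illustration only.
    import re
    import subprocess

    SHADOWS = ["shadow1.example.com", "shadow2.example.com"]  # hypothetical
    PORT = "9421"  # default master port - verify for your setup

    def metadata_version(host):
        out = subprocess.run(
            ["lizardfs-admin", "metadataserver-status", host, PORT],
            capture_output=True, text=True, check=True).stdout
        m = re.search(r"version\D*(\d+)", out, re.IGNORECASE)
        return int(m.group(1)) if m else -1

    best = max(SHADOWS, key=metadata_version)
    print("Most recent metadata appears to be on:", best)

After that it is still the manual part - repoint the CNAME, flip that
host's personality from shadow to master in its config, and restart the
daemon.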

The latest release candidate has the high availability features (they
used to be paid, now they're free), but it is still a release candidate
and I'm not in that much of a rush.  There was a lot of griping on the
forums/etc by users who switched to the release candidate and ran into
bugs that ate their data.  IMO that is why you don't go running
release candidates for distributed filesystems with a dozen hard
drives on them - if you want to try them out just run them in VMs with
a few GB of storage to play with and who cares if your test data is
destroyed.  It is usually wise to be conservative with your
filesystems.  Makes no difference to me if they take another year to
do the next release - I'd like the HA features but it isn't like the
old code goes stale.

Actually, the one thing it would be nice to see them fix is the FUSE
client - it seems to leak RAM.

Oh, and the docs seem to hint at a Windows client somewhere, which
would be really nice to have, but I can't find any trace of it.  I
normally only run a single client, but with a Windows client it would
obviously work well as a general-purpose fileserver.

There has been talk of a substantial rewrite, though I'm not sure if
that will actually happen now.  If it does, I hope they keep the RAM
requirements low on the chunkservers.  That was the main thing that
turned me off from ceph - it is a great platform in general, but
needing 1GB of RAM per 1TB of disk adds up really fast, and it
basically precludes ARM SBCs as OSDs, since you can't get those with
that much RAM for any sane price - even if you were only running one
drive per host, good luck finding an SBC with 13GB+ of RAM.  You can
tune ceph to use less RAM, but I've heard that bad things happen if
hosts shuffle around during a rebuild and you don't have gobs of RAM -
all the OSDs end up with an impossible backlog and they keep crashing
until you run around like Santa Claus filling every stocking with a
handful of $60 DIMMs.
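
Just to show how quickly that rule of thumb adds up (the drive counts
and sizes below are arbitrary examples, not my actual setup):

    # Ceph's rough guidance: on the order of 1GB of RAM per 1TB of OSD
    # storage.  The drive counts and sizes are arbitrary examples.
    RAM_PER_TB_GB = 1

    for drives, size_tb in [(1, 14), (4, 8), (16, 8)]:
        ram_gb = drives * size_tb * RAM_PER_TB_GB
        print(f"{drives} x {size_tb}TB drives -> ~{ram_gb}GB RAM for OSDs")

A single 14TB drive already wants ~14GB, and a 16-drive box is into
triple digits.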

Right now lizardfs uses almost no RAM at all on the chunkservers, so an
ARM SBC could run dozens of drives without an issue.

-- 
Rich
