On Mon, Dec 19, 2022 at 11:43 AM Mark Knecht <markkne...@gmail.com> wrote:
>
> On Mon, Dec 19, 2022 at 6:30 AM Rich Freeman <ri...@gentoo.org> wrote:
> <SNIP>
> > My current solution is:
> > 1. Moosefs for storage: amd64 container for the master, and ARM SBCs
> > for the chunkservers which host all the USB3 hard drives.
>
> I'm trying to understand the form factor of what you are mentioning above.
> Presumably the chunkservers aren't sitting on a lab bench with USB
> drives hanging off of them. Can you point me toward an example of
> what you are using?

Well, a few basically are just sitting on a bench, but most are in
half-decent cases (I've found that Pi4s really benefit from a decent
case as they will thermal throttle otherwise).  I then have USB3 hard
drives attached.  I actually still have one RockPro64 with an LSI
HBA on a riser card, but I'm going to be moving those drives to
USB3-SATA adapters: dealing with the kernel patches needed to fix the
broken PCIe driver is too much fuss, and the HBA uses a TON of power
that I didn't anticipate when I bought it (ugh, server gear).

Really, at this point my go-to for anything new is a 2GB Pi4 in an
Argon ONE v2 case.  Then I just get USB3 hard drives on sale at
Best Buy for ~$15/TB if possible.  USB3 will handle a few hard drives
depending on how much throughput they're getting, but this setup is
more focused on capacity/cost than performance anyway.

The low memory requirement for the chunkservers is a big part of why I
went with MooseFS instead of Ceph.  Ceph recommends something like
4GB of RAM per OSD (i.e. per hard drive), which adds up very fast.

The USB3 hard drives do end up strewn about a fair bit, but they have
their own enclosures anyway.  I just label them well.
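
There isn't much to adding a drive on the chunkserver side anyway:
mount it somewhere and list it in mfshdd.cfg.  Roughly like this (the
paths here are made up, not my actual ones):

# /etc/mfs/mfshdd.cfg - one chunk storage path per line
/mnt/usb-wd8tb
/mnt/usb-seagate8tb
# prefixing a path with * marks that drive for removal, so its chunks
# get replicated elsewhere before you pull it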

>
> I've been considering some of these new mini-computers that have
> a couple of 2.5Gb/s Ethernet ports and 3 USB 3 ports but haven't
> moved forward because I want it packaged in a single case.

Yeah, better ethernet is definitely on my wish list.  I'll take a
look at the state of those mini-computers the next time I add a
node.

> Where does the master reside? In a container on your desktop
> machine or is that another element on your network?

In an nspawn container on one of my servers.  It is pretty easy to set
up or migrate so it can go anywhere, but it does benefit from a bit
more CPU/RAM.  Running it in a container creates obvious dependency
challenges if I want to mount moosefs on the same server - that can
be solved with systemd dependencies, but systemd won't figure them
out on its own.
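
In case it isn't obvious what I mean by that: it's just a drop-in on
the mount unit so that mounting waits for the master's container,
something along these lines (unit and path names are illustrative,
not my exact ones):

# /etc/systemd/system/mnt-moosefs.mount.d/wait-for-master.conf
# (illustrative - match your actual mount point and machine name)
[Unit]
After=systemd-nspawn@mfsmaster.service
Wants=systemd-nspawn@mfsmaster.service

Without something like that, systemd will happily try to mount mfs
before the master container has started.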

> > 2. Plex server in a container on amd64 (looking to migrate this to k8s
> > over the holiday).
>
> Why Kubernetes?

I run it 24x7.  This is half an exercise to finally learn and grok
k8s, and half an effort to just develop better container practices in
general.  Right now all my containers run in nspawn, which is actually
a very capable engine, but it does nothing for image management, so my
containers are more like pets than cattle.  I want to get to a point
where everything is defined by a few trivially backed-up config files.

One thing I do REALLY prefer with nspawn is its flexibility around
networking.  An nspawn container can use a virtual interface attached
to any bridge, which means you can give each container its own IPs,
routes, gateways, VLANs, and so on.  Docker and k8s are pretty decent
about giving containers a way to listen on the network for
connections (especially k8s ingress or load balancers), but they do
nothing really to manage the outbound traffic, which just uses the
host network config.  On a multi-homed network, or when you want to
run services on specific VLANs, that seems like a lot of trouble.
Sure, you can go crazy with iptables and iproute2 and so on, but I
used to do that with non-containerized services and hated it.  With
nspawn it is pretty trivial to set that stuff up and give any
container whatever set of interfaces you want, bridged however you
want them.

I actually fussed a little with running a k8s node inside an nspawn
container so that I could just tie pods to nodes to do exotic
networking, but a cluster inside a container (using microk8s, which
runs under snapd) was just a bridge too far...
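
To give an idea of what I mean by trivial on the nspawn side:
attaching a container to an existing host bridge is just a couple of
lines in its .nspawn file, something like this (the container and
bridge names here are made up, not my actual setup):

# /etc/systemd/nspawn/plex.nspawn  (illustrative names)
[Network]
# nspawn creates a veth pair and attaches the host side to this
# bridge; the container end then gets its own IP/routes/gateway
Bridge=br-vlan20

Inside the container you configure addresses and routes just like on
any other host, so VLAN-specific or multi-homed setups stay out of
the host's network config entirely.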

-- 
Rich
