Re: [gentoo-user] Rasp-Pi-4 Gentoo servers

2020-02-28 Thread Rich Freeman
On Fri, Feb 28, 2020 at 8:11 PM Daniel Frey  wrote:
>
> Thanks for the detail, I've just ordered an RPi4B to mess around with.
> It would be helpful to move DNS etc off my home server as I'm trying to
> separate everything into VLANs.
>

Keep in mind that Linux supports VLAN tagging, so if you set up your
switch to trunk the VLANs to your server's port you can have containers
or even services on multiple VLANs on the same host.

I have this configured via systemd-networkd - I'm sure you could do it
with various other network managers as well.  I just have a bridge for
each VLAN and then I can attach container virtual ethernet interfaces
to the appropriate VLAN bridge for each container.  KVM uses bridges
and it should be just as easy to put VMs on the appropriate bridges.
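
For reference, here's a minimal systemd-networkd sketch of one VLAN plus
its bridge - the names, the VLAN ID, and the trunk NIC (eth0) are all
hypothetical, so adjust to your setup:

  # /etc/systemd/network/20-eth0.network -- the trunked uplink
  [Match]
  Name=eth0

  [Network]
  VLAN=vlan10

  # /etc/systemd/network/20-vlan10.netdev -- tagged interface for VLAN 10
  [NetDev]
  Name=vlan10
  Kind=vlan

  [VLAN]
  Id=10

  # /etc/systemd/network/30-br10.netdev -- the per-VLAN bridge
  [NetDev]
  Name=br10
  Kind=bridge

  # /etc/systemd/network/30-vlan10.network -- enslave vlan10 to br10
  [Match]
  Name=vlan10

  [Network]
  Bridge=br10

Container veths (or KVM taps) then just get attached to br10, and the
pattern repeats for each VLAN.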

If you assign IPs on the host to each VLAN interface then as long as
the VLANs don't have conflicting IP addresses you can just attach
services to the appropriate VLANs by binding to their addresses.  A
service that binds to 0.0.0.0 or to multiple addresses would listen on
all of them.  Now, if your VLANs have conflicting address spaces then
I'd probably just stick to containers so that no host actually sees
conflicting IPs, otherwise you're probably going to have to go crazy
with iproute2 and netfilter to get all the packets going to the right
places.
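
As a concrete (made-up) illustration: with 192.168.10.1/24 on br10 and
192.168.20.1/24 on br20, something like dnsmasq can be pinned to a
single VLAN by its address:

  # give the host an address on each VLAN bridge (example addresses;
  # with systemd-networkd you'd put Address= in each bridge's .network)
  ip addr add 192.168.10.1/24 dev br10
  ip addr add 192.168.20.1/24 dev br20

  # serve only VLAN 10 by binding to its address
  dnsmasq --listen-address=192.168.10.1 --bind-interfaces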

And all of that should work from a Pi as well, as long as you enable
CONFIG_VLAN_8021Q.  You also need to make sure the tagged VLAN
traffic is passed from the switch (which is not what you normally want
for a non-VLAN-aware host, where you would filter out all but one
VLAN and remove the tag).
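
A quick way to check for that option on a running kernel (the first
form assumes CONFIG_IKCONFIG_PROC is enabled):

  zgrep CONFIG_VLAN_8021Q /proc/config.gz

  # or, if it was built as a module:
  modprobe 8021q && lsmod | grep 8021q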

I run my DHCP server on a Pi so that it is more independent.

-- 
Rich



Re: [gentoo-user] Rasp-Pi-4 Gentoo servers

2020-02-28 Thread Daniel Frey

On 2/27/20 1:49 PM, Rich Freeman wrote:
> On Thu, Feb 27, 2020 at 4:25 PM james  wrote:
>>
>> Yeah, I was not clear. I'd run the mail server on a 'cluster' (4 or
>> more), not an individual Pi board unless it was beefed up, processor-
>> and RAM-wise. Gig-E would also be on my list.
>>
> 
> Unless you have some niche need I wouldn't generally run servers on
> Pis.  The biggest issue with ARM is that all the cheap platforms are
> starved for RAM, and RAM is one of the biggest issues when running
> services.  And of course the Pi in particular has IO issues (as do
> many other cheap SBCs, but this is less of an ARM issue).  The RAM
> issue isn't so much an ARM issue as a supply/demand thing - the only
> people asking for 64GB ARM boards are big companies that are willing
> to pay a lot for them.
> 
> I do actually run a few services on Pis - DNS, DHCP, and a VPN
> gateway.  That's about it.  These are fairly non-demanding tasks that
> the hardware doesn't struggle with, and the data is almost entirely
> static, so an occasional backup makes any kind of recovery trivial.
> The only reason I run these services on Pis is that they are fairly
> fundamental to having a working network.  Most of my services are
> running in containers on a server, but I don't want to have to think
> about taking a server down for maintenance and then having literally
> every IOT device in the house stop working.  These particular services
> are also basically dependency-free, which means I can just boot them
> up and they just do their jobs, while they remain a dependency for
> just about everything else on the network.  When you start running
> DHCP in a container you have more complex dependency issues.
> 
> A fairly cheap amd64 system can run a ton of services in containers
> though, and it is way simpler to maintain that way.  I still get quick
> access to snapshots/etc, but now if I want to run a gentoo container
> it is no big deal if 99% of the time it uses 25MB of RAM and 1% of one
> core, but once a month it needs 4GB of RAM and 100% of 6 cores.  As
> long as I'm not doing an emerge -u world on half a dozen containers at
> once it is no big deal at all.
> 
> Now, if I needed some server in some niche application that needed to
> be able to operate off of a car battery for a few days, then sure, I'd
> be looking at Pis and so on.



Thanks for the detail, I've just ordered an RPi4B to mess around with. 
It would be helpful to move DNS etc off my home server as I'm trying to 
separate everything into VLANs.


Dan



Re: [gentoo-user] Rasp-Pi-4 Gentoo servers

2020-02-28 Thread Peter Humphrey
On Friday, 28 February 2020 13:28:53 GMT Wols Lists wrote:
> On 28/02/20 11:45, Michael wrote:

--->8

> > http://www.runmapglobal.com/blog/fault-tolerant-dedicated-servers/
> 
> Noted. That link pre-dates me working on the site - I haven't checked
> all the old links - I guess I should ...

If you run a KDE desktop, KDE link checker is the business.

-- 
Regards,
Peter.






Re: [gentoo-user] Rasp-Pi-4 Gentoo servers

2020-02-28 Thread Wols Lists
On 28/02/20 11:45, Michael wrote:
> On Friday, 28 February 2020 11:04:43 GMT Wols Lists wrote:
>> On 28/02/20 05:07, james wrote:
>>> For data storage, long-term important stuff, you should employ RAID
>>> (1-10). We can get into that later; duplication of important data, via
>>> backups or extra storage, is a good idea too. Backups are an old
>>> technology and may help, but backups can get old and fragmented too.
>>> For now, let's not worry so much about long-term bit integrity, but
>>> focus on your next FUN gentoo rig. I'm hoping others join in too so
>>> you have more than my perspective on your solution.
>>
>> https://raid.wiki.kernel.org/index.php/Linux_Raid
>>
>> Sorry for the plug, I edit the site. Brickbats/bouquets welcome :-)
>>
>> Cheers,
>> Wol
> 
> Thanks for sharing the page, Wol!
> 
> I tried this page from the links at the bottom and ended up with a 404 error:
> 
> http://www.runmapglobal.com/blog/fault-tolerant-dedicated-servers/
> 
Noted. That link pre-dates me working on the site - I haven't checked
all the old links - I guess I should ...

Cheers,
Wol



Re: [gentoo-user] Rasp-Pi-4 Gentoo servers

2020-02-28 Thread Rich Freeman
On Fri, Feb 28, 2020 at 6:09 AM Wols Lists  wrote:
>
> On 27/02/20 21:49, Rich Freeman wrote:
> > A fairly cheap amd64 system can run a ton of services in containers
> > though, and it is way simpler to maintain that way.  I still get quick
> > access to snapshots/etc, but now if I want to run a gentoo container
> > it is no big deal if 99% of the time it uses 25MB of RAM and 1% of one
> > core, but once a month it needs 4GB of RAM and 100% of 6 cores.  As
> > long as I'm not doing an emerge -u world on half a dozen containers at
> > once it is no big deal at all.
>
> Do all your containers have the same make options etc? Can't remember
> which directory it is, but I had a shared emerge directory where it
> stored this stuff and I emerged with -bk options (use binary if it's
> there, create binary if it isn't).
>

They're probably not too far off in general, but not exact.  I only
run one instance of any particular container, so I haven't tried to do
parallel builds.  If portage had support for multiple binary packages
co-existing with different build options I might.  If I ever get
really bored for a few weeks I could see playing around with that.  It
seems like it ought to be possible to content-hash the list of build
options and stick that hash in the binary package filename, and then
have portage search for suitable packages, using a binary package if
one matches, and doing a new build if not.
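
A rough shell sketch of that idea - the variable list and the naming
scheme here are hypothetical, not anything portage actually does:

  # fingerprint the build-relevant settings
  opts="$(portageq envvar USE CFLAGS CXXFLAGS LDFLAGS | sort)"
  hash="$(printf '%s' "$opts" | sha256sum | cut -c1-12)"

  # a binary package could then be named e.g. vim-8.2-${hash}.tbz2;
  # use it if the name matches, fall back to a source build if not
  echo "would search for: vim-8.2-${hash}.tbz2"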

Many of my containers don't even run Gentoo.  I have a few running
Arch, Ubuntu Server, or Debian.  If some service is well-supported in
one of those and is poorly supported in Gentoo I will tend to go that
route.  I'll package it if reasonable but some upstreams are just not
very conducive to this.

There was a question about ARM-based NAS in this thread which I'll go
ahead and tackle to save a reply.  I'm actually playing around with
lizardfs (I might consider moosefs instead if starting from scratch -
or Ceph if I were scaling up but that wouldn't be practical on ARM).
I have a mix of chunkservers but my target is to run new ones on ARM.
I'm using RockPro64 SBCs with LSI HBAs (this SBC is fairly unique in
having PCIe).  There is some issue in the lizardfs code that hurts
performance on ARM, though I understand they're working on it, so that
could change.
about static space than iops, so it is fine for me.  The LSI HBA pulls
more power than the SBC does, but overall the setup is very low-power
and fairly inexpensive (used HBAs on ebay).  I can in theory get up to
16 drives on one SBC this way.  The SBC also supports USB3 so that is
another option with a hub - in fact I'm mostly shucking USB3 drives
anyway.

The main issue with ARM SBCs in general is that they don't have much
RAM, and IMO that makes Ceph a non-starter.  Otherwise that would
probably be my preferred option.  Bad things can happen on rebuilds if
you don't have the 1GB of RAM per TB of storage that they suggest (a
dozen 10TB drives would already call for ~120GB by that rule), and
even with the relatively under-utilized servers I have now that would
be a LOT of RAM for ARM (really, it would be expensive even on amd64).
Lizardfs/moosefs chunkservers barely use any RAM at all.  The master
server does need more - I have shadow masters running on the SBCs, but
since I'm using this for multimedia the metadata server only uses
about 100MB of RAM, and that includes processes, libraries, and random
minimal service daemons like sshd.  I'm running my master on amd64
though, to get optimal performance, shadowed on the chunkservers so
that I can fail over if needed - though in truth the amd64 box with
ECC is the least likely thing to die, and it runs all the stuff that
uses the storage right now anyway.
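
For anyone curious, a chunkserver takes very little configuration - a
minimal sketch, assuming lizardfs's MooseFS-derived config layout (the
hostname and mount points are made up):

  # /etc/mfs/mfschunkserver.cfg
  MASTER_HOST = lizard-master.lan

  # /etc/mfs/mfshdd.cfg -- one data directory per line, one per drive
  /mnt/chunk1
  /mnt/chunk2
  /mnt/chunk3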

The other suggestion, to consider USB3 instead of SATA for storage,
isn't a bad idea.  Though going that route means wall warts and drives
as far as the eye can see.  It might still be less messy than my
setup, which has a couple of cheap ATX PSUs with ATX power switches,
16x PCIe powered risers for the HBAs (they pull too much power for the
SBC), and rockwell drive cages to stack the drives in (they're meant
for a server chassis but they're reasonably priced and basically give
you an open enclosure with a fan).  I'd definitely have a lot fewer
PCBs showing if I used USB3 instead.  I'm not sure how well that would
perform though - the HBA (SAS9200-16E) has a lot of bandwidth with its
PCIe v2 x4 connectivity if the node got busy, while with USB3 it would
all go through 1-2 ports.  Though I doubt I'd ever get THAT many
drives on a node, and if I needed more space I'd probably expand up to
5 chunkservers before putting more than about 3 drives on each - you
get better performance and more fault-tolerance that way.
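
Back-of-envelope numbers, assuming ~500MB/s per PCIe v2 lane and ~5Gb/s
(~500MB/s) per USB3 port before protocol overhead:

  echo "$((4 * 500)) MB/s"   # HBA on a PCIe v2 x4 link: ~2000 MB/s
  echo "$((2 * 500)) MB/s"   # two USB3 ports:           ~1000 MB/s
  # either comfortably feeds ~3 spinning disks at 150-250MB/s apiece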

One big reason I went with the distributed-filesystem approach was
that I was getting tired of trying to cram as many drives as I could
into a single host and then dealing with some of the inflexibilities
of zfs.  The inflexibility bit is improving somewhat with removable
vdevs, though I'm not sure how much residue 

Re: [gentoo-user] Rasp-Pi-4 Gentoo servers

2020-02-28 Thread Michael
On Friday, 28 February 2020 11:04:43 GMT Wols Lists wrote:
> On 28/02/20 05:07, james wrote:
> > For data storage, long-term important stuff, you should employ RAID
> > (1-10). We can get into that later; duplication of important data, via
> > backups or extra storage, is a good idea too. Backups are an old
> > technology and may help, but backups can get old and fragmented too.
> > For now, let's not worry so much about long-term bit integrity, but
> > focus on your next FUN gentoo rig. I'm hoping others join in too so
> > you have more than my perspective on your solution.
> 
> https://raid.wiki.kernel.org/index.php/Linux_Raid
> 
> Sorry for the plug, I edit the site. Brickbats/bouquets welcome :-)
> 
> Cheers,
> Wol

Thanks for sharing the page, Wol!

I tried this page from the links at the bottom and ended up with a 404 error:

http://www.runmapglobal.com/blog/fault-tolerant-dedicated-servers/




Re: [gentoo-user] Rasp-Pi-4 Gentoo servers

2020-02-28 Thread Wols Lists
On 27/02/20 21:49, Rich Freeman wrote:
> A fairly cheap amd64 system can run a ton of services in containers
> though, and it is way simpler to maintain that way.  I still get quick
> access to snapshots/etc, but now if I want to run a gentoo container
> it is no big deal if 99% of the time it uses 25MB of RAM and 1% of one
> core, but once a month it needs 4GB of RAM and 100% of 6 cores.  As
> long as I'm not doing an emerge -u world on half a dozen containers at
> once it is no big deal at all.

Do all your containers have the same make options etc? Can't remember
which directory it is, but I had a shared emerge directory where it
stored this stuff and I emerged with -bk options (use binary if it's
there, create binary if it isn't).

That way, when I updated my systems, I updated the "big grunt" system
first, then the smaller ones, so the little ones didn't have to emerge
anything other than what was unique to them.
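
The directory in question is presumably PKGDIR; a minimal sketch of
that workflow, with a made-up shared path:

  # where portage keeps binary packages on this box:
  portageq envvar PKGDIR

  # on the "big grunt" box: update and save binary packages (-b)
  emerge -avuDN --buildpkg @world

  # on the smaller boxes, with PKGDIR pointing at the shared copy:
  # prefer binaries (-k), build only what's missing or unique
  PKGDIR=/mnt/binpkgs emerge -avuDN --usepkg @world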

Cheers,
Wol



Re: [gentoo-user] Rasp-Pi-4 Gentoo servers

2020-02-28 Thread Wols Lists
On 28/02/20 05:07, james wrote:
> For data storage, long-term important stuff, you should employ RAID
> (1-10). We can get into that later; duplication of important data, via
> backups or extra storage, is a good idea too. Backups are an old
> technology and may help, but backups can get old and fragmented too.
> For now, let's not worry so much about long-term bit integrity, but
> focus on your next FUN gentoo rig. I'm hoping others join in too so
> you have more than my perspective on your solution.

https://raid.wiki.kernel.org/index.php/Linux_Raid

Sorry for the plug, I edit the site. Brickbats/bouquets welcome :-)

Cheers,
Wol



Re: [gentoo-user] Rasp-Pi-4 Gentoo servers

2020-02-28 Thread Michael
On Friday, 28 February 2020 05:07:07 GMT james wrote:
> On 2/27/20 9:53 PM, Dale wrote:
> > james wrote:
> >> 5G + gentoo + embedded toys is going to be FUN FUN FUN.
> >> 
> >> 
> >> Then I'll be off to other states, via a hacked out Redneck
> >> camper.. and too many microProcessors
> >> 
> >> 
> >> Thanks Rich, your insights and comments are always most welcome.
> >> 
> >> 
> >> James
> > 
> > Off topic a bit but a question.  Would one of these Rasp-Pi-4 thingys
> > make a NAS hard drive server?
> 
> Sure, but there may be a better solution, something already out there,
> and it really depends on refining your needs, current and future.
> So let's refine your specifications (centric to your needs + growth) and
> figure out what and how much you need. Then we can survey the
> embedded thingies that meet your specs, with a bit of room for growth, OK?
> 
> > I have a Cooler Master HAF-932 case
> 
> Wow, that's big. What are the number and capacity (TB) of
> your existing hard drives?
> 
> How much more storage do you want?  Replacing drives with larger
> capacity might be all you need to do.
> 
> > but
> > even it is running out of hard drive space.  I'm thinking about building
> > a NAS box, taking sheet metal and bending it until it looks like a box.
> 
> OK, so we first spec out options, then let you decide. Then you can
> 'bargain shop' for appropriate housing/rack/open chassis, etc.
> 
> > Thing is, it needs a small puter to take data from the drives to the
> > network and vice-versa.
> 
> Embedded boards are not only small, they can have extended temperature
> tolerance ranges, use drastically less power, and offer many other
> features. If it's purposed hardware that only has a few things to do,
> then yes, embedded uP (abbrev for microprocessor) boards are the way
> to go. Running off of 12VDC means an old car battery and a connection
> to your solar panels (assuming you have those), and it's zero on your
> electric bill. There is usually a vast array of tax and other
> incentives, particularly with solar in Ag businesses.
> 
> > I've never even seen one of those things, except on my monitor, so I
> > have no idea what all they are capable of.
> 
> Dale, you are pretty strong with Gentoo Linux, so putting a stripped,
> purposed, minimized gentoo-derivative stack, with far fewer ebuilds,
> to work for your operations is going to be quite fun. On a farm or
> ranch, there are a myriad of things you can do with embedded boards
> and gentoo-stripped. You can replace many of those expensive (vendor)
> systems with embedded boards + sensors + control code and lots of
> wires to do most anything. Let's focus on your NAS for now.
> 
> > I figure a lot of SATA connectors and an ethernet connection
> > plus enough CPU power and memory to get the job done.
> 
> SATA was great years ago. It still makes sense to use if you already
> have the drives. Storage going forward keeps getting faster and
> cheaper and is leaving SATA behind, like IDE. Still useful, but a
> power hog. So we'll start out by interfacing your existing SATA
> drives to the embedded board, and then look at/decide on options for
> newer solid-state storage.
> 
> 
>   https://en.wikipedia.org/wiki/USB
> 
> You might not even need many SATA ports. USB3 and the upcoming USB4
> have tons of bandwidth (data/time). Mechanical hard drives are on the
> way out: too expensive and failure-prone. SSDs and other types of
> storage might be right for you, or a mixture. USB stick memory can be
> huge, with very low power draw, and very inexpensive.
> 
> A hybrid of several types of memory storage may be useful to experiment
> with. You may want to categorize your long term storage: some accessed
> often, others maybe once a year?
> 
> 
> For data storage, long-term important stuff, you should employ RAID
> (1-10). We can get into that later; duplication of important data, via
> backups or extra storage, is a good idea too. Backups are an old
> technology and may help, but backups can get old and fragmented too.
> For now, let's not worry so much about long-term bit integrity, but
> focus on your next FUN gentoo rig. I'm hoping others join in too so
> you have more than my perspective on your solution.
> 
> > If those things are capable of doing that fairly
> > easily.  After all, I'm me.  :/
> 
> OK, so let's survey some systems you can just purchase with gentoo
> preinstalled, or with a very easy pathway to embedded gentoo. Let's
> look at a few, have some of the other guys jump in, and find you a
> solution to start with. Most will be expandable, and you can figure
> out the casing, mounting, power and such.
> 
> At this stage, it's mostly a research effort and then deciding your
> features/price.  If you do not have massive bandwidth requirements,
> I'm sure we can find you a very cost-effective, DC-powered solution.
> 
> Just so you know, I use that fancy $300 Optima 12VDC charger, and
> Optima batteries. The charger reconditions most batteries, if they