Hi. Michael told me to bomb this thread instead of another one ;)

Here is what I've seen about the current unofficial docker image
landscape:
    
- They're all designed for x86. I imagine we'd want x86+arm32v7
  instead (a multi-arch build sketch follows this list).
- Many are not that elegant. The one I liked most was, IIRC, from
  'justifiably':
  https://hub.docker.com/r/justifiably/logitechmediaserver/dockerfile
  (GitHub repo: https://github.com/justifiably/docker-logitechmediaserver)
- They're heavy as hell: Ubuntu or Debian "minimal" Docker images are
  huge. Alpine would be a much better candidate if LMS can compile
  against the musl libc (I don't know what I am talking about here, but
  the image size difference is spectacular, at least on ARM).
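
On the first point, a multi-arch image can be produced from a single
Dockerfile, e.g. with Docker's buildx plugin. A rough sketch, assuming
buildx is available and using a placeholder image name:

    # rough sketch, assuming a recent Docker with the buildx plugin enabled;
    # "example/lms" is a placeholder image name
    docker buildx create --use
    docker buildx build --platform linux/amd64,linux/arm/v7 \
        -t example/lms:latest --push .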

I think images are heavy because the LMS build process is not documented,
or the doc is not fresh enough. *I think this is the most prominent
issue.*
    
- My cluster environment is built with Buildroot, so I wanted to add an
  LMS package to Buildroot. But being very clumsy with makefiles in
  general, and not being able to understand the LMS build process (some
  binary parts are included; the player firmwares I understand, but
  there seems to be more), I had to give up.
  An "official" Buildroot LMS makefile would be neat, IMHO.
- The other seemingly popular way towards a clean .tar filesystem (or
  a Docker image) is to deliver an official Docker image that contains
  the compiler and build scripts. Run that and it pops out an executable
  image/tar; the same one every time, since the container's build
  environment starts afresh on every run. (A sketch follows this list.)
- In theory, with multi-stage Dockerfiles you can, in one file (hence
  one pull on Docker Hub), compile if necessary and then use the
  resulting image (without keeping gcc et al. in a buried layer of the
  image). I don't believe this is necessary; Docker is not on the way
  up. I would be wary about using too many "Dockerisms" in general
  (including Compose, that Python Hydra.)
  
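To illustrate the "official build container" idea: a minimal sketch,
where both the image name "lms-build" and the build script it would
contain are entirely hypothetical:

    # hypothetical: "lms-build" would be the official image with the toolchain
    # and LMS build scripts baked in; the output is a plain tarball on the host
    mkdir -p out
    docker run --rm -v "$PWD/out:/out" lms-build \
        sh -c 'build-lms && tar -C /opt/lms -cf /out/lms.tar .'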

*The second major issue a Dockerized LMS has, I think, is networking.*
Docker networking, last time I checked, is worthless for us outside of
the basic "host" mode or classic bridge networking. SDN designers seem
to play with addressing schemes (MAC and/or IP) with enthusiasm, but
that kind of creativity does not sit well with slimproto and LMS, which
expect MACs and IPs to be unique on the network. Broadcasts have to
work, and this is not a given with software-defined networks.
    
- Many LMS images out there use host networking. That is fine in
  that it allows including hardware players. It isn't great from a
  security standpoint, as this mode exposes all of the host's
  interfaces inside the container. In addition, in my experience LMS is
  fine when listening on all interfaces; restricting it to a specific
  interface or IP has always broken discovery for me.
- Some images/containers use routing and don't care that broadcast
  discovery is broken between hosts. To me, this is not acceptable.
- The option I chose, after trying a few things, was to use a named
  Docker bridge on each host. IP pools have to be computed for each host
  (e.g. on 192.168.1.0/24, host 1 uses IPs 192.168.1.1-10, host 2 IPs
  192.168.1.11-20, etc.) Then in the bridge I add a vxlan device with
  the same VNI and port on all hosts. The vxlan link acts as an
  interconnect cable between network switches: all containers see all
  MACs/IPs, and discovery works. IP pool splitting and vxlan creation
  have to be done in a host script (see the sketch after this list).
  
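Roughly, the per-host script boils down to something like this, here
for host 1 (bridge name, subnet split, VNI and the eth0 uplink are just
examples):

    # named bridge; host 1 hands out addresses from its own slice of the subnet
    # (other hosts get a different --ip-range, same --subnet)
    docker network create -d bridge \
        --subnet 192.168.1.0/24 \
        --ip-range 192.168.1.0/28 \
        -o com.docker.network.bridge.name=br-lms \
        lmsnet

    # vxlan device acting as the "interconnect cable"; same VNI and UDP port on
    # every host, multicast peer discovery assumed (unicast remotes work too)
    ip link add vxlan-lms type vxlan id 42 dstport 4789 group 239.1.1.1 dev eth0
    ip link set vxlan-lms master br-lms up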

*Integrating physical players* into a dedicated network is a bit of a
pita, and as usual wifi is the worst offender. Assuming a dedicated
network:
    
- One of the containers in the cluster has to be a DHCP server
  (working off an IP pool not distributed by Docker). I don't believe
  hardware SBs are zeroconf-aware, and I'm sure some software
  "audiophile" players aren't either, so I see a DHCP container as a
  must. dnsmasq is my favorite (and it is great at clustered DNS, too;
  option "loop-detect" can be a life saver.) A sketch follows the list.
- The bridge has to include a VLAN interface on the hosts that are
  near a network switch with hardware players connected. The switch has
  to define the chosen VLAN ID as PVID on the ports the hardware players
  are connected to, because none of the SB players are VLAN-aware,
  AFAIK. (Also sketched below.)
- I did not implement that; thinking about it, for a domestic setup,
  perhaps the bridged/vxlan network I described above could be a
  bridged/vlan network all along. Vxlan is less prone to local
  configuration clashes, but in this case that is not a concern.
- And now, to wifi: the easiest option would be to plug an AP into a
  VLAN, just like an SB3. The other options are to run an AP on a host
  or in a container: the wireless AP interface has to be bridged with
  the bridge Docker uses, and all is well.
            
  - Doing it on the host is one more external script, and likely to
    break if the host configuration changes.
  - Doing it in a container is much preferable for repeatability,
    except the container will require exclusive access to the wifi phy
    (and firmware blobs, I think.) Quick solution: run the AP container
    in host mode; not great.
    Better solution: use a host script to export the host's phy to the
    namespace of the AP container once it has started (on its side, the
    AP has to wait until its environment magically sprouts a wifi
    interface...); sketched below. This is feasible, but in my use case
    I had hosts with more than one phy, and my Linux 4.14 kernel has a
    tendency to mix up phys and export the wrong interface to the
    container (!) So, possible but fiddly to make work reliably, I
    think.
    However, achieving that is kind of cool, since you can add an AP on
    as many wifi-capable hosts as you want, thus optimizing wifi
    coverage. Also, hostapd now implements a multi-AP feature; it
    entails using an ethernet backhaul as a control channel between
    APs. Sounds exactly like what the doctor ordered, but I haven't
    looked into it.
        
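As a sketch, the DHCP container could be as small as a dnsmasq
invocation like this (interface name and pool are examples; the pool
must not overlap what Docker distributes):

    # DHCP for the dedicated bridge; the pool must stay outside the ranges
    # Docker itself hands out
    dnsmasq --no-daemon \
        --interface=eth0 \
        --bind-interfaces \
        --dhcp-range=192.168.1.100,192.168.1.199,12h \
        --loop-detect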
  
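The VLAN leg, on a host close to the players' switch, would look
something like this (uplink name, VLAN ID and bridge name are the same
examples as above):

    # 802.1Q sub-interface on the uplink, enslaved to the same Docker bridge
    ip link add link eth0 name eth0.42 type vlan id 42
    ip link set eth0.42 master br-lms up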

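And the phy hand-over from the "better solution" above boils down to
something like this, run on the host after the AP container has started
(container and phy names are examples, and picking the right phy is
exactly the fiddly part I mentioned):

    # move the host's wifi phy into the AP container's network namespace
    pid=$(docker inspect -f '{{.State.Pid}}' lms-ap)
    iw phy phy0 set netns "$pid"
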
Regarding *audio files*, I haven't explored the subject at all, but I
think the following options are possible:
    
- Map a volume container on the host that has some storage device
  connected, export it via NFS (Docker can do that), and have the LMS
  server consume the NFS share; see the sketch after this list.
  Host-side scripting might be necessary, e.g. to detect a USB drive.
- Do the same, but distributed. Here I think a discovery/merge/re-export
  container would be needed to abstract file distribution from the LMS
  container, and the LMS DB should be able to cope gracefully with
  files that vanish or reappear. I don't know how LMS copes with that
  currently.
  
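On the consuming side, a sketch of the first option using Docker's
local volume driver (server address, export path, volume name and
$LMS_IMAGE are all placeholders):

    # NFS-backed named volume, mounted into the LMS container
    docker volume create --driver local \
        --opt type=nfs \
        --opt o=addr=192.168.1.5,ro \
        --opt device=:/export/music \
        music
    docker run -d --name lms --network lmsnet -v music:/music "$LMS_IMAGE"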

*Web front-end discovery* can become a problem. In my case I use an
orchestration such that I don't know in advance which host will run LMS
on the dedicated network. Docker will do the NAT dance all right to
allow access from the physical network, but it does not advertise which
services are available on which host. I added a bit of mDNS advertising
on the host once the LMS container is confirmed to have started OK (a
sketch follows). "Lately", vulnerabilities have been found in
mDNS/DNS-SD, so the "Bonjour tab" is gone from browsers; iOS never had
it implemented. I have added a Bonjour plugin to my browsers, but I
don't think that is a viable solution.

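The host-side advertising amounts to something like this (container
name, service name and the 9000 web port are examples; avahi-utils has
to be installed on the host):

    # wait for the container, then publish the web UI over mDNS/DNS-SD;
    # avahi-publish has to keep running for the record to stay registered
    until [ "$(docker inspect -f '{{.State.Running}}' lms 2>/dev/null)" = "true" ]; do
        sleep 2
    done
    avahi-publish -s "Logitech Media Server" _http._tcp 9000 &
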
As I mentioned in my previous post, *LMS feels a bit of a monolith*. If
it could be split into different parts, perhaps load and performance
could be improved. IMHO anything "High Availability" (i.e. stateful
replication) is a bad idea, but cloning stateless processes, like
perhaps a web front-end, could make sense. To do this sort of thing I
would not rely on Docker's swarm/services: easy to get working, but it
comes with a lot of trade-offs. IMHO, swarm mode is another Dockerism
that will fade away.
I would rather look into setting up an independent "ingress" network
with its own load-balancing policy. I understand this is the kind of
stuff Traefik was designed for; perhaps service advertisement could be
done there.

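For what it's worth, with Traefik's Docker provider the routing side is
mostly labels on the LMS container. A very rough sketch, assuming
Traefik v2 label syntax, an example hostname and the same placeholders
as before:

    # Traefik's Docker provider picks up these labels and routes to port 9000
    docker run -d --name lms --network lmsnet \
        --label "traefik.http.routers.lms.rule=Host(\`lms.example.lan\`)" \
        --label "traefik.http.services.lms.loadbalancer.server.port=9000" \
        "$LMS_IMAGE"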

Apologies for the long, long post, HTH.
I subscribe to the thread, just in case.



3 SB 3 • Libratone Loop, Zipp Mini • iPeng (iPhone + iPad) • LMS 7.9
(linux) with plugins: CD Player, WaveInput, Triode's BBC iPlayer by bpa
• IRBlaster by Gwendesign (Felix) • Server Power Control by Gordon
Harris • Smart Mix, Music Walk With Me, What Was That Tune? by Michael
Herger • PowerSave by Jason Holtzapple • Song Info, Song Lyrics by
Erland Isaksson • AirPlay Bridge by philippe_44 • WeatherTime by Martin
Rehfeld • Auto Dim Display, SaverSwitcher, ContextMenu by Peter Watkins.
