On Mon, Mar 2, 2020 at 12:22 AM William Kenworthy <bi...@iinet.net.au> wrote:
>
> I thought lizardfs was much more community minded
> but you are characterising it as similar to moosefs - a taster offering
> by a commercial company holding back some of the non-essential but
> juicier features for the paid version - is that how you see them?
I don't see much of an active community. It seems like most actual development happens outside the public repo, with big code drops from the private team doing the work (which seems to be associated with a company) - a bit like the Android model. I'm sure they'll accept pull requests, but that isn't how most of the work is getting done.

It seems like the main difference between them and moosefs is that they're making more stuff FOSS to entice users over: shadow masters are FOSS, as opposed to just having metadata loggers, and HA is FOSS in the latest RC. So their model seems to be to trickle out the non-free stuff and make it free after a delay.

It really seems like Ceph is the best fully open platform out there, but the resource requirements just make it impractical. I have no doubt that it can scale FAR better with its design, but that design basically forces every node to be a bit of a powerhouse, versus LizardFS, where you just have one daemon with all the intelligence and the rest are just dumping files on disks. And you really don't need much CPU/RAM for the master if you're serving large files - the demands go up with IOPS and number of files, and multimedia is low on both.

> By the way, to keep to the rpi subject, I did have a rpi3B with a usb2
> sata drive attached but it was hopeless as a chunkserver impacting the
> whole cluster. Having the usb data flow and network data flow through
> the same hub just didn't go well

Hard drives plus 100Mbps LAN sharing a single USB 2.0 hub is definitely not a recipe for NAS success... It was only when I upgraded to UniFi switches that I first noticed how many of my hosts aren't gigabit, and at this point they're mostly Pis. They're nice little project boards, but for anything IO-intensive they're almost always the wrong choice. The RockPro64 I'm using has gigabit Ethernet plus a PCIe x4 slot plus USB 3.0, and as far as I can tell they don't have any contention.
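The Pi 3B problem Bill hit is easy to see with some back-of-the-envelope arithmetic. A rough sketch; the throughput figures below are my assumptions (roughly what USB 2.0 and a cheap SATA disk manage in practice), not measurements:

```python
# Why a Pi 3B chunkserver on USB storage struggles: on that board the
# Ethernet NIC *and* the disk both hang off one USB 2.0 hub.
USB2_EFFECTIVE_MBPS = 280  # ~480 Mbps signalling, roughly 280 Mbps usable (assumed)
LAN_MBPS = 100             # Pi 3B Ethernet is 100 Mbps, behind the same hub
DISK_MBPS = 800            # a modest HDD can stream ~100 MB/s = 800 Mbps (assumed)

# Serving a chunk means disk -> CPU -> NIC, so every byte crosses the
# shared USB 2.0 hub twice: once coming off the disk, once going out.
hub_ceiling = USB2_EFFECTIVE_MBPS / 2

# The real limit is the tighter of the hub ceiling and the NIC itself,
# and that's before protocol overhead and contention stalls.
best_case = min(hub_ceiling, LAN_MBPS)

print(f"hub-imposed ceiling: ~{hub_ceiling:.0f} Mbps")
print(f"best-case NAS throughput: ~{best_case:.0f} Mbps")
```

So even before any real-world overhead, the disk's 800 Mbps is irrelevant: the shared hub caps you at roughly 140 Mbps, and the 100 Mbps NIC, fighting the disk for the same hub, caps it further - which matches the "just didn't go well" experience.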
Maybe they're all on a PCIe bus or something, but obviously that can handle quite a bit. The only issue was that the RK3399 PCIe drivers were not the most robust in the kernel, but ayufan and the IRC channel were both helpful and his kernel branch is actively maintained, so I was able to get everything sorted (some delays were needed during link training to allow the board to initialize, and I had power issues in the beginning). Much of the RK3399 support in the kernel was pushed by Google for Chromebooks, and LSI HBAs weren't exactly on their list of things to test with those - not sure if the Chromebooks put much of anything on PCIe.

--
Rich