Giuseppe Ghibò wrote:
> The most difficult thing IMHO would be building from the same
> synchronized data. In that case you might choose a master server and
> several mirrors. The master might have multiple internet access points
> (e.g. from two providers) and would be the only one able to receive
> svn commits. Alternatively, a model without a master, inspired by the
> way UseNET works (or worked), though I think that is a lot more
> complicated. In that case you have two directions of feeding, and if
> two libraries are submitted by different users at nearly the same
> time, you need a system to check for coherency and raise alarms in
> some cases.
>
> IMHO one of the building problems was not massive automatic
> rebuilding, but avoiding bottlenecks for users when a build goes
> wrong.

I really like the concept of a distributed build system.
There is the problem that on most home ISP accounts, upload speed is
pitiful compared to download speed, by design - they don't want home
users running servers. However, it should be possible to design
something along the lines of the SETI project: have the prospective
servers rsync their captive build environments prior to a build, build
in a chroot, and return the results. I'd be happy to provide a machine
on my home LAN to do this, and it would be interesting to design the
control system for it.

Given many machines which have been rsync'd to the correct build
environment, a distributed *make* could export individual compiles, or
groups of compiles, to the cloud systems.

Just as we have "committer" rights now, we could have cloud build
rights involving key pairs, so that builders digitally sign what they
send back to identify the sender. With random distribution of work,
the chances of malice are considerably lower, and the key signature
ensures that you know who you got anything dodgy from.

_______________________________________________
Mageia-dev mailing list
[email protected]
https://www.mageia.org/mailman/listinfo/mageia-dev
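P.S. The key-pair signing scheme suggested above could be sketched
with plain openssl commands - file names, key size, and the dummy
result tarball here are illustrative, not anything agreed on the list:

```shell
# One-time setup on a cloud builder: generate a key pair; the public
# half would be registered with the master so results are attributable.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out builder.key
openssl pkey -in builder.key -pubout -out builder.pub

# Stand-in for a finished build result (a real builder would tar up
# the chroot's output directory).
echo "built packages" > result.tar

# The builder signs the result before sending it back...
openssl dgst -sha256 -sign builder.key -out result.tar.sig result.tar

# ...and the master verifies it against the registered public key,
# so anything dodgy is traceable to its sender.
openssl dgst -sha256 -verify builder.pub -signature result.tar.sig result.tar
```

The last command prints "Verified OK" when the signature matches, and
fails if the result was tampered with in transit or signed by an
unregistered key.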
