More comments. See below.

2005/11/25, Jamie Webb <[EMAIL PROTECTED]>:
> On Fri, Nov 25, 2005 at 11:46:01AM -0300, Thiago Arrais wrote:
> > In a nutshell: how about using a peer-to-peer network as a darcs repo?
>
> Something like this design (every user keeps their own repo on a web
> server, and everyone pulls from everyone else) is, I think, pretty much
> what David originally envisaged. I'm not sure there's a great deal to
> gain by additionally automating the pulls.
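Just to make "automating the pulls" concrete: on the pull side this needs
no more than a small script that each developer runs now and then (from
cron, say). The sketch below is untested and purely illustrative -- the
peers.txt file listing the other developers' repo URLs and the function
name are made up here -- the only darcs command it assumes is `darcs pull -a`.

#!/usr/bin/env python
# Illustrative sketch: pull from every peer listed in a plain-text file.
# Assumes each peer publishes an ordinary darcs repo (plain HTTP is enough
# for pulling) and that the `darcs` binary is on the PATH. The peers.txt
# file name is a made-up convention, not anything darcs itself knows about.
import subprocess

def pull_from_peers(peer_file="peers.txt"):
    with open(peer_file) as f:
        peers = [line.strip() for line in f
                 if line.strip() and not line.lstrip().startswith("#")]
    for url in peers:
        # `darcs pull -a <repo>` fetches and applies every patch we are missing.
        status = subprocess.call(["darcs", "pull", "-a", url])
        if status != 0:
            print("pull from %s failed; continuing with the rest" % url)

if __name__ == "__main__":
    pull_from_peers()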
Darcs was the first distributed revision control system I had heard of.
Before it I had used only centralized systems. When I heard about Darcs, I
was pretty amused (not to say skeptical) by the word 'distributed'. The
first thing that came to my mind was a peer-to-peer network. But then I
found that the distribution wasn't really about the storage; it was about
the responsibility for consistency. Not a single machine was responsible
for controlling changes and managing conflicts, but a whole set of
machines. Patches still needed to be uploaded to a central server for
publishing, though. I think we could do away with that last bit of
centralization by using a peer-to-peer network for storing the repo.

> > The idea is simple. Instead of having one central repository, we would
> > have a number of interconnected machines (some would call it a
> > cluster), i.e. developer machines that are interested in having access
> > to the repo. Changes made (patches recorded) to one machine would get
> > distributed to the others using a peer-to-peer network.
>
> You can probably achieve this at the moment using a record post-hook
> which pushes to every developer.

Hmmm... that is quite right. But that would place an unfair network load
on one machine, which has to upload to every single other machine. Why
don't we ask the first machine that gets the patch from us to help us
spread it around? And why not do it automatically? (A rough sketch of what
I mean follows at the end of this message.)

> > That way recording changes
> > would be a local operation and would of course be faster than the
> > network operation needed for centralized systems such as CVS.
>
> But the network operation would still be necessary. Just asynchronous.
> It's no faster than 'darcs push -a &'.

Agreed. The main difference is that the user won't even notice the network
usage: all he has to do is say that he wants to record (or publish --
maybe we need to split those two concepts) a change, and the system will
take care of synchronizing the repos.

> > Of course there are the usual issues
> > with conflict resolution.
>
> That's one reason why automatic distribution might be a bad idea.

I am not so sure it would be a completely bad idea. My first idea is to
have two repos on each machine, one published and the other one private. I
think this could avoid a lot of trouble, but I also think a single repo
would be sufficient (maybe the two-repo approach would just be a 'best
practice'). I am not standing on really firm ground here, though; maybe
someone with more experience with RCSs (hello, patch theorists of this
group) could enlighten us at this point. What would be the main issues
with automatically distributing patches? What can we do about conflicting
patches?

Cheers,

Thiago Arrais
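P.S. To make the "help us spread the patch around" idea a bit more
concrete, here is a rough, completely untested sketch of the kind of
script a record post-hook (as Jamie suggests) could call. Everything
specific in it is made up for illustration -- the peers.txt file, the
fan-out of three, the script itself -- only `darcs push -a` is a real
command, and note that pushing needs write access to the target (a local
path or an ssh-style address), unlike pulling, which works against a
plain HTTP repo.

#!/usr/bin/env python
# Illustrative gossip-style push. The idea: instead of uploading to every
# single developer, push to a few random peers and rely on their own hooks
# to relay the patch further. peers.txt and FANOUT are made-up details.
import random
import subprocess

FANOUT = 3  # how many peers this machine pushes to directly

def gossip_push(peer_file="peers.txt"):
    with open(peer_file) as f:
        peers = [line.strip() for line in f if line.strip()]
    # A small random subset spreads the upload cost across the group.
    targets = random.sample(peers, min(FANOUT, len(peers)))
    for url in targets:
        # `darcs push -a <repo>` sends only the patches the target does not
        # have yet, so a patch that already reached it via another peer is
        # simply not resent.
        subprocess.call(["darcs", "push", "-a", url])

if __name__ == "__main__":
    gossip_push()

If every machine that receives a patch runs the same kind of script from
its own hook when the patch is applied (assuming darcs lets you hook that
step too -- I have not checked), the patch spreads through the whole group
without any single machine having to upload to everybody.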
