On 27 Dec 2017, at 4:05pm, Warren Young <war...@etr-usa.com> wrote:

> Fossil has that problem, too.  Most DVCSes do, because by their very nature, 
> they want to clone the entire history of the whole project to every machine, 
> then make a second copy of the tip of each working branch you check out.  
> That’s a lot of I/O for a big, old, project.

Please allow for my ignorance of source-control systems here.

Apple recently moved to APFS, a file system that supports file and folder 
cloning.  If you copy a file or folder, it doesn’t duplicate the data; it just 
creates a pointer to the existing copy.  However, if you then change one of 
the copies (e.g. change one byte of a huge file), it copies only the affected 
blocks at that point, so that only that one copy of the file has changed.

I understand that ZFS does this too, though I’ve never used ZFS.

Would running git/fossil on a filesystem like that solve the problem?

Simon.
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
