On 10/23/2016 13:59, Artur Shepilko wrote:
It's not clear what type of project the OP is trying to set up, whether it's a programming-related project or a general document-repo type.

I am doing research where we may do a dozen iterations a day, generating about 10-100 MB of files per iteration. Within a short period of time we will have well over a terabyte of information. My current workflow uses a database (it was FileMaker, but I switched to SQLite a while ago) like a library card catalog, with notes and status fields. The files are a combination of analysis code, text, CSV, binary, and documentation. Occasionally we find something and want to go back to older data and re-run the experiment with a few tweaks, or modify the analysis program. I have information overload, and finding stuff, even with it cataloged in a database, is becoming a significant chore. So I am trying out different techniques, such as using Fossil, to track the experiments and the large volume of information.
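For anyone picturing the "card catalog" approach: a minimal sketch of that kind of SQLite catalog might look like the following. The table and column names (experiments, data_path, status, notes) and the sample row are assumptions for illustration, not the actual schema.

```python
import sqlite3

# Hypothetical "card catalog" table: one row per experiment iteration,
# pointing at where the generated files live on disk.
conn = sqlite3.connect(":memory:")  # use a file path for a persistent catalog
conn.execute("""
    CREATE TABLE experiments (
        id        INTEGER PRIMARY KEY,
        run_date  TEXT,
        data_path TEXT,   -- directory holding this iteration's files
        status    TEXT,   -- e.g. 'raw', 'analyzed', 'archived'
        notes     TEXT
    )
""")
conn.execute(
    "INSERT INTO experiments (run_date, data_path, status, notes) "
    "VALUES (?, ?, ?, ?)",
    ("2016-10-23", "runs/2016-10-23/iter-07", "analyzed",
     "worth re-running with tweaked parameters"),
)

# Going back to older runs becomes a query instead of a manual search:
rows = conn.execute(
    "SELECT data_path, notes FROM experiments WHERE status = 'analyzed'"
).fetchall()
print(rows)
```

The catalog only stores metadata and paths; the bulky data files themselves stay on disk (or in a Fossil repository), which keeps the database small even as the file collection grows past a terabyte.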

-------------------------
Scott Doctor
sc...@scottdoctor.com
-------------------------


_______________________________________________
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users
