It seems to me that ssdeep (http://ssdeep.sourceforge.net/) would be a good
basis for such a tool. It computes hashes for blocks of data in a file,
so you can compare portions of that file to other files, or in this case,
to other versions of the same file. This gives you an ordered list of hashes
for a file, presumably smaller than the file itself, and in text form. If
you add some metadata alongside this list, like the full path+filename,
permissions, and so on, you have everything you need to deal with the file.
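To make the idea concrete, here is a minimal sketch of building such a
hash-list-plus-metadata manifest. It substitutes fixed-size blocks and
SHA-256 for ssdeep's fuzzy hashing just to keep it short; the function
name `make_manifest` and the block size are my own hypothetical choices:

```python
import hashlib
import os
import stat

BLOCK_SIZE = 64 * 1024  # hypothetical block size, tune as needed

def make_manifest(path, block_size=BLOCK_SIZE):
    """Return a text manifest: path, permissions, then ordered block hashes."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    mode = stat.S_IMODE(os.stat(path).st_mode)
    lines = [path, oct(mode)] + hashes
    return "\n".join(lines)
```

The manifest is small, text-based, and diffs cleanly, so it is the thing
that would actually get committed in place of the large file.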
Now we just need two components. The first is a way to store blocks of data.
Since we already have hashes for those blocks, this is essentially a
key-value store. Fortunately such stores are pretty popular these days, and
some experimenting can determine which one is best suited to the job.
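The key-value store could be anything; as a sketch, even a directory on
disk works, with each block stored in a file named after its hash. The
class name `BlockStore` is hypothetical; SHA-256 again stands in for
whatever block identifier the real tool would use:

```python
import hashlib
import os

class BlockStore:
    """Minimal content-addressed store: the key is the SHA-256 of the block."""

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def put(self, block):
        key = hashlib.sha256(block).hexdigest()
        path = os.path.join(self.root, key)
        if not os.path.exists(path):  # duplicate blocks are stored only once
            with open(path, "wb") as f:
                f.write(block)
        return key

    def get(self, key):
        with open(os.path.join(self.root, key), "rb") as f:
            return f.read()
```

Because the key is derived from the content, de-duplication falls out for
free: putting the same block twice writes it once.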
The second is a set of scripts, installed as SCM pre- and
post-(commit/checkout) hooks, which convert between the large binaries and
the metadata format described above. This minimizes check-in time because
only blocks not already in the db need to be added, and checkout could be
fast too thanks to locally cached blocks.
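What those hooks would do can be sketched as a pair of functions: check-in
splits the file and adds only missing blocks, checkout reassembles the file
from its hash list. A plain dict stands in for the block store here, and
the function names are hypothetical:

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # hypothetical block size

def checkin(data, store):
    """Split data into blocks; add only blocks the store lacks.
    Returns the ordered hash list, i.e. the metadata to commit."""
    hashes = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        key = hashlib.sha256(block).hexdigest()
        store.setdefault(key, block)  # skip blocks already present
        hashes.append(key)
    return hashes

def checkout(hashes, store):
    """Reassemble the original file from its ordered block hashes."""
    return b"".join(store[k] for k in hashes)
```

Checking in an unchanged (or mostly unchanged) file transfers almost
nothing, which is the point of the exercise.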
A few gotchas I know about with this approach: any file format that uses
internal compression will have issues in the redundancy department; that's
just the nature of compression. Similarly, the overhead generated by even
the fastest versions of this approach could make it too slow. A final
gotcha comes from potential hash collisions, but that seems a remote
possibility to me, and there are probably ways around it.
None of those gotchas are enough to make me not want to try it... anyone
else interested?
This is actually the basis of some data de-duplication techniques, which is
where most of my idea comes from. Similarly, instead of hashes, such a tool
could be built on other common block identifiers, such as those found by
compression techniques. No matter what algorithm finds the blocks, the rest
of the idea stays exactly the same :)
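As an illustration of an alternative block-finding algorithm, here is a toy
content-defined chunker: it declares a block boundary wherever a rolling
checksum matches a bit mask, so boundaries move with the content instead of
sitting at fixed offsets. Real de-duplication systems use a windowed
Rabin-style fingerprint; this is only a sketch, and `mask` and `min_size`
are arbitrary choices:

```python
def chunk_boundaries(data, mask=0x0FFF, min_size=256):
    """Return end offsets of content-defined chunks in `data`.

    A boundary is declared when the low bits of a simple running
    checksum match `mask`, subject to a minimum chunk size.
    """
    boundaries = []
    h, start = 0, 0
    for i, byte in enumerate(data):
        h = ((h * 31) + byte) & 0xFFFFFFFF
        if i - start >= min_size and (h & mask) == mask:
            boundaries.append(i + 1)
            h, start = 0, i + 1
    if not boundaries or boundaries[-1] != len(data):
        boundaries.append(len(data))
    return boundaries
```

Because boundaries depend only on nearby bytes, inserting data early in a
file shifts only the chunks around the edit, and the rest still hash the
same, which is what makes this attractive for de-duplication.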
On Wed, Sep 29, 2010 at 11:27 PM, Joey Hess <j...@kitenet.net> wrote:
> My pain point for checking files into git is around 64 mb, and it's
> partly a disk-space based pain, so stuff like bup or git large file
> patches don't help much. So despite having multi-gb git repos,
> I still have no way to version ie, videos.
> It occurs to me that I'd be reasonably happy with something that let me
> manage the filenames (and possibly in some cases content checksums) of
> large files without actually storing their content in .git/.
> So I could delete files, move them around, rename them, and add new
> ones, and commit the changes (plus take some other action to transfer
> file contents) to propagate those actions to other checkouts.
> Does anyone know of any tools in that space?
> see shy jo
> vcs-home mailing list