Hi
Mostly as a proof of concept, I've created two scripts to sign and
verify Git checkouts. (I say "checkouts" because, both for simplicity
and probably for trust, the signature is based on the working-directory
contents, not on the tree referred to by the signed commit.) Like some
other such solutions, this adds
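A minimal sketch of how signing working-directory contents might look (my guess at the approach, not the actual scripts; the manifest filename and the use of sha256sum/gpg are assumptions):

```shell
# Hypothetical sketch: sign the checkout by hashing each tracked
# file's current working-tree content into a manifest, then signing
# the manifest.
cd /path/to/repo            # illustrative path

# Build a stable manifest of working-tree contents (order fixed by sort,
# NUL-separated so unusual filenames are safe).
git ls-files -z | sort -z | xargs -0 sha256sum > checkout.manifest

# Sign the manifest; verification checks the signature, then re-checks
# every file hash against the working tree.
gpg --detach-sign checkout.manifest
gpg --verify checkout.manifest.sig checkout.manifest
sha256sum --check --quiet checkout.manifest
```

Hashing the working tree rather than the commit's tree is what makes this verify what is actually on disk.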
Hi
I've got a repository where "git log --raw > _somefile" used to take a
few seconds, but after an attempt at merging some commits that were
collected in a clone of the same repo created about a year ago, I
noticed that this command was now taking 3 minutes 7 seconds. "git
gc", "git
2014-02-18 9:45 GMT+00:00 Duy Nguyen :
> Christian can try "git repack -adf"
That's what I already mentioned in my first mail: it's what I used to
fix the problem.
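For reference, that command rewrites all packfiles into one and, because of -f, recomputes every delta from scratch instead of reusing existing ones, which is what discards pathological delta chains like the ones described above:

```shell
# -a: repack all objects into a single pack
# -d: delete the old, now-redundant packs
# -f: pass --no-reuse-delta, i.e. recompute all deltas from scratch
git repack -a -d -f

# Inspect the result (pack count and sizes).
git count-objects -v
```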
Here are some 'hard' numbers, FWIW:
- both ~/scr and swap are on the same SSD;
$ free
total used free sha
2014-02-19 10:14 GMT+00:00 Duy Nguyen :
> Christian, if you
> want to experiment this, update MAX_DELTA_CACHE in sha1_file.c and
> rebuild.
I don't have the time right now. (Perhaps next week?)
2014-02-20 23:35 GMT+00:00 Duy Nguyen :
> does it make sense to apply
> --depth=250 for old commits only
Just wondering: would it be difficult to fix the problems that lead to
worse-than-linear slowdown as --depth grows? (I.e., an adaptive
cache/hash-table size.) If the performance difference between