Fwd: Question: .idx without .pack causes performance issues?
Hi all,

(Re-sending because my first e-mail was rejected due to HTML formatting.)

While debugging a git fetch performance problem on Windows I came across this thread. The problem in our case was also caused by orphaned .idx files.

On Tue, Jul 21, 2015 at 9:15 PM, Junio C Hamano wrote:
>
> Junio C Hamano writes:
>
> > I however do not think that we mark the in-core structure that
> > corresponds to an open ".idx" file in any way when such a failure
> > happens. If we really cared enough, we could do so, saying "we know
> > there is .idx file, but do not bother looking at it again, as we
> > know the corresponding .pack is missing", and that would speed things
> > up a bit, essentially bringing us back to a sane situation without
> > any ".idx" without corresponding ".pack".
> >
> > I do not think it is worth the effort, though. It would be more
> > fruitful to find out how you end up with ".idx exists but not
> > corresponding .pack" and if that is some systemic failure, see if
> > there is a way to prevent that from happening in the first place.
>
> While I still think that it is more important to prevent such a
> situation from occurring in the first place, ignoring .idx that lack
> corresponding .pack should be fairly simple, perhaps like this.

I have observed the following: if garbage collection is triggered during a git fetch, I always get messages like this:

$ git fetch origin
> Auto packing the repository for optimum performance. You may also
> run "git gc" manually. See "git help gc" for more information.
> Counting objects: 396468, done.
> Delta compression using up to 12 threads.
> Compressing objects: 100% (98683/98683), done.
> Writing objects: 100% (396468/396468), done.
> Total 396468 (delta 289422), reused 395212 (delta 288289)
> Unlink of file
> '.git/objects/pack/pack-343b6cfdf58171f53c235b900a75d09bd9219e06.pack'
> failed. Should I try again? (y/n) n
> Unlink of file
> '.git/objects/pack/pack-343b6cfdf58171f53c235b900a75d09bd9219e06.idx'
> failed. Should I try again? (y/n) n
> Unlink of file
> '.git/objects/pack/pack-63a6cb5e2a9f72eea72b02ac74a167e1d71d417f.idx'
> failed. Should I try again? (y/n) n
> Unlink of file
> '.git/objects/pack/pack-9b616a2501bb9c13acecf3e981c39868dd2f5ff7.pack'
> failed. Should I try again? (y/n) n
> Unlink of file
> '.git/objects/pack/pack-9b616a2501bb9c13acecf3e981c39868dd2f5ff7.idx'
> failed. Should I try again? (y/n) n
> Checking connectivity: 396468, done.

Windows has the property that an open file can't be deleted. If that is the cause here, git fetch may need to close the pack files before gc tries to unlink them.

I can't remember observing this problem when running git gc by itself.

In the repos where we have problems I observed both unnecessary .pack files and .idx files, but far more .idx files. Maybe, over time, the unnecessary .pack files have been cleaned up but not the .idx files? If so, this would explain how we get into this situation.

I have been testing this with very old git versions on Windows (1.7.4 and 1.8.4), so sorry if these problems are already fixed in later versions.

- Thomas
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
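
[Editor's note: Junio's "ignore .idx files that lack a corresponding .pack" idea can be illustrated outside of git. Below is a minimal sketch in Python of the directory scan involved; git's actual implementation is in C in its pack-handling code, and the function name here is invented.]

```python
import os


def orphaned_idx_files(pack_dir):
    """Return the .idx files in pack_dir that have no matching .pack.

    A sketch of the check git could perform when scanning
    .git/objects/pack; such orphans would then be skipped instead of
    being probed over and over during object lookup.
    """
    entries = set(os.listdir(pack_dir))
    return sorted(
        name for name in entries
        if name.endswith(".idx")
        and name[: -len(".idx")] + ".pack" not in entries
    )
```

Running this over a pack directory left behind by the failed unlinks above would list exactly the stale .idx files that slow down object lookup.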
Re: git-p4: Importing a Git repository into Perforce without rebasing
Hi,

On Tue, Feb 19, 2013 at 3:40 AM, Russell Myers <mez...@russellmyers.com> wrote:
> I'm trying to take a Git repository which has never been in Perforce
> and push it to Perforce and having difficulty.
[...]
> I know that I could create another Git repository that has some
> commits in it cloned from Perforce and rebase on top of that; however,
> the repository I'm trying to import is rather large and rebasing would
> require me to change many merge commits. I'd like to avoid doing this.
> The repository has many thousands of commits in it.

So your history is not linear and contains merges.

> In short my question is this: Using git-p4, is there a way to push a
> Git repository into Perforce without rebasing on top of commits coming
> from Perforce?

No, this is not supported. Non-linear history would be a problem for git-p4 too, so that alone wouldn't solve your problem: git-p4 does not have the logic needed to submit merges back to Perforce.

- Thomas
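
[Editor's note: whether a history is linear is easy to check up front. A small sketch that counts merge commits by parsing `git rev-list --parents HEAD` style output; the sample lines below are made up, not from a real repository.]

```python
def count_merges(rev_list_parents_output):
    """Count merge commits in `git rev-list --parents HEAD` output.

    Each output line is a commit hash followed by its parent hashes;
    a commit with two or more parents is a merge, which git-p4 cannot
    submit back to Perforce.
    """
    return sum(
        1 for line in rev_list_parents_output.splitlines()
        if len(line.split()) > 2  # commit itself plus 2+ parents
    )


# Made-up sample history: c3 merges branch commit b1 into c2.
sample = "c3 c2 b1\nc2 c1\nb1 c1\nc1\n"
```

In a real repository, `git rev-list --merges --count HEAD` answers the same question directly.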
Re: [PATCH] Replace git-cvsimport with a rewrite that fixes major bugs.
On Wed, Jan 2, 2013 at 5:41 PM, Eric S. Raymond <e...@thyrsus.com> wrote:
> Martin Langhoff <martin.langh...@gmail.com>:
> > Replacement with something more solid is welcome, but until you are
> > extremely confident of its handling of legacy setups... I would
> > still provide the old cvsimport, perhaps in contrib.
>
> I am extremely confident. I built a test suite so I could be.

I too am glad to see some work go into the cvsimport script. So just to clear things up, previously you said this:

> Yes, they must install an updated cvsps.

This is the problem, and one that is easily solved by just keeping a copy of the old command.

Remember that for many users of these tools it doesn't matter if the history is correct or not, as long as the head checkout contains the right files and they are able to submit new changes. With this definition of "works", git-cvsimport is not that broken, I think.

Cheers,
- Thomas
Re: Millisecond precision in timestamps?
On Wed, Nov 28, 2012 at 8:29 AM, Junio C Hamano <gits...@pobox.com> wrote:
> Jeff King <p...@peff.net> writes:
> > There is room for new headers, and older versions of git will
> > ignore them. You could add a new committer-timestamp field that
> > elaborates on the timestamp included on the committer line. Newer
> > versions of git would respect it, and older versions would fall
> > back to using the committer timestamp.
> >
> > But I really wonder if anybody actually cares about adding
> > sub-second timestamp support, or if it is merely because SVN has it.
>
> Roundtrip conversions may benefit from sub-second timestamps, but
> personally I think negative timestamps are more interesting and of
> practical use. Prehistoric projects need them even if they intend to
> switch to Git, never to go back to their original tarballs and
> collection of RCS ,v files.

If roundtripping to other version control systems is an argument, adding sub-second timestamps could create as many problems as it solves. For example, I've been using the hg-git bridge, and it supports roundtripping between git and mercurial today (for most repos I've tried, anyway). I may have missed something, but this could imply that mercurial doesn't care about sub-second timestamps either. If so, and if git suddenly were to record them, it would no longer be as straightforward to represent git history in hg. In my opinion it would be a shame to sacrifice this compatibility just to reduce the distance to svn, which is much larger anyway.

- Thomas
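
[Editor's note: for context, the committer line being discussed stores whole seconds since the epoch plus a timezone offset, with no room for fractions. A small sketch parsing such a line; the name and values in the sample are invented, but the `committer Name <email> <epoch> <tz>` layout is git's commit object format.]

```python
def parse_committer_line(line):
    """Parse 'committer Name <email> <epoch> <tz>' into (epoch, tz).

    The epoch field is an integer count of seconds: the format has no
    place to put sub-second precision, which is why a new header would
    be needed to carry it.
    """
    name_email, epoch, tz = line.rsplit(" ", 2)
    return int(epoch), tz


sample = "committer A U Thor <author@example.com> 1354089000 +0100"
```

Any sub-second part of a source system's timestamp (e.g. 1354089000.123 from SVN) is simply truncated when written into this field.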
Re: Millisecond precision in timestamps?
On Wed, Nov 28, 2012 at 9:44 AM, Felipe Contreras <felipe.contre...@gmail.com> wrote:
> > If roundtripping to other version control systems is an argument,
> > adding sub-second timestamps could potentially create as many
> > problems as it solves. For example, I've been using the hg-git
> > bridge, and it supports roundtripping between git and mercurial
> > today (for most repos I've tried anyway). I may have missed
> > something, but this could imply that mercurial doesn't care about
> > sub-second timestamps either. If so, and if git suddenly were to
> > record it, it would no longer be as straightforward to represent
> > git history in hg.
>
> I'm not entirely sure. The API seems to return a float for the time,
> but at least as far as I can see, it never has any decimals anyway.
>
> But it doesn't really matter, mercurial doesn't have committer
> information either. This is solved by tools like hg-git by storing
> the information in an 'extra' field, which can store anything.

True. For many commits though, hg-git doesn't need any extra fields, as far as I've seen. A timestamp incompatibility would require extra info on every commit.

> Either way, I don't see the point in changing git's commit format for
> external tools. The git-notes functionality works just fine for that,
> it just needs to be attached in the relevant places, like
> 'git fast-export'.

I agree. Even encoding info in the commit message works fine, and git-svn already does that.

> BTW. Have you checked git's native support for hg?[1]

That's been added after I played with this last, I'll have a look.

Cheers,
Thomas
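
[Editor's note: the 'extra' mechanism mentioned above is essentially a per-commit string-to-string dictionary on the Mercurial side. A rough, hypothetical sketch of how a bridge could stash git-only metadata there; the key names are invented for illustration and are not hg-git's actual keys.]

```python
def stash_git_metadata(extra, committer_line, subsecond_ns=None):
    """Copy git-only commit metadata into an hg-style 'extra' dict.

    `extra` maps strings to strings. Hypothetical key names: a real
    bridge defines its own. Note that a sub-second timestamp would
    force an extra entry on *every* commit, not just the unusual ones.
    """
    extra = dict(extra)  # don't mutate the caller's dict
    extra["git-committer"] = committer_line
    if subsecond_ns is not None:
        extra["git-timestamp-ns"] = str(subsecond_ns)
    return extra
```

This illustrates the roundtripping cost Thomas describes: commits whose metadata maps cleanly need no extras at all, while a format change in git would require one on every commit.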
Re: git-p4 clone @all error
Hi,

Sorry, forgot to reply-to-all, here is my response again:

On Tue, Oct 30, 2012 at 11:44 AM, Arthur <a.fou...@amesys.fr> wrote:
> The problem:
>
> Importing revision 7727 (100%)Traceback (most recent call last):
>   File "/usr/bin/git-p4", line 3183, in <module>
>     main()
>   File "/usr/bin/git-p4", line 3177, in main
>     if not cmd.run(args):
>   File "/usr/bin/git-p4", line 3048, in run
>     if not P4Sync.run(self, depotPaths):
>   File "/usr/bin/git-p4", line 2911, in run
>     self.importChanges(changes)
>   File "/usr/bin/git-p4", line 2618, in importChanges
>     self.initialParent)
>   File "/usr/bin/git-p4", line 2198, in commit
>     epoch = details["time"]
> KeyError: 'time'

Are you permanently converting a project, or are you planning to continue submitting to Perforce with git-p4?

I have seen similar bugs myself when using the --detect-branches option. The branch detection in git-p4 is flaky anyway: it is limited in what it can handle, and it used to require correct Perforce branch specs at least, so I would recommend not using it unless you know what it is doing under the hood.

Instead I would just clone a single branch at a time (drop the --detect-branches) and work on that. I do this even in the rare cases when I need more than one Perforce branch in the same git repo - there are other ways to achieve the same thing.

- Thomas
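
[Editor's note: the crash in the traceback is a plain missing-key lookup: `details["time"]` assumes every change description fetched from Perforce carries a 'time' field. A hedged sketch of the failure mode and a defensive variant; the dict below only imitates the `details` structure git-p4 builds, it is not real p4 output.]

```python
# Imitation of the dict git-p4 builds from `p4 describe` output;
# in the failing case above, the 'time' key is missing entirely.
broken_details = {"change": "7727", "desc": "some change"}


def change_epoch(details, fallback=0):
    """Return the change's epoch time, tolerating a missing field.

    git-p4 does the bare `details["time"]` lookup and raises KeyError;
    a fallback avoids the crash at the cost of a wrong commit date,
    which may or may not be acceptable for a one-off conversion.
    """
    try:
        return int(details["time"])
    except KeyError:
        return fallback
```

Whether papering over the missing field is acceptable depends on the answer to the question above: fine for a throwaway import, risky if you keep syncing with Perforce afterwards.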