[PATCH 0/4] attr directory matching regression
I think the fix is something like this. There is still one thing I'd
like to do: make this code not rely on NUL for terminating the
patterns. That should remove the ugly p[len] = '\0' in
prepare_attr_stack() (4/4) and the reallocation in add_exclude() (in
current code). But let's deal with the regression first.

Nguyễn Thái Ngọc Duy (4):
  wildmatch: do not require text to be NUL-terminated
  attr.c: fix pattern{,len} inconsistency in struct match_attr
  dir.c: make match_{base,path}name respect {basename,path}len
  attr.c: fix matching subdir without the trailing slash

 attr.c                          | 11 ++-
 dir.c                           | 13 -
 dir.h                           |  2 +-
 t/t5002-archive-attr-pattern.sh |  6 ++
 wildmatch.c                     | 43 -
 wildmatch.h                     | 11 +--
 6 files changed, 59 insertions(+), 27 deletions(-)

--
1.8.2.82.gc24b958
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH 1/4] wildmatch: do not require text to be NUL-terminated
This may be helpful when we just want to match a part of text.
nwildmatch can be used for this purpose.

Signed-off-by: Nguyễn Thái Ngọc Duy pclo...@gmail.com
---
 wildmatch.c | 43 +--
 wildmatch.h | 11 +--
 2 files changed, 34 insertions(+), 20 deletions(-)

diff --git a/wildmatch.c b/wildmatch.c
index 7192bdc..939bac8 100644
--- a/wildmatch.c
+++ b/wildmatch.c
@@ -52,7 +52,8 @@ typedef unsigned char uchar;
 #define ISXDIGIT(c) (ISASCII(c) && isxdigit(c))
 
 /* Match pattern "p" against "text" */
-static int dowild(const uchar *p, const uchar *text, unsigned int flags)
+static int dowild(const uchar *p, const uchar *text,
+		  const uchar *textend, unsigned int flags)
 {
 	uchar p_ch;
 	const uchar *pattern = p;
@@ -60,8 +61,13 @@ static int dowild(const uchar *p, const uchar *text, unsigned int flags)
 	for ( ; (p_ch = *p) != '\0'; text++, p++) {
 		int matched, match_slash, negated;
 		uchar t_ch, prev_ch;
-		if ((t_ch = *text) == '\0' && p_ch != '*')
-			return WM_ABORT_ALL;
+		if (text >= textend) {
+			if (p_ch != '*')
+				return WM_ABORT_ALL;
+			else
+				t_ch = '\0';
+		} else
+			t_ch = *text;
 		if ((flags & WM_CASEFOLD) && ISUPPER(t_ch))
 			t_ch = tolower(t_ch);
 		if ((flags & WM_CASEFOLD) && ISUPPER(p_ch))
@@ -101,7 +107,7 @@ static int dowild(const uchar *p, const uchar *text, unsigned int flags)
 				 * both foo/bar and foo/a/bar.
 				 */
 				if (p[0] == '/' &&
-				    dowild(p + 1, text, flags) == WM_MATCH)
+				    dowild(p + 1, text, textend, flags) == WM_MATCH)
 					return WM_MATCH;
 				match_slash = 1;
 			} else
@@ -113,8 +119,9 @@ static int dowild(const uchar *p, const uchar *text, unsigned int flags)
 				/* Trailing "**" matches everything.  Trailing "*" matches
 				 * only if there are no more slash characters. */
 				if (!match_slash) {
-					if (strchr((char*)text, '/') != NULL)
-						return WM_NOMATCH;
+					for (; text < textend; text++)
+						if (*text == '/')
+							return WM_NOMATCH;
 				}
 				return WM_MATCH;
 			} else if (!match_slash && *p == '/') {
@@ -123,16 +130,15 @@ static int dowild(const uchar *p, const uchar *text, unsigned int flags)
 				 * with WM_PATHNAME matches the next
 				 * directory
 				 */
-				const char *slash = strchr((char*)text, '/');
-				if (!slash)
+				for (; text < textend; text++)
+					if (*text == '/')
+						break;
+				if (text == textend)
 					return WM_NOMATCH;
-				text = (const uchar*)slash;
 				/* the slash is consumed by the top-level for loop */
 				break;
 			}
-			while (1) {
-				if (t_ch == '\0')
-					break;
+			while (text < textend) {
 				/*
 				 * Try to advance faster when an asterisk is
 				 * followed by a literal. We know in this case
@@ -145,18 +151,18 @@ static int dowild(const uchar *p, const uchar *text, unsigned int flags)
 					p_ch = *p;
 					if ((flags & WM_CASEFOLD) && ISUPPER(p_ch))
 						p_ch = tolower(p_ch);
-					while ((t_ch = *text) != '\0' &&
+					while (text < textend &&
 					       (match_slash || t_ch != '/')) {
 						if ((flags & WM_CASEFOLD) && ISUPPER(t_ch))
 							t_ch = tolower(t_ch);
 						if (t_ch == p_ch)
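The point of the patch is that a matcher taking an explicit end pointer
can match a prefix of a larger string in place. A minimal Python sketch
of that idea (illustrative only, not git's C wildmatch; `nmatch` is a
made-up name standing in for nwildmatch):

```python
import fnmatch

def nmatch(pattern, text, textlen):
    # Match pattern against only the first textlen characters of text,
    # without requiring a NUL (or end-of-string) at that position.
    return fnmatch.fnmatchcase(text[:textlen], pattern)

# A part of "foo/bar" can be matched without copying or truncating it:
assert nmatch("foo", "foo/bar", 3)       # matches the "foo" prefix
assert not nmatch("foo", "foo/bar", 4)   # "foo/" is not "foo"
```

The C version achieves the same effect by comparing the cursor against
textend instead of testing each byte for '\0'.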
[PATCH 2/4] attr.c: fix pattern{,len} inconsistency in struct match_attr
When parse_exclude_pattern detects EXC_FLAG_MUSTBEDIR, it sets
patternlen _not_ to include the trailing slash and expects the caller
to trim it. Of the two callers, add_exclude() does; parse_attr_line()
does not. Because of that, after parse_attr_line() returns, we may
have pattern "foo/" but its length is reported as 3. Some functions do
not care about patternlen and will see the pattern as "foo/" while
others may see it as "foo".

This patch makes patternlen reflect the true length of pattern. This
is a bandage patch that's required for the next patch to pass the test
suite as that patch will rely on patternlen's correctness. The true
fix comes in the patch after the next one.

Signed-off-by: Nguyễn Thái Ngọc Duy pclo...@gmail.com
---
 attr.c | 2 ++
 dir.h  | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/attr.c b/attr.c
index e2f9377..1818ba5 100644
--- a/attr.c
+++ b/attr.c
@@ -255,6 +255,8 @@ static struct match_attr *parse_attr_line(const char *line, const char *src,
 				      &res->u.pat.patternlen,
 				      &res->u.pat.flags,
 				      &res->u.pat.nowildcardlen);
+		if (res->u.pat.flags & EXC_FLAG_MUSTBEDIR)
+			res->u.pat.patternlen++;
 		if (res->u.pat.flags & EXC_FLAG_NEGATIVE) {
 			warning(_("Negative patterns are ignored in git attributes\n"
 				  "Use '\\!' for literal leading exclamation."));
diff --git a/dir.h b/dir.h
index c3eb4b5..dc63fc8 100644
--- a/dir.h
+++ b/dir.h
@@ -40,7 +40,7 @@ struct exclude_list {
 		struct exclude_list *el;
 		const char *pattern;
-		int patternlen;
+		int patternlen; /* must equal strlen(pattern) */
 		int nowildcardlen;
 		const char *base;
 		int baselen;
--
1.8.2.82.gc24b958
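The inconsistency being fixed is easy to demonstrate. A Python sketch
of parse_exclude_pattern()'s contract (illustrative, not the C code):

```python
def parse_exclude_pattern(pattern):
    # A trailing '/' sets a "must be directory" flag; patternlen
    # excludes the slash, and the caller is expected to trim the
    # slash off the string itself.
    mustbedir = pattern.endswith("/")
    patternlen = len(pattern) - 1 if mustbedir else len(pattern)
    return pattern, patternlen, mustbedir

pattern, patternlen, mustbedir = parse_exclude_pattern("foo/")
assert mustbedir and patternlen == 3
# If the caller forgets to trim, the two views disagree:
assert pattern[:patternlen] == "foo"   # patternlen-aware code sees "foo"
assert pattern == "foo/"               # NUL-terminated code sees "foo/"
```

That disagreement is exactly the "foo/" vs "foo" split described above.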
[PATCH 3/4] dir.c: make match_{base,path}name respect {basename,path}len
When match_basename was split out of excluded_from_list in 593cb88
(exclude: split basename matching code into a separate function -
2012-10-15), it took basenamelen only as a hint. basename was required
to be NUL-terminated at the given length. This was fine until
match_basename had a new caller (from attr.c) and therefore was no
longer excluded_from_list's internal business.

Make match_basename stop relying on the NUL assumption above. Do the
same for match_pathname. From now on, only pattern is required to be
NUL-terminated at the specified patternlen for both
match_{base,path}name.

Signed-off-by: Nguyễn Thái Ngọc Duy pclo...@gmail.com
---
 dir.c | 13 -
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/dir.c b/dir.c
index 57394e4..30f5241 100644
--- a/dir.c
+++ b/dir.c
@@ -626,15 +626,18 @@ int match_basename(const char *basename, int basenamelen,
 		   int flags)
 {
 	if (prefix == patternlen) {
-		if (!strcmp_icase(pattern, basename))
+		if (patternlen == basenamelen &&
+		    !strncmp_icase(pattern, basename, basenamelen))
 			return 1;
 	} else if (flags & EXC_FLAG_ENDSWITH) {
 		if (patternlen - 1 <= basenamelen &&
-		    !strcmp_icase(pattern + 1,
-				  basename + basenamelen - patternlen + 1))
+		    !strncmp_icase(pattern + 1,
+				   basename + basenamelen - patternlen + 1,
+				   patternlen - 1))
 			return 1;
 	} else {
-		if (fnmatch_icase(pattern, basename, 0) == 0)
+		if (nwildmatch(pattern, basename, basenamelen,
+			       ignore_case ? WM_CASEFOLD : 0, NULL) == 0)
 			return 1;
 	}
 	return 0;
@@ -684,7 +687,7 @@ int match_pathname(const char *pathname, int pathlen,
 		namelen -= prefix;
 	}
 
-	return wildmatch(pattern, name,
+	return nwildmatch(pattern, name, namelen,
 			 WM_PATHNAME | (ignore_case ? WM_CASEFOLD : 0),
 			 NULL) == 0;
 }
--
1.8.2.82.gc24b958
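One subtlety in the strcmp-to-strncmp conversion above: a bounded
comparison alone is not enough, the lengths must also be equal. A small
Python sketch of the corrected exact-match branch (illustrative names,
not git's C code):

```python
def match_basename(pattern, basename, basenamelen):
    # Length-bounded exact match: comparing only the first basenamelen
    # bytes is not enough on its own -- the lengths must also agree,
    # or pattern "foo" would "match" the first three bytes of
    # "football".
    return (len(pattern) == basenamelen
            and pattern == basename[:basenamelen])

assert match_basename("foo", "foo", 3)
assert match_basename("foo", "foo/bar", 3)       # only the prefix is compared
assert not match_basename("foo", "football", 8)  # length check rejects this
```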
[PATCH 4/4] attr.c: fix matching subdir without the trailing slash
The story goes back to 94bc671 (Add directory pattern matching to
attributes - 2012-12-08). Before that commit, directories were passed
to path_matches without the trailing slash. This is fine for matching
pattern "subdir" with "foo/subdir". Patterns like "subdir/" (i.e.
match the _directory_ "subdir") won't work, though. So paths are now
passed to path_matches with the trailing slash (i.e. "subdir/"). The
trailing slash is used as the directory indicator (similar to dtype in
the exclude case). This makes pattern "subdir/" match directory
"subdir/". But pattern "subdir" no longer matches "subdir", which is
now "subdir/".

As the trailing slash in the pathname is the directory indicator, we
do not need to keep it in the pathname for matching. The trailing
slash should be turned into dtype DT_DIR and stripped out of the
pathname. This keeps the code pattern similar to the exclude case.

The same applies to the pattern "subdir/". The trailing slash is
converted to the flag EXC_FLAG_MUSTBEDIR and should not remain in the
pattern, as noted in parse_exclude_pattern(). prepare_attr_stack()
breaks this and keeps the trailing slash anyway.

To sum up, both patterns and pathnames should never have the trailing
slash when it comes to match_basename.

Reported-and-analyzed-by: Jeff King p...@peff.net
Signed-off-by: Nguyễn Thái Ngọc Duy pclo...@gmail.com
---
 attr.c                          | 11 +--
 t/t5002-archive-attr-pattern.sh |  6 ++
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/attr.c b/attr.c
index 1818ba5..a95c837 100644
--- a/attr.c
+++ b/attr.c
@@ -256,7 +256,7 @@ static struct match_attr *parse_attr_line(const char *line, const char *src,
 				      &res->u.pat.flags,
 				      &res->u.pat.nowildcardlen);
 		if (res->u.pat.flags & EXC_FLAG_MUSTBEDIR)
-			res->u.pat.patternlen++;
+			p[res->u.pat.patternlen] = '\0';
 		if (res->u.pat.flags & EXC_FLAG_NEGATIVE) {
 			warning(_("Negative patterns are ignored in git attributes\n"
 				  "Use '\\!' for literal leading exclamation."));
@@ -665,9 +665,16 @@ static int path_matches(const char *pathname, int pathlen,
 {
 	const char *pattern = pat->pattern;
 	int prefix = pat->nowildcardlen;
+	int dtype;
+
+	if (pathlen && pathname[pathlen-1] == '/') {
+		dtype = DT_DIR;
+		pathlen--;
+	} else
+		dtype = DT_REG;
 
 	if ((pat->flags & EXC_FLAG_MUSTBEDIR) &&
-	    ((!pathlen) || (pathname[pathlen-1] != '/')))
+	    dtype != DT_DIR)
 		return 0;
 
 	if (pat->flags & EXC_FLAG_NODIR) {
diff --git a/t/t5002-archive-attr-pattern.sh b/t/t5002-archive-attr-pattern.sh
index 0c847fb..98ccc3c 100755
--- a/t/t5002-archive-attr-pattern.sh
+++ b/t/t5002-archive-attr-pattern.sh
@@ -27,6 +27,10 @@ test_expect_success 'setup' '
 	echo ignored-only-if-dir/ export-ignore >>.git/info/attributes &&
 	git add ignored-only-if-dir &&
+	mkdir -p ignored-without-slash &&
+	echo "ignored without slash" >ignored-without-slash/foo &&
+	git add ignored-without-slash/foo &&
+	echo ignored-without-slash export-ignore >>.git/info/attributes &&
 	mkdir -p one-level-lower/two-levels-lower/ignored-only-if-dir &&
 	echo "ignored by ignored dir" >one-level-lower/two-levels-lower/ignored-only-if-dir/ignored-by-ignored-dir &&
@@ -49,6 +53,8 @@ test_expect_exists	archive/not-ignored-dir/ignored-only-if-dir
 test_expect_exists	archive/not-ignored-dir/
 test_expect_missing	archive/ignored-only-if-dir/
 test_expect_missing	archive/ignored-ony-if-dir/ignored-by-ignored-dir
+test_expect_missing	archive/ignored-without-slash/
+test_expect_missing	archive/ignored-without-slash/foo
 test_expect_exists	archive/one-level-lower/
 test_expect_missing	archive/one-level-lower/two-levels-lower/ignored-only-if-dir/
 test_expect_missing	archive/one-level-lower/two-levels-lower/ignored-ony-if-dir/ignored-by-ignored-dir
--
1.8.2.82.gc24b958
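The resulting matching rules can be summarized in a tiny Python model
(illustrative only; a simplified stand-in for path_matches, handling
just literal names, no wildcards or basename-vs-path distinction):

```python
DT_REG, DT_DIR = 0, 1

def path_matches(pathname, pattern):
    # The trailing slash on the path becomes dtype and is stripped;
    # the trailing slash on the pattern becomes MUSTBEDIR and is also
    # stripped, so neither side carries a slash into the comparison.
    dtype = DT_DIR if pathname.endswith("/") else DT_REG
    path = pathname[:-1] if dtype == DT_DIR else pathname
    mustbedir = pattern.endswith("/")
    pat = pattern[:-1] if mustbedir else pattern
    if mustbedir and dtype != DT_DIR:
        return False
    return path == pat

assert path_matches("subdir/", "subdir/")     # "subdir/" matches the directory
assert path_matches("subdir/", "subdir")      # plain "subdir" matches it again
assert not path_matches("subdir", "subdir/")  # a file never matches "subdir/"
```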
Re: [PATCH v2 3/3] t7800: run --dir-diff tests with and without symlinks
The series looks good, but I can't test it because it does not apply
anywhere here.

On 3/23/2013 14:31, John Keeping wrote:
> Currently the difftool --dir-diff tests may or may not use symlinks
> depending on the operating system on which they are run. In one case
> this has caused a test failure to be noticed only on Windows, when
> the test also fails on Linux when difftool is invoked with
> --no-symlinks.
>
> Rewrite these tests so that they do not depend on the environment but
> run explicitly with both --symlinks and --no-symlinks, protecting the
> --symlinks version with a SYMLINKS prerequisite.

At first, I wondered what the point of having --symlinks and
--no-symlinks was when there is no discernible difference. But
1f229345 (difftool: Use symlinks when diffing against the worktree)
makes it pretty clear: It's an optimization, and --no-symlinks is only
intended as an escape hatch.

-- Hannes
Re: [PATCH v2 2/3] t7800: fix tests when difftool uses --no-symlinks
On 3/24/2013 16:15, John Keeping wrote:

Subject: [PATCH] difftool: don't overwrite modified files

After running the user's diff tool, git-difftool will copy any files
that differ between the working tree and the temporary tree. This is
useful when the user edits the file in their diff tool, but is wrong
if they edit the working tree file while examining the diff.

Instead of copying unconditionally when the files differ, store the
initial hash of the working tree file and only copy the temporary file
back if it was modified and the working tree file was not. If both
files have been modified, print a warning and exit with an error.

Signed-off-by: John Keeping j...@keeping.me.uk
---
 git-difftool.perl   | 35 +--
 t/t7800-difftool.sh | 26 ++
 2 files changed, 47 insertions(+), 14 deletions(-)

diff --git a/git-difftool.perl b/git-difftool.perl
index c433e86..be82b5a 100755
--- a/git-difftool.perl
+++ b/git-difftool.perl
@@ -15,7 +15,6 @@ use strict;
 use warnings;
 use File::Basename qw(dirname);
 use File::Copy;
-use File::Compare;
 use File::Find;
 use File::stat;
 use File::Path qw(mkpath rmtree);
@@ -123,7 +122,7 @@ sub setup_dir_diff
 	my $rindex = '';
 	my %submodule;
 	my %symlink;
-	my @working_tree = ();
+	my %working_tree;
 	my @rawdiff = split('\0', $diffrtn);
 
 	my $i = 0;
@@ -175,7 +174,9 @@ EOF
 		if ($rmode ne $null_mode) {
 			if (use_wt_file($repo, $workdir, $dst_path,
 					$rsha1, $symlinks)) {
-				push(@working_tree, $dst_path);
+				$working_tree{$dst_path} =
+					$repo->command_oneline('hash-object',
+							       "$workdir/$dst_path");
 			} else {
 				$rindex .= "$rmode $rsha1\t$dst_path\0";
 			}
@@ -227,7 +228,7 @@ EOF
 	# not part of the index. Remove any trailing slash from $workdir
 	# before starting to avoid double slashes in symlink targets.
 	$workdir =~ s|/$||;
-	for my $file (@working_tree) {
+	for my $file (keys %working_tree) {
 		my $dir = dirname($file);
 		unless (-d "$rdir/$dir") {
 			mkpath("$rdir/$dir") or
@@ -278,7 +279,7 @@ EOF
 		exit_cleanup($tmpdir, 1) if not $ok;
 	}
 
-	return ($ldir, $rdir, $tmpdir, @working_tree);
+	return ($ldir, $rdir, $tmpdir, %working_tree);
 }
 
 sub write_to_file
@@ -376,7 +377,7 @@ sub dir_diff
 	my $error = 0;
 	my $repo = Git->repository();
 	my $workdir = find_worktree($repo);
-	my ($a, $b, $tmpdir, @worktree) =
+	my ($a, $b, $tmpdir, %worktree) =
 		setup_dir_diff($repo, $workdir, $symlinks);
 
 	if (defined($extcmd)) {
@@ -390,19 +391,25 @@ sub dir_diff
 	# should be copied back to the working tree.
 	# Do not copy back files when symlinks are used and the
 	# external tool did not replace the original link with a file.
-	for my $file (@worktree) {
+	for my $file (keys %worktree) {
 		next if $symlinks && -l "$b/$file";
 		next if ! -f "$b/$file";
 
-		my $diff = compare("$b/$file", "$workdir/$file");
-		if ($diff == 0) {
-			next;
-		} elsif ($diff == -1) {
-			my $errmsg = "warning: Could not compare ";
-			$errmsg += "'$b/$file' with '$workdir/$file'\n";
+		my $wt_hash = $repo->command_oneline('hash-object',
+						     "$workdir/$file");
+		my $tmp_hash = $repo->command_oneline('hash-object',
+						      "$b/$file");

This is gross. Can't we do much better here? Difftool already keeps a
GIT_INDEX of the files in the temporary tree ($tmpdir/rindex). Running
git-diff-files should be sufficient to tell which ones were edited via
the user's diff tool. Then you can restrict calling hash-object to
only those worktree files where an edit collision needs to be checked
for.

You could also keep a parallel index that keeps the state of the same
set of files in the worktree. Then another git-diff-files call could
replace the other half of the hash-object calls.

+		my $wt_modified = $wt_hash ne $worktree{$file};
+		my $tmp_modified = $tmp_hash ne $worktree{$file};
+
+		if ($wt_modified and $tmp_modified) {
+			my $errmsg = "warning: Both files modified: ";
+			$errmsg .= "'$workdir/$file' and '$b/$file'.\n";
+			$errmsg .= "warning: Working tree file has been left.\n";
+			$errmsg .= "warning:\n";
 			warn $errmsg;
 			$error = 1;
-		} elsif ($diff == 1) {
+		} elsif ($tmp_modified) {
			my $mode =
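The copy-back decision the patch implements is a three-way comparison
against the hash recorded before the diff tool ran. A Python sketch of
that logic (illustrative; the real code shells out to git hash-object):

```python
import hashlib

def copy_back_decision(base, worktree, tmp):
    # Compare the current worktree and temporary files against the
    # hash taken before the diff tool ran, as in the patch above.
    sha = lambda data: hashlib.sha1(data).hexdigest()
    wt_modified = sha(worktree) != sha(base)
    tmp_modified = sha(tmp) != sha(base)
    if wt_modified and tmp_modified:
        return "conflict"    # both edited: warn and keep the worktree file
    if tmp_modified:
        return "copy-back"   # edited only in the diff tool
    return "keep"            # worktree-only edit, or no edit at all

assert copy_back_decision(b"v1", b"v1", b"v2") == "copy-back"
assert copy_back_decision(b"v1", b"v2", b"v3") == "conflict"
assert copy_back_decision(b"v1", b"v2", b"v1") == "keep"
```

The old compare()-based code could only say "differs or not", which is
why it clobbered worktree edits.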
Re: Bug: `gitsubmodule` does not list modules with unicode characters
On 23.03.2013 17:28, Ilya Kulakov wrote:
> The `git submodule` commands seem to ignore modules whose paths
> contain unicode characters. Consider the following steps to
> reproduce the problem:
>
> 1. Create a directory with a name that contains at least one unicode
>    character (e.g. ûñïçödé-rèpø)
> 2. Initialize a git repository within this directory
> 3. Add this repository as a submodule to another repository so that
>    unicode characters appear in the path to the module
>    (e.g. ../ûñïçödé-rèpø)
> 4. Check that the .gitmodules file is updated and contains a record
>    about the just-added module
> 5. List submodules using `git submodule` and find out that the
>    just-added module is not listed

Thanks for your report. It is known that git submodule does not behave
very well when path names contain special characters. I'll look into
that when I find some time, to see if we can easily fix your problem.
Re: Why does 'submodule add' stage the relevant portions?
On 24.03.2013 18:38, Ramkumar Ramachandra wrote:
> I find this behavior very inconsistent and annoying. Why would I want
> to commit the submodule change immediately? Maybe I want to batch it
> up with other changes and stage it at a later time. Why should I have
> to unstage them manually now? I get the other side of the argument:
> what if the user commits the .gitmodules change at a different time
> from the file change? In other words, the user should have a way of
> saying 'submodule stage' and 'submodule unstage'.

Hmm, AFAIK you are the first to bring up such a feature, as in most
use cases doing a "git (submodule) add <path>" is expected to stage
what you added. Maybe you could teach the stage/unstage code to also
stage/unstage the corresponding part of the .gitmodules file, but I'm
not sure it is worth the hassle.

> Now, for the implementation. Do we have existing infrastructure to
> stage a hunk non-interactively? (The ability to select a hunk to
> stage/unstage programmatically.) If not, it might be quite a
> non-trivial thing to write.

Have fun when adding two submodules and unstaging only one of them
later. I think this feature will not work unless you analyze
.gitmodules and stage/unstage section-wise.
Re: Why does 'submodule add' stage the relevant portions?
Jens Lehmann wrote:
> On 24.03.2013 18:38, Ramkumar Ramachandra wrote:
>> I find this behavior very inconsistent and annoying. Why would I
>> want to commit the submodule change immediately? Maybe I want to
>> batch it up with other changes and stage it at a later time. Why
>> should I have to unstage them manually now? I get the other side of
>> the argument: what if the user commits the .gitmodules change at a
>> different time from the file change? In other words, the user should
>> have a way of saying 'submodule stage' and 'submodule unstage'.
>
> Hmm, AFAIK you are the first to bring up such a feature, as in most
> use cases doing a "git (submodule) add <path>" is expected to stage
> what you added.

In my opinion, the 'git submodule add <path>' form is broken, because
it doesn't relocate the object store of the submodule repository (a
bug that we need to fix?). I always use the 'git submodule add
<repository> <path>' form, which puts the object store of the
submodule repository in .git/modules of the parent repository. This
form is nothing like 'git add', but more like a 'git clone':
arguably, 'submodule clone' is a better name for it.

> Maybe you could teach the stage/unstage code to also stage/unstage
> the corresponding part of the .gitmodules file, but I'm not sure it
> is worth the hassle.

Maybe not. I'm still not a heavy user of submodules; I notice many
breakages and inconsistencies, but I'm not sure what to fix first, and
how to go about fixing it. I'm yet to look at git-subtree and draw
inspiration from its design. Maybe the "You need to run this command
from the toplevel of the working tree" WTF needs to be fixed first? I
think it's a matter of a simple pushd, popd before the operation and
building the correct relative path. I'm not sure how it'll work with
nested submodules, though. Then again, I think nested submodules are
Wrong; submodules are probably not the right tool for the job.

>> Now, for the implementation. Do we have existing infrastructure to
>> stage a hunk non-interactively? (The ability to select a hunk to
>> stage/unstage programmatically.) If not, it might be quite a
>> non-trivial thing to write.
>
> Have fun when adding two submodules and unstaging only one of them
> later. I think this feature will not work unless you analyze
> .gitmodules and stage/unstage section-wise.

Yes, which is why I asked if we have existing infrastructure to make
this possible.
Re: [PATCH] Run git status in the background.
Fredrik Gustafsson wrote:
> If core.preload is set to a non-zero value, every time a git command
> is executed, "git status" will be run in the background if the value
> of core.preload is lower than the number of seconds since the last
> run.

Counting the number of seconds since the last run is gross. This kind
of setting makes sense in the credential helper, where you specify how
long to cache your password. I don't think you should use a time-based
trigger for this feature.

> Please see this thread:
> http://article.gmane.org/gmane.comp.version-control.git/218587
>
> This solution solves many of the problems discussed there, but
> introduces new ones. For example, it does have a bigger impact. With
> this solution being functional but a bit gross, it's not certain
> that it should be placed here at all. However, it's a good place for
> all git tools to be able to use it without knowing about it. (It
> would speed up all git wrappers and not just the bash prompt, unlike
> the previous solution.)

Yes, but you're proposing including a very gross feature in core git.
I'm sorry, but this is a non-starter.

> There's a few more things to address before shipping this, if this is
> considered a good approach. Such as:
> * Don't run if a git status-like git command has been run, or if a
>   non-repo git command (like "git status" or "git help") is run.
> * Better names for settings and files.
> * Better(?) invocation of git status (a forked internal call instead
>   of a system call?).
>
> Signed-off-by: Fredrik Gustafsson iv...@iveqy.com
> ---
>  git.c | 30 ++
>  1 file changed, 30 insertions(+)

I would argue that git.c is the wrong place to implement this feature.
You're essentially doing fopen(), fclose(), stat(), and system():
shouldn't this be a shell script? I earlier suggested making it
something we can hook into chpwd() in zsh, and I think this is the
most sane suggestion. This is what z() [1] uses, and I would argue
that your feature shares many similarities with it.

[1]: https://github.com/rupa/z
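For reference, the time-based trigger being criticized amounts to a
stamp-file check. A Python sketch (illustrative only; the proposed
patch implements this in C inside git.c, and the names here are made
up):

```python
import os
import time

def should_run_status(stamp_file, interval_seconds):
    # Rerun "git status" in the background only if more than
    # interval_seconds have passed since the stamp file was last
    # touched; a missing stamp means we have never run.
    try:
        last_run = os.stat(stamp_file).st_mtime
    except FileNotFoundError:
        return True
    return time.time() - last_run >= interval_seconds
```

The objection above is to the trigger itself, not its implementation:
an event-based hook (e.g. zsh's chpwd()) avoids having to pick an
interval at all.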
Re: [PATCH 4/4] attr.c: fix matching subdir without the trailing slash
On Mon, Mar 25, 2013 at 02:20:31PM +0700, Duy Nguyen wrote:
> On second thought, maybe we should not pass path "subdir/" at all.
> Instead we create a fake dtype based on the trailing slash and pass
> it down to attr.c:fill() -> path_matches(), just like how
> last_exclude_matching_from_list() is called.

I was hoping to make a small patch, but as it turns out,
collect_all_attrs() takes a const path that contains the trailing
slash, so we still need to ignore it in match_{base,path}name and the
whole series is still required. The only difference is in the final
patch, which is a bit longer:

-- 8< --
diff --git a/attr.c b/attr.c
index 1818ba5..0b2b716 100644
--- a/attr.c
+++ b/attr.c
@@ -256,7 +256,7 @@ static struct match_attr *parse_attr_line(const char *line, const char *src,
 				      &res->u.pat.flags,
 				      &res->u.pat.nowildcardlen);
 		if (res->u.pat.flags & EXC_FLAG_MUSTBEDIR)
-			res->u.pat.patternlen++;
+			p[res->u.pat.patternlen] = '\0';
 		if (res->u.pat.flags & EXC_FLAG_NEGATIVE) {
 			warning(_("Negative patterns are ignored in git attributes\n"
 				  "Use '\\!' for literal leading exclamation."));
@@ -659,15 +659,14 @@ static void prepare_attr_stack(const char *path, int dirlen)
 }
 
 static int path_matches(const char *pathname, int pathlen,
-			const char *basename,
+			const char *basename, int dtype,
 			const struct pattern *pat,
 			const char *base, int baselen)
 {
 	const char *pattern = pat->pattern;
 	int prefix = pat->nowildcardlen;
 
-	if ((pat->flags & EXC_FLAG_MUSTBEDIR) &&
-	    ((!pathlen) || (pathname[pathlen-1] != '/')))
+	if ((pat->flags & EXC_FLAG_MUSTBEDIR) && dtype != DT_DIR)
 		return 0;
 
 	if (pat->flags & EXC_FLAG_NODIR) {
@@ -706,7 +705,7 @@ static int fill_one(const char *what, struct match_attr *a, int rem)
 }
 
 static int fill(const char *path, int pathlen, const char *basename,
-		struct attr_stack *stk, int rem)
+		int dtype, struct attr_stack *stk, int rem)
 {
 	int i;
 	const char *base = stk->origin ? stk->origin : "";
@@ -715,7 +714,7 @@ static int fill(const char *path, int pathlen, const char *basename,
 		struct match_attr *a = stk->attrs[i];
 		if (a->is_macro)
 			continue;
-		if (path_matches(path, pathlen, basename,
+		if (path_matches(path, pathlen, basename, dtype,
 				 a->u.pat, base, stk->originlen))
 			rem = fill_one("fill", a, rem);
 	}
@@ -755,11 +754,17 @@ static void collect_all_attrs(const char *path)
 	struct attr_stack *stk;
 	int i, pathlen, rem, dirlen;
 	const char *basename, *cp, *last_slash = NULL;
+	int dtype;
 
 	for (cp = path; *cp; cp++) {
 		if (*cp == '/' && cp[1])
 			last_slash = cp;
 	}
+	if (cp > path && cp[-1] == '/') {
+		dtype = DT_DIR;
+		cp--;
+	} else
+		dtype = DT_REG;
 	pathlen = cp - path;
 	if (last_slash) {
 		basename = last_slash + 1;
@@ -775,7 +780,7 @@ static void collect_all_attrs(const char *path)
 	rem = attr_nr;
 	for (stk = attr_stack; 0 < rem && stk; stk = stk->prev)
-		rem = fill(path, pathlen, basename, stk, rem);
+		rem = fill(path, pathlen, basename, dtype, stk, rem);
 }
 
 int git_check_attr(const char *path, int num, struct git_attr_check *check)
diff --git a/t/t5002-archive-attr-pattern.sh b/t/t5002-archive-attr-pattern.sh
index 0c847fb..98ccc3c 100755
--- a/t/t5002-archive-attr-pattern.sh
+++ b/t/t5002-archive-attr-pattern.sh
@@ -27,6 +27,10 @@ test_expect_success 'setup' '
 	echo ignored-only-if-dir/ export-ignore >>.git/info/attributes &&
 	git add ignored-only-if-dir &&
+	mkdir -p ignored-without-slash &&
+	echo "ignored without slash" >ignored-without-slash/foo &&
+	git add ignored-without-slash/foo &&
+	echo ignored-without-slash export-ignore >>.git/info/attributes &&
 	mkdir -p one-level-lower/two-levels-lower/ignored-only-if-dir &&
 	echo "ignored by ignored dir" >one-level-lower/two-levels-lower/ignored-only-if-dir/ignored-by-ignored-dir &&
@@ -49,6 +53,8 @@ test_expect_exists	archive/not-ignored-dir/ignored-only-if-dir
 test_expect_exists	archive/not-ignored-dir/
 test_expect_missing	archive/ignored-only-if-dir/
 test_expect_missing	archive/ignored-ony-if-dir/ignored-by-ignored-dir
+test_expect_missing	archive/ignored-without-slash/
+test_expect_missing	archive/ignored-without-slash/foo
 test_expect_exists	archive/one-level-lower/
 test_expect_missing	archive/one-level-lower/two-levels-lower/ignored-only-if-dir/
 test_expect_missing
Re: [PATCH] sha1_file: remove recursion in packed_object_info
Junio C Hamano gits...@pobox.com writes:

> Thomas Rast tr...@student.ethz.ch writes:
>
>> So here's a nonrecursive version.  Dijkstra is probably turning over
>> in his grave as we speak.  I *think* I actually got it right.
>
> You seem to have lost the "if we cannot get delta base, this object is
> BAD" check where you measure the size of a deltified object, which
> would correspond to this check:
>
> -static int packed_delta_info(struct packed_git *p,
> -			     struct pack_window **w_curs,
> -			     off_t curpos,
> -			     enum object_type type,
> -			     off_t obj_offset,
> -			     unsigned long *sizep)
> -{
> -	off_t base_offset;
> -
> -	base_offset = get_delta_base(p, w_curs, curpos, type, obj_offset);
> -	if (!base_offset)
> -		return OBJ_BAD;

True, I'll think about this.

> The following comment is also lost but...
>
> -	/* We choose to only get the type of the base object and
> -	 * ignore potentially corrupt pack file that expects the delta
> -	 * based on a base with a wrong size.  This saves tons of
> -	 * inflate() calls.
> -	 */
> -	if (sizep) {
> -		*sizep = get_size_from_delta(p, w_curs, curpos);
> -		if (*sizep == 0)
> -			type = OBJ_BAD;
>
> ... is this check correct?

There is an equivalent check at the beginning of the new
packed_object_info() to error out on a deltified result.

> Why is an object whose size is 0 bad?

Cc'ing Nicolas, but I think there are several reasons:

If it's a delta, then according to the docs [1] it starts with the SHA1
of the base object, plus the deflated data.  So it is at least 20 bytes.

If it's not a delta, then it must start with 'type size\0', which even
after compression cannot possibly be 0 bytes.

Either way, get_size_from_delta() also uses 0 as the error return.

> The stack/recursion is used _only_ for error recovery, no?  If we do
> not care about retrying with a different copy of an object we find in
> the delta chain, we can just update obj_offset with base_offset and
> keep digging.
> It almost makes me wonder if a logical follow-up to this patch may be
> to do so, and rewrite the error recovery codepath to just mark the bad
> copy and jump back to the very top, retrying everything from scratch.

I totally agree.  I'll try this again -- my last attempt just didn't
work out...

[1] Documentation/technical/pack-format.txt

-- 
Thomas Rast
trast@{inf,student}.ethz.ch
[PATCH] git-web--browse: recognize iTerm as a GUI terminal on OS X
It turns out that the presence of SECURITYSESSIONID is not sufficient
for detecting the presence of a GUI under Mac OS X.  SECURITYSESSIONID
appears to be set only when the user has Screen Sharing enabled.
Disabling Screen Sharing and relaunching the shell showed that the
variable was missing, at least under Mac OS X 10.6.8.  As a result,
let's check for iTerm directly via TERM_PROGRAM.

Signed-off-by: John Szakmeister j...@szakmeister.net
---
On Sun, Mar 24, 2013 at 10:05:53PM +0100, Christian Couder wrote:
> [snip]
> Your patch looks good to me, and I cannot really test it as I don't
> have a Mac.  Could you just add some of the explanations you gave
> above to the commit message?

Here's an updated patch.  I also noticed that git-bisect.sh is also
trying to determine if a GUI is present by looking for
SECURITYSESSIONID as well.  I wonder if it would be better to create a
shell function in git-sh-setup.sh that the two scripts could use?

-John

 git-web--browse.sh | 1 +
 1 file changed, 1 insertion(+)

diff --git a/git-web--browse.sh b/git-web--browse.sh
index 1e82726..1ff5379 100755
--- a/git-web--browse.sh
+++ b/git-web--browse.sh
@@ -120,6 +120,7 @@ if test -z "$browser" ; then
 fi
 # SECURITYSESSIONID indicates an OS X GUI login session
 if test -n "$SECURITYSESSIONID" \
+		-o "$TERM_PROGRAM" = "iTerm.app" \
 		-o "$TERM_PROGRAM" = "Apple_Terminal" ; then
 	browser_candidates="open $browser_candidates"
 fi
--
1.8.2
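The shared helper John floats for git-sh-setup.sh could look something like the sketch below. The function name is made up here (no such helper exists in the patch); only the three checks themselves come from the patch and the existing SECURITYSESSIONID test.

```shell
# Hypothetical git-sh-setup.sh helper; the name is an assumption,
# the three tests mirror the patch above.
is_gui_session () {
	test -n "$SECURITYSESSIONID" ||
	test "$TERM_PROGRAM" = "iTerm.app" ||
	test "$TERM_PROGRAM" = "Apple_Terminal"
}

# Quick check with a simulated iTerm environment:
SECURITYSESSIONID=
TERM_PROGRAM=iTerm.app
if is_gui_session
then
	echo "GUI login session"
fi
```

Both git-web--browse.sh and git-bisect.sh could then call the one function instead of duplicating the environment checks.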
Re: [PATCH v2 3/3] t7800: run --dir-diff tests with and without symlinks
On Mon, Mar 25, 2013 at 08:26:52AM +0100, Johannes Sixt wrote:
> The series looks good, but I can't test it because it does not apply
> anywhere here.

It's built on top of da/difftool-fixes; is there some problem that
stops it applying cleanly on top of that?

> Am 3/23/2013 14:31, schrieb John Keeping:
>> Currently the difftool --dir-diff tests may or may not use symlinks
>> depending on the operating system on which they are run.  In one case
>> this has caused a test failure to be noticed only on Windows when the
>> test also fails on Linux when difftool is invoked with --no-symlinks.
>>
>> Rewrite these tests so that they do not depend on the environment but
>> run explicitly with both --symlinks and --no-symlinks, protecting the
>> --symlinks version with a SYMLINKS prerequisite.

At first, I wondered what the point of having --symlinks and
--no-symlinks was when there is no discernible difference.  But
1f229345 (difftool: Use symlinks when diffing against the worktree)
makes it pretty clear: it's an optimization, and --no-symlinks is only
intended as an escape hatch.
Re: [PATCH v2 2/3] t7800: fix tests when difftool uses --no-symlinks
On Mon, Mar 25, 2013 at 08:41:59AM +0100, Johannes Sixt wrote:
> Am 3/24/2013 16:15, schrieb John Keeping:
>> Subject: [PATCH] difftool: don't overwrite modified files
>>
>> After running the user's diff tool, git-difftool will copy any files
>> that differ between the working tree and the temporary tree.  This is
>> useful when the user edits the file in their diff tool, but is wrong
>> if they edit the working tree file while examining the diff.
>>
>> Instead of copying unconditionally when the files differ, store the
>> initial hash of the working tree file and only copy the temporary
>> file back if it was modified and the working tree file was not.  If
>> both files have been modified, print a warning and exit with an
>> error.
>>
>> Signed-off-by: John Keeping j...@keeping.me.uk
>> ---
>>  git-difftool.perl   | 35 +++++++++++++++++++++--------------
>>  t/t7800-difftool.sh | 26 ++++++++++++++++++++++++++
>>  2 files changed, 47 insertions(+), 14 deletions(-)
>>
>> diff --git a/git-difftool.perl b/git-difftool.perl
>> index c433e86..be82b5a 100755
>> --- a/git-difftool.perl
>> +++ b/git-difftool.perl
>> @@ -15,7 +15,6 @@ use strict;
>>  use warnings;
>>  use File::Basename qw(dirname);
>>  use File::Copy;
>> -use File::Compare;
>>  use File::Find;
>>  use File::stat;
>>  use File::Path qw(mkpath rmtree);
>> @@ -123,7 +122,7 @@ sub setup_dir_diff
>>  	my $rindex = '';
>>  	my %submodule;
>>  	my %symlink;
>> -	my @working_tree = ();
>> +	my %working_tree;
>>  	my @rawdiff = split('\0', $diffrtn);
>>
>>  	my $i = 0;
>> @@ -175,7 +174,9 @@ EOF
>>  		if ($rmode ne $null_mode) {
>>  			if (use_wt_file($repo, $workdir, $dst_path,
>>  					$rsha1, $symlinks)) {
>> -				push(@working_tree, $dst_path);
>> +				$working_tree{$dst_path} =
>> +					$repo->command_oneline('hash-object',
>> +						"$workdir/$dst_path");
>>  			} else {
>>  				$rindex .= "$rmode $rsha1\t$dst_path\0";
>>  			}
>> @@ -227,7 +228,7 @@ EOF
>>  	# not part of the index.  Remove any trailing slash from $workdir
>>  	# before starting to avoid double slashes in symlink targets.
>>  	$workdir =~ s|/$||;
>> -	for my $file (@working_tree) {
>> +	for my $file (keys %working_tree) {
>>  		my $dir = dirname($file);
>>  		unless (-d "$rdir/$dir") {
>>  			mkpath("$rdir/$dir") or
>> @@ -278,7 +279,7 @@ EOF
>>  		exit_cleanup($tmpdir, 1) if not $ok;
>>  	}
>>
>> -	return ($ldir, $rdir, $tmpdir, @working_tree);
>> +	return ($ldir, $rdir, $tmpdir, %working_tree);
>>  }
>>
>>  sub write_to_file
>> @@ -376,7 +377,7 @@ sub dir_diff
>>  	my $error = 0;
>>  	my $repo = Git->repository();
>>  	my $workdir = find_worktree($repo);
>> -	my ($a, $b, $tmpdir, @worktree) =
>> +	my ($a, $b, $tmpdir, %worktree) =
>>  		setup_dir_diff($repo, $workdir, $symlinks);
>>
>>  	if (defined($extcmd)) {
>> @@ -390,19 +391,25 @@ sub dir_diff
>>  	# should be copied back to the working tree.
>>  	# Do not copy back files when symlinks are used and the
>>  	# external tool did not replace the original link with a file.
>> -	for my $file (@worktree) {
>> +	for my $file (keys %worktree) {
>>  		next if $symlinks && -l "$b/$file";
>>  		next if ! -f "$b/$file";
>>
>> -		my $diff = compare("$b/$file", "$workdir/$file");
>> -		if ($diff == 0) {
>> -			next;
>> -		} elsif ($diff == -1) {
>> -			my $errmsg = "warning: Could not compare ";
>> -			$errmsg .= "'$b/$file' with '$workdir/$file'\n";
>> +		my $wt_hash = $repo->command_oneline('hash-object',
>> +						     "$workdir/$file");
>> +		my $tmp_hash = $repo->command_oneline('hash-object',
>> +						      "$b/$file");
>
> This is gross.  Can't we do much better here?
>
> Difftool already keeps a GIT_INDEX of the files in the temporary tree
> ($tmpdir/rindex).  Running git-diff-files should be sufficient to tell
> which ones were edited via the user's diff tool.  Then you can
> restrict calling hash-object to only those worktree files where an
> edit collision needs to be checked for.

That's only the case for files that are not copied from the working
tree, so the temporary index doesn't contain the files that are of
interest here.

> You could also keep a parallel index that keeps the state of the same
> set of files in the worktree.  Then another git-diff-files call could
> replace the other half of hash-object calls.
I like the idea of creating an index from the working tree files and
using it here.  If we create a "starting state" index for these files,
we should be able to run git-diff-files against both the working tree
and the temporary tree at this point and compare the output.  I'll try
this approach this evening.

>> +		my $wt_modified = $wt_hash ne $worktree{$file};
>> +		my $tmp_modified = $tmp_hash ne
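The parallel-index idea under discussion can be sketched outside difftool with plain git plumbing: snapshot the files of interest into a private index once, then a single `git diff-files` reports which of them changed, replacing a hash-object call per file. A throwaway-repo sketch, assuming nothing about difftool's actual code; file names are illustrative:

```shell
# Minimal sketch of the "starting state" index approach, not difftool itself.
set -e
cd "$(mktemp -d)"
git init -q .
echo one >file-a
echo two >file-b

# Snapshot the pre-edit state of the interesting files into a private index.
export GIT_INDEX_FILE="$PWD/.git/difftool-index"
git update-index --add -- file-a file-b

# Simulate the user's diff tool editing one of the files.
echo edited >file-a

# One diff-files call now reports every file whose content changed.
git diff-files --name-only
```

Only the files listed by the final command would need any further collision checking.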
Re: [RFC/PATCH] Documentation/technical/api-fswatch.txt: start with outline
Just a small heads-up for people using Emacs.  24.4 has inotify support,
and magit-inotify.el [1] has already started using it.  From initial
impressions, I'm quite impressed with it.

[1]: https://github.com/magit/magit/blob/master/contrib/magit-inotify.el
Re: [PATCH v2 2/3] t7800: fix tests when difftool uses --no-symlinks
On Sun, Mar 24, 2013 at 02:29:40PM -0700, David Aguilar wrote:
> This makes me wonder whether the modifiable mode should be made more
> explicit, either in the documentation or via a flag.
>
> Imagine if --dir-diff also honored --edit and --no-edit flags.  Right
> now --edit is the default.  If we had foreseen these various edge
> cases and unintended copy-backs then we may have initially chosen
> --no-edit as the default, but that's not really my point.

I view --symlinks as the default, which avoids most of this pain ;-)

I guess we're talking about three different working-tree file modes
here: symlink, copy-copyback and copy-readonly.  I wonder if anyone
uses --no-symlinks when they are not forced to by their operating
system?  What is the use case if they do?

> What I'm thinking is that it might be good for the tool to learn
> --edit/--no-edit so that the symlink/copy-back heuristic can be
> documented alongside that option.  Users can then know what to expect
> when using this mode.  --no-edit would also be faster since it can
> avoid all these extra steps.
>
> It could also learn difftool.dirDiffEditable to control the default,
> which would eliminate the pain in needing to supply the flag on every
> invocation.
>
> What do you think about officially supporting a read-only mode?

How would that interoperate with symlink mode?  Should --no-edit imply
--no-symlinks, or does the --[no-]edit option only have an effect if
--no-symlinks is in effect?

I don't think this is the first time this idea has been suggested, so
that's some indicator that it's a good idea.  I'm not sure about
--edit/--no-edit for this though.  The behaviour isn't really similar
to the way that option works with git-commit, git-merge, etc.  I don't
have a better suggestion at the moment though.

John
Re: [PATCH v2 3/3] t7800: run --dir-diff tests with and without symlinks
Am 3/25/2013 11:35, schrieb John Keeping:
> On Mon, Mar 25, 2013 at 08:26:52AM +0100, Johannes Sixt wrote:
>> The series looks good, but I can't test it because it does not apply
>> anywhere here.
>
> It's built on top of da/difftool-fixes, is there some problem that
> stops it applying cleanly on top of that?

Thanks.  I had only tried trees that were contaminated by
jk/difftool-dir-diff-edit-fix, which is in conflict with
da/difftool-fixes.  t7800 passes on Windows with these patches.

-- Hannes
Re: [RFC/PATCH] Documentation/technical/api-fswatch.txt: start with outline
On Mon, Mar 25, 2013 at 5:44 PM, Ramkumar Ramachandra
artag...@gmail.com wrote:
> Just a small heads-up for people using Emacs.  24.4 has inotify
> support, and magit-inotify.el [1] has already started using it.  From
> initial impressions, I'm quite impressed with it.

Have you tried it?  From a quick look, it seems to watch all
directories.  I wonder how it performs on webkit (at least 5k dirs).

> [1]: https://github.com/magit/magit/blob/master/contrib/magit-inotify.el
-- 
Duy
Re: [PATCH v2 2/3] t7800: fix tests when difftool uses --no-symlinks
David Aguilar dav...@gmail.com writes:

> This makes me wonder whether the modifiable mode should be made more
> explicit, either in the documentation or via a flag.
>
> Imagine if --dir-diff also honored --edit and --no-edit flags.  Right
> now --edit is the default.  If we had foreseen these various edge
> cases and unintended copy-backs then we may have initially chosen
> --no-edit as the default, but that's not really my point.
>
> What I'm thinking is that it might be good for the tool to learn
> --edit/--no-edit so that the symlink/copy-back heuristic can be
> documented alongside that option.  Users can then know what to expect
> when using this mode.  --no-edit would also be faster since it can
> avoid all these extra steps.
>
> It could also learn difftool.dirDiffEditable to control the default,
> which would eliminate the pain in needing to supply the flag on every
> invocation.
>
> What do you think about officially supporting a read-only mode?

Yeah, actually that was what I've been assuming the default (not
suggesting to change the default behaviour here).
Re: propagating repo corruption across clone
On Mon, Mar 25, 2013 at 09:43:23AM -0400, Jeff Mitchell wrote:
>> But I haven't seen exactly what the corruption is, nor exactly what
>> commands they used to clone.  I've invited the blog author to give
>> more details in this thread.
>
> The syncing was performed via a clone with git clone --mirror (and a
> git:// URL) and updates with git remote update.

OK.  That should be resilient to corruption, then [1].

> So I should mention that my experiments after the fact were using
> local paths, but with --no-hardlinks.

Yeah, we will do a direct copy in that case, and there is nothing to
prevent corruption propagating.

> If you're saying that the transport is where corruption is supposed
> to be caught, then it's possible that we shouldn't see corruption
> propagate on an initial mirror clone across git://, and that
> something else was responsible for the trouble we saw with the
> repositories that got cloned after-the-fact.

Right.  Either it was something else, or there is a bug in git's
protections (but I haven't been able to reproduce anything likely).

> But then I'd argue that this is non-obvious.  In particular, when
> using --no-hardlinks, I wouldn't expect that behavior to be different
> with a straight path and with file://.

There are basically three levels of transport that can be used on a
local machine:

  1. Hard-linking (very fast, no redundancy).

  2. Byte-for-byte copy (medium speed, makes a separate copy of the
     data, but does not check the integrity of the original).

  3. Regular git transport, creating a pack (slowest, but should
     include redundancy checks).

Using --no-hardlinks turns off (1), but leaves (2) as an option.  I
think the documentation in git clone could use some improvement in
that area.

> Something else: apparently one of my statements prompted joeyh to
> think about potential issues with backing up live git repos
> (http://joeyh.name/blog/entry/difficulties_in_backing_up_live_git_repositories/).
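The three local-transport levels described above can be exercised side by side on a throwaway repository; the directory names here are illustrative:

```shell
# (1) hard-link clone, (2) unchecked byte copy, (3) real transport via file://.
set -e
cd "$(mktemp -d)"
git init -q src
git -C src -c user.name=A -c user.email=a@example.com \
	commit -q --allow-empty -m seed

git clone -q src hardlinked              # (1) hard-links the object files
git clone -q --no-hardlinks src copied   # (2) byte-for-byte copy, unchecked
git clone -q "file://$PWD/src" packed    # (3) runs the pack machinery
```

Only the third clone pushes the objects through the transport's redundancy checks, which is why it ends up with a packfile while the first two inherit the source's loose objects as-is.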
> Looking at that post made me realize that, when we were doing our
> initial thinking about the system three years ago, we made an
> assumption that, in fact, taking a .tar.gz of a repo as it's in the
> process of being written to or garbage collected or repacked could be
> problematic.  This isn't a totally baseless assumption, as I once had
> a git repository that I was in the process of updating when I had a
> sudden power outage that suffered corruption.  (It could totally have
> been the filesystem, of course, although it was a journaled file
> system.)

Yes, if you take a snapshot of a repository with rsync or tar, it may
be in an inconsistent state.  Using the git protocol should always be
robust, but if you want to do it with other tools, you need to follow
a particular order:

  1. copy the refs (refs/ and packed-refs) first

  2. copy everything else (including objects/)

That covers the case where somebody is pushing an update
simultaneously (you _may_ get extra objects in step 2 that they have
not yet referenced, but you will never end up with a case where you
are referencing objects that you did not yet transfer).

If it's possible that the repository might be repacked during your
transfer, I think the issue is a bit trickier, as there's a moment
where the new packfile is renamed into place, and then the old ones
are deleted.  Depending on the timing and how your readdir()
implementation behaves with respect to new and deleted entries, it
might be possible to miss both the new one appearing and the old ones
disappearing.  This is quite a tight race to catch, I suspect, but if
you were to rsync objects/pack twice in a row, that would be
sufficient.

For pruning, I think you could run into the opposite situation: you
grab the refs, somebody updates them with a history rewind (or branch
deletion), then somebody prunes and objects go away.  Again, the
timing on this race is quite tight and it's unlikely in practice.  I'm
not sure of a simple way to eliminate it completely.
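The refs-first ordering can be written down as a minimal snapshot script; `cp` stands in for rsync here, the repository is a throwaway, and the directory names are illustrative:

```shell
# Snapshot in the safe order: refs first, then everything else.
set -e
cd "$(mktemp -d)"
git init -q src
git -C src -c user.name=A -c user.email=a@example.com \
	commit -q --allow-empty -m seed
gitdir=src/.git

mkdir -p backup/refs
# 1. refs (and packed-refs, when present) first ...
if test -f "$gitdir/packed-refs"
then
	cp "$gitdir/packed-refs" backup/
fi
cp -R "$gitdir/refs/." backup/refs/
# 2. ... then everything else, objects/ included.
cp -R "$gitdir/." backup/

# The copy is a valid repository referencing only objects it holds.
git --git-dir=backup fsck
```

A concurrent push between steps 1 and 2 can only add objects the backup's refs do not point at, which fsck tolerates; the reverse order could leave refs pointing at objects that were never copied.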
Yet another option is to simply rsync the whole thing and then git fsck
the result.  If it's not 100% good, just re-run the rsync.  This is
simple and should be robust, but is more CPU intensive (you'll end up
re-checking all of the data on each update).

> So, we decided to use Git's built-in capabilities of consistency
> checking to our advantage (with, as it turns out, a flaw in our
> implementation).  But the question remains: are we wrong about
> thinking that rsyncing or tar.gz'ing live repositories in the middle
> of being pushed to/gc'd/repacked could result in a bogus backup?

No, I think you are right.  If you do the refs-then-objects ordering,
that saves you from most of it, but I do think there are still some
races that exist during repacking or pruning.

-Peff

[1] I mentioned that clone-over-git:// is resilient to corruption.  I
    think that is true for bit corruption, but my tests did show that
    we are not as careful about checking graph connectivity during
    clone as we are with fetch.  The circumstances in which that
Re: propagating repo corruption across clone
On Mon, Mar 25, 2013 at 9:56 PM, Jeff King p...@peff.net wrote:
> There are basically three levels of transport that can be used on a
> local machine:
>
>   1. Hard-linking (very fast, no redundancy).
>
>   2. Byte-for-byte copy (medium speed, makes a separate copy of the
>      data, but does not check the integrity of the original).
>
>   3. Regular git transport, creating a pack (slowest, but should
>      include redundancy checks).
>
> Using --no-hardlinks turns off (1), but leaves (2) as an option.  I
> think the documentation in git clone could use some improvement in
> that area.

Not only git-clone.  How git-fetch and git-push verify the new pack
should also be documented.  I don't think many people outside the
contributor circle know what is done (and maybe how) when data is
received from outside.
-- 
Duy
[PATCH 1/2] Make core.sharedRepository work under cygwin 1.7
When core.sharedRepository is used, set_shared_perm() in path.c needs
lstat() to return the correct POSIX permissions.  The default for
cygwin is core.ignoreCygwinFSTricks = false, which means that the fast
implementation in do_stat() is used instead of lstat().

lstat() under cygwin uses the Windows security model to implement
POSIX-like permissions: the user, group or everyone bits can be set
individually.  do_stat() simplifies the file permission bits, and may
return a wrong value: the read-only attribute of a file is used to
calculate the permissions, resulting in either rw-r--r-- or
r--r--r--.  One effect of the simplified do_stat() is that t1301
fails.

Add a function cygwin_get_st_mode_bits() which returns the POSIX
permissions.  When not compiling for cygwin, true_mode_bits() in
path.c is used.

Side note: t1301 passes under cygwin 1.5, where the user write bit is
synchronized with the read-only attribute of a file:

  $ chmod 444 x
  $ attrib x
  A    R     C:\temp\pt\x

cygwin 1.7 would show

  A          C:\temp\pt\x

Signed-off-by: Torsten Bögershausen tbo...@web.de
---
 compat/cygwin.c   | 13 +++++++++++++
 compat/cygwin.h   |  5 +++++
 git-compat-util.h |  1 +
 path.c            | 20 +++++++++++++++++---
 4 files changed, 36 insertions(+), 3 deletions(-)

diff --git a/compat/cygwin.c b/compat/cygwin.c
index 5428858..871b41d 100644
--- a/compat/cygwin.c
+++ b/compat/cygwin.c
@@ -1,3 +1,4 @@
+#define CYGWIN_C
 #define WIN32_LEAN_AND_MEAN
 #ifdef CYGWIN_V15_WIN32API
 #include "../git-compat-util.h"
@@ -10,6 +11,18 @@
 #endif
 #include "../cache.h" /* to read configuration */

+/*
+ * Return POSIX permission bits, regardless of core.ignorecygwinfstricks
+ */
+int cygwin_get_st_mode_bits(const char *path, int *mode)
+{
+	struct stat st;
+	if (lstat(path, &st) < 0)
+		return -1;
+	*mode = st.st_mode;
+	return 0;
+}
+
 static inline void filetime_to_timespec(const FILETIME *ft,
					struct timespec *ts)
 {
	long long winTime = ((long long)ft->dwHighDateTime << 32) +
diff --git a/compat/cygwin.h b/compat/cygwin.h
index a3229f5..c04965a 100644
--- a/compat/cygwin.h
+++ b/compat/cygwin.h
@@ -4,6 +4,11 @@ typedef int (*stat_fn_t)(const char*, struct stat*);
 extern stat_fn_t cygwin_stat_fn;
 extern stat_fn_t cygwin_lstat_fn;
+int cygwin_get_st_mode_bits(const char *path, int *mode);
+#define get_st_mode_bits(p,m) cygwin_get_st_mode_bits((p),(m))
+#ifndef CYGWIN_C
+/* cygwin.c needs the original lstat() */
 #define stat(path, buf) (*cygwin_stat_fn)(path, buf)
 #define lstat(path, buf) (*cygwin_lstat_fn)(path, buf)
+#endif
diff --git a/git-compat-util.h b/git-compat-util.h
index 90e0372..cde442f 100644
--- a/git-compat-util.h
+++ b/git-compat-util.h
@@ -163,6 +163,7 @@ typedef long intptr_t;
 typedef unsigned long uintptr_t;
 #endif

+int get_st_mode_bits(const char *path, int *mode);
 #if defined(__CYGWIN__)
 #undef _XOPEN_SOURCE
 #include grp.h
diff --git a/path.c b/path.c
index d3d3f8b..2fdccc2 100644
--- a/path.c
+++ b/path.c
@@ -14,6 +14,22 @@
 #include "strbuf.h"
 #include "string-list.h"

+#ifndef get_st_mode_bits
+/*
+ * The replacement lstat(2) we use on Cygwin is incomplete and
+ * may return wrong permission bits. Most of the time we do not care,
+ * but the callsites of this wrapper do care.
+ */
+int get_st_mode_bits(const char *path, int *mode)
+{
+	struct stat st;
+	if (lstat(path, &st) < 0)
+		return -1;
+	*mode = st.st_mode;
+	return 0;
+}
+#endif
+
 static char bad_path[] = "/bad-path/";

 static char *get_pathname(void)
@@ -391,7 +407,6 @@ const char *enter_repo(const char *path, int strict)

 int set_shared_perm(const char *path, int mode)
 {
-	struct stat st;
 	int tweak, shared, orig_mode;

 	if (!shared_repository) {
@@ -400,9 +415,8 @@ int set_shared_perm(const char *path, int mode)
 		return 0;
 	}
 	if (!mode) {
-		if (lstat(path, &st) < 0)
+		if (get_st_mode_bits(path, &mode) < 0)
 			return -1;
-		mode = st.st_mode;
 		orig_mode = mode;
 	} else
 		orig_mode = 0;
--
1.8.2.341.g543621f
Re: [PATCH v2 1/4] Move commit GPG signature verification to commit.c
Sebastian Götte ja...@physik.tu-berlin.de writes:

> Signed-off-by: Sebastian Götte ja...@physik-pool.tu-berlin.de
> ---
>  commit.c        | 54 ++++++++++++++++++++++++++++
>  commit.h        |  9 +++++
>  gpg-interface.h |  6 ++++
>  pretty.c        | 67 +++------------------------------
>  4 files changed, 74 insertions(+), 62 deletions(-)
>
> diff --git a/commit.c b/commit.c
> index e8eb0ae..d0d9135 100644
> --- a/commit.c
> +++ b/commit.c
> @@ -1023,6 +1023,60 @@ free_return:
> 	free(buf);
>  }
>
> +static struct {
> +	char result;
> +	const char *check;
> +} signature_check[] = {
> +	{ 'G', ": Good signature from " },
> +	{ 'B', ": BAD signature from " },
> +};

This seems to be based on the old codebase.  4a868fd655a7 (pretty:
parse the gpg status lines rather than the output, 2013-02-14) is
already in 'master' for this cycle, and it is likely that we would
want to have the same fix for the 1.8.1.x maintenance track as well.
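The direction 4a868fd655a7 took — matching gpg's machine-readable `[GNUPG:]` status lines instead of the localized human output — can be illustrated in a few lines of shell. The status text below is simulated (a real run would read it from `gpg --verify --status-fd=1`), and the key id and uid are made up:

```shell
# Classify a verification the way the status-line parser does:
# stable "[GNUPG:] GOODSIG/BADSIG" tokens, not human-readable prose.
status='[GNUPG:] NEWSIG
[GNUPG:] GOODSIG B0B1B2B3B4B5B6B7 C O Mitter <committer@example.com>
[GNUPG:] TRUST_ULTIMATE'

case "$status" in
*'[GNUPG:] GOODSIG '*) result=G ;;
*'[GNUPG:] BADSIG '*)  result=B ;;
*)                     result=U ;;
esac
echo "$result"
```

Because the status tokens are part of gpg's documented machine interface, this classification keeps working regardless of the user's locale, which is exactly what the string table above could not guarantee.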
Re: propagating repo corruption across clone
On Mon, Mar 25, 2013 at 10:31:04PM +0700, Nguyen Thai Ngoc Duy wrote:
> On Mon, Mar 25, 2013 at 9:56 PM, Jeff King p...@peff.net wrote:
>> There are basically three levels of transport that can be used on a
>> local machine:
>>
>>   1. Hard-linking (very fast, no redundancy).
>>
>>   2. Byte-for-byte copy (medium speed, makes a separate copy of the
>>      data, but does not check the integrity of the original).
>>
>>   3. Regular git transport, creating a pack (slowest, but should
>>      include redundancy checks).
>>
>> Using --no-hardlinks turns off (1), but leaves (2) as an option.  I
>> think the documentation in git clone could use some improvement in
>> that area.
>
> Not only git-clone.  How git-fetch and git-push verify the new pack
> should also be documented.  I don't think many people outside the
> contributor circle know what is done (and maybe how) when data is
> received from outside.

I think it's less of a documentation issue there, though, because they
_only_ do (3).  There is no option to do anything else, so there is
nothing to warn the user about in terms of tradeoffs.

I agree that in general git's handling of corruption could be
documented somewhere, but I'm not sure where.

-Peff
[PATCH 2/2] optimize set_shared_perm()
optimize set_shared_perm() in path.c:

- Sometimes the chown() function is called even when not needed.
  (This can be provoked by running t1301 and adding some debug code.)
  Save a chmod from 400 to 400, or from 600 to 600, on these files:
    .git/info/refs+
    .git/objects/info/packs+
  Save a chmod on directories from 2770 to 2770:
    .git/refs
    .git/refs/heads
    .git/refs/tags

- As all callers use mode == 0 when calling set_shared_perm(), the
  function can be simplified.

- All callers use the macro adjust_shared_perm(path) from cache.h.
  Convert adjust_shared_perm() from a macro into a function prototype.

Signed-off-by: Torsten Bögershausen tbo...@web.de
---
 cache.h |  3 +--
 path.c  | 71 ++++++++++++++++++++++++++++++-----------------------
 2 files changed, 39 insertions(+), 35 deletions(-)

diff --git a/cache.h b/cache.h
index 59e5b53..65a9db7 100644
--- a/cache.h
+++ b/cache.h
@@ -727,8 +727,7 @@ enum sharedrepo {
 	PERM_EVERYBODY = 0664
 };
 int git_config_perm(const char *var, const char *value);
-int set_shared_perm(const char *path, int mode);
-#define adjust_shared_perm(path) set_shared_perm((path), 0)
+int adjust_shared_perm(const char *path);
 int safe_create_leading_directories(char *path);
 int safe_create_leading_directories_const(const char *path);
 int mkdir_in_gitdir(const char *path);
diff --git a/path.c b/path.c
index 2fdccc2..4bc918a 100644
--- a/path.c
+++ b/path.c
@@ -1,14 +1,5 @@
 /*
- * I'm tired of doing "vsnprintf()" etc just to open a
- * file, so here's a "return static buffer with printf"
- * interface for paths.
- *
- * It's obviously not thread-safe. Sue me. But it's quite
- * useful for doing things like
- *
- *   f = open(mkpath("%s/%s.git", base, name), O_RDONLY);
- *
- * which is what it's designed for.
+ * Different utility functions for paths and path names
 */
 #include "cache.h"
 #include "strbuf.h"
@@ -105,6 +96,13 @@ char *git_pathdup(const char *fmt, ...)
 	return xstrdup(ret);
 }

+/*
+ * I'm tired of doing "vsnprintf()" etc just to open a
+ * file, so here's an interface for paths.
+ *
+ *   f = open(mkpath("%s/%s.git", base, name), O_RDONLY);
+ *
+ */
 char *mkpathdup(const char *fmt, ...)
 {
 	char *path;
@@ -405,22 +403,13 @@ const char *enter_repo(const char *path, int strict)
 	return NULL;
 }

-int set_shared_perm(const char *path, int mode)
+static int calc_shared_perm(int mode)
 {
-	int tweak, shared, orig_mode;
+	int tweak, shared;

-	if (!shared_repository) {
-		if (mode)
-			return chmod(path, mode & ~S_IFMT);
-		return 0;
-	}
-	if (!mode) {
-		if (get_st_mode_bits(path, &mode) < 0)
-			return -1;
-		orig_mode = mode;
-	} else
-		orig_mode = 0;
-	if (shared_repository < 0)
+	if (!shared_repository)
+		return mode;
+	else if (shared_repository < 0)
 		shared = -shared_repository;
 	else
 		shared = shared_repository;
@@ -436,16 +425,32 @@ int set_shared_perm(const char *path, int mode)
 	else
 		mode |= tweak;

-	if (S_ISDIR(mode)) {
-		/* Copy read bits to execute bits */
-		mode |= (shared & 0444) >> 2;
-		mode |= FORCE_DIR_SET_GID;
-	}
+	return mode;
+}
+
+static int calc_shared_perm_dir(int mode)
+{
+	mode = calc_shared_perm(mode);
+
+	/* Copy read bits to execute bits */
+	mode |= (mode & 0444) >> 2;
+	mode |= FORCE_DIR_SET_GID;
+	return mode;
+}
+
+int adjust_shared_perm(const char *path)
+{
+	int old_mode, new_mode;
+
+	if (get_st_mode_bits(path, &old_mode) < 0)
+		return -1;
+
+	if (S_ISDIR(old_mode))
+		new_mode = calc_shared_perm_dir(old_mode);
+	else
+		new_mode = calc_shared_perm(old_mode);

-	if (((shared_repository < 0
-	      ? (orig_mode & (FORCE_DIR_SET_GID | 0777))
-	      : (orig_mode & mode)) != mode) &&
-	    chmod(path, (mode & ~S_IFMT)) < 0)
+	if (((old_mode ^ new_mode) & ~S_IFMT) &&
+	    chmod(path, (new_mode & ~S_IFMT)) < 0)
 		return -2;
 	return 0;
 }
--
1.8.2.341.g543621f
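The behaviour adjust_shared_perm() exists to produce can be observed from the command line: in a repository initialized with `--shared=group`, files git writes come out group-writable even under a normal umask. A throwaway-repo sketch; the exact numeric mode depends on the umask, and GNU `stat` is assumed for the check:

```shell
# Observe the group write bit that the shared-permission code sets.
set -e
cd "$(mktemp -d)"
git init -q --shared=group repo
git -C repo -c user.name=A -c user.email=a@example.com \
	commit -q --allow-empty -m seed

# The loose ref the commit created should carry g+w (e.g. 664 with umask 022).
mode=$(stat -c '%a' repo/.git/refs/heads/*)
echo "$mode"
```

This is the externally visible effect that t1301 guards, and that both patches in this series are careful not to change.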
Re: [PATCH 2/2] t5520 (pull): use test_config where appropriate
Ramkumar Ramachandra <artag...@gmail.com> writes:

 Configuration from test_config does not last beyond the end of the
 current test assertion, making each test easier to think about in
 isolation.

 Signed-off-by: Ramkumar Ramachandra <artag...@gmail.com>
 ---
  t/t5520-pull.sh | 22 +++++++++---------
  1 file changed, 9 insertions(+), 13 deletions(-)

 diff --git a/t/t5520-pull.sh b/t/t5520-pull.sh
 index e5adee8..0fe935b 100755
 --- a/t/t5520-pull.sh
 +++ b/t/t5520-pull.sh
 @@ -60,8 +60,8 @@ test_expect_success 'pulling into void does not overwrite untracked files' '
  test_expect_success 'test . as a remote' '
  	git branch copy master &&
 -	git config branch.copy.remote . &&
 -	git config branch.copy.merge refs/heads/master &&
 +	test_config branch.copy.remote . &&
 +	test_config branch.copy.merge refs/heads/master &&
  	echo updated >file &&
  	git commit -a -m updated &&
  	git checkout copy &&

I am not sure if this makes sense. The "copy" branch this test piece
creates is used throughout the remainder of the test, and these
configuration variables establish a known default for the later
tests, in which various forms of the "git pull" command omit from
where and which branch to pull. It feels actively wrong to discard
that information after this test piece is done.

 @@ -96,8 +96,7 @@ test_expect_success '--rebase' '
  '
  test_expect_success 'pull.rebase' '
  	git reset --hard before-rebase &&
 -	git config --bool pull.rebase true &&
 -	test_when_finished "git config --unset pull.rebase" &&
 +	test_config pull.rebase true &&
  	git pull . copy &&
  	test "$(git rev-parse HEAD^)" = "$(git rev-parse copy)" &&
  	test new = "$(git show HEAD:file2)"
 @@ -105,8 +104,7 @@ test_expect_success 'pull.rebase' '
  test_expect_success 'branch.to-rebase.rebase' '
  	git reset --hard before-rebase &&
 -	git config --bool branch.to-rebase.rebase true &&
 -	test_when_finished "git config --unset branch.to-rebase.rebase" &&
 +	test_config branch.to-rebase.rebase true &&
  	git pull . copy &&
  	test "$(git rev-parse HEAD^)" = "$(git rev-parse copy)" &&
  	test new = "$(git show HEAD:file2)"
 @@ -114,10 +112,8 @@ test_expect_success 'branch.to-rebase.rebase' '
  test_expect_success 'branch.to-rebase.rebase should override pull.rebase' '
  	git reset --hard before-rebase &&
 -	git config --bool pull.rebase true &&
 -	test_when_finished "git config --unset pull.rebase" &&
 -	git config --bool branch.to-rebase.rebase false &&
 -	test_when_finished "git config --unset branch.to-rebase.rebase" &&
 +	test_config pull.rebase true &&
 +	test_config branch.to-rebase.rebase false &&
  	git pull . copy &&
  	test "$(git rev-parse HEAD^)" != "$(git rev-parse copy)" &&
  	test new = "$(git show HEAD:file2)"
 @@ -171,9 +167,9 @@ test_expect_success 'pull --rebase dies early with dirty working directory' '
  	git update-ref refs/remotes/me/copy copy^ &&
  	COPY=$(git rev-parse --verify me/copy) &&
  	git rebase --onto $COPY copy &&
 -	git config branch.to-rebase.remote me &&
 -	git config branch.to-rebase.merge refs/heads/copy &&
 -	git config branch.to-rebase.rebase true &&
 +	test_config branch.to-rebase.remote me &&
 +	test_config branch.to-rebase.merge refs/heads/copy &&
 +	test_config branch.to-rebase.rebase true &&
  	echo dirty >>file &&
  	git add file &&
  	test_must_fail git pull
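For readers unfamiliar with the helper under discussion: test_config sets a value now and queues an undo via test_when_finished, which is what makes the configuration vanish at the end of the test. A minimal standalone sketch of that mechanism follows; the real helpers live in git's t/test-lib-functions.sh and operate on git config, whereas here the "store" is just a file.

```shell
# Sketch of the test_config idea: set a value immediately, and queue a
# command that removes it again when the current test finishes.
cleanup=:

test_when_finished () {
	cleanup="$*; $cleanup"
}

test_config () {
	test_when_finished "grep -v \"^$1=\" store >store.new; mv store.new store"
	echo "$1=$2" >>store
}

: >store
test_config pull.rebase true
grep '^pull.rebase=' store             # visible while the test runs
eval "$cleanup"                        # simulate the end of the test
grep '^pull.rebase=' store || echo "unset again"
```

This is exactly why Junio's objection matters: configuration that later tests depend on must not go through test_config, or it disappears before they run.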
Re: [PATCH 8/4] match-trees: drop x = x initializations
René Scharfe <rene.scha...@lsrfire.ath.cx> writes:

 Am 24.03.2013 05:55, schrieb Junio C Hamano:

 So I like your change for readability, but for GCC 4.4.5 we still
 need the unnecessary initialization.

 Hrm, perhaps we can make it even simpler for the compiler.

And the result is even simpler for human readers, I'd have to say.

 I'm a bit uneasy about this one because we lack proper tests for
 this code and I don't know how to write ones off the bat.

This looks pretty much a straight-forward equivalent rewrite from
your earlier one, which was also an obvious equivalent to the
original, at least to me. The first four lines in the original were
made into two tree_entry() calls (what a useful helper we haven't
been using!) and that allows us to lose explicit update_tree_entry()
calls.

  match-trees.c | 68 ---
  1 file changed, 28 insertions(+), 40 deletions(-)

 diff --git a/match-trees.c b/match-trees.c
 index 26f7ed1..2bb734d 100644
 --- a/match-trees.c
 +++ b/match-trees.c
 @@ -47,6 +47,13 @@ static int score_matches(unsigned mode1, unsigned mode2, const char *path)
  	return score;
  }
 
 +static int base_name_entries_compare(const struct name_entry *a,
 +				     const struct name_entry *b)
 +{
 +	return base_name_compare(a->path, tree_entry_len(a), a->mode,
 +				 b->path, tree_entry_len(b), b->mode);
 +}
 +
  /*
   * Inspect two trees, and give a score that tells how similar they are.
   */
 @@ -71,54 +78,35 @@ static int score_trees(const unsigned char *hash1, const unsigned char *hash2)
  	if (type != OBJ_TREE)
  		die("%s is not a tree", sha1_to_hex(hash2));
  	init_tree_desc(&two, two_buf, size);
 -	while (one.size | two.size) {
 -		const unsigned char *elem1 = elem1;
 -		const unsigned char *elem2 = elem2;
 -		const char *path1 = path1;
 -		const char *path2 = path2;
 -		unsigned mode1 = mode1;
 -		unsigned mode2 = mode2;
 +	for (;;) {
 +		struct name_entry e1, e2;
 +		int got_entry_from_one = tree_entry(&one, &e1);
 +		int got_entry_from_two = tree_entry(&two, &e2);
  		int cmp;
 
 -		if (one.size)
 -			elem1 = tree_entry_extract(&one, &path1, &mode1);
 -		if (two.size)
 -			elem2 = tree_entry_extract(&two, &path2, &mode2);
 -
 -		if (!one.size) {
 -			/* two has more entries */
 -			score += score_missing(mode2, path2);
 -			update_tree_entry(&two);
 -			continue;
 -		}
 -		if (!two.size) {
 +		if (got_entry_from_one && got_entry_from_two)
 +			cmp = base_name_entries_compare(&e1, &e2);
 +		else if (got_entry_from_one)
  			/* two lacks this entry */
 -			score += score_missing(mode1, path1);
 -			update_tree_entry(&one);
 -			continue;
 -		}
 -		cmp = base_name_compare(path1, strlen(path1), mode1,
 -					path2, strlen(path2), mode2);
 -		if (cmp < 0) {
 +			cmp = -1;
 +		else if (got_entry_from_two)
 +			/* two has more entries */
 +			cmp = 1;
 +		else
 +			break;
 +
 +		if (cmp < 0)
  			/* path1 does not appear in two */
 -			score += score_missing(mode1, path1);
 -			update_tree_entry(&one);
 -			continue;
 -		}
 -		else if (cmp > 0) {
 +			score += score_missing(e1.mode, e1.path);
 +		else if (cmp > 0)
  			/* path2 does not appear in one */
 -			score += score_missing(mode2, path2);
 -			update_tree_entry(&two);
 -			continue;
 -		}
 -		else if (hashcmp(elem1, elem2))
 +			score += score_missing(e2.mode, e2.path);
 +		else if (hashcmp(e1.sha1, e2.sha1))
  			/* they are different */
 -			score += score_differs(mode1, mode2, path1);
 +			score += score_differs(e1.mode, e2.mode, e1.path);
  		else
  			/* same subtree or blob */
 -			score += score_matches(mode1, mode2, path1);
 -		update_tree_entry(&one);
 -		update_tree_entry(&two);
 +			score += score_matches(e1.mode, e2.mode, e1.path);
  	}
  	free(one_buf);
  	free(two_buf);
Re: [PATCH v2 2/3] t7800: fix tests when difftool uses --no-symlinks
Johannes Sixt <j.s...@viscovery.net> writes:

 Am 3/24/2013 16:15, schrieb John Keeping:
 ...
 +for my $file (keys %worktree) {
  	next if $symlinks && -l "$b/$file";
  	next if ! -f "$b/$file";
 -	my $diff = compare("$b/$file", "$workdir/$file");
 -	if ($diff == 0) {
 -		next;
 -	} elsif ($diff == -1) {
 -		my $errmsg = "warning: Could not compare ";
 -		$errmsg .= "'$b/$file' with '$workdir/$file'\n";
 +	my $wt_hash = $repo->command_oneline('hash-object',
 +					     "$workdir/$file");
 +	my $tmp_hash = $repo->command_oneline('hash-object',
 +					     "$b/$file");

This is gross. Can't we do much better here? Difftool already keeps a
GIT_INDEX of the files in the temporary tree ($tmpdir/rindex). Running
git-diff-files should be sufficient to tell which ones were edited
via the user's diff-tool. Then you can restrict calling hash-object
to only those worktree files where an edit collision needs to be
checked for. ;-)
Re: [PATCH v2] checkout: avoid unnecessary match_pathspec calls
Nguyễn Thái Ngọc Duy <pclo...@gmail.com> writes:

 In checkout_paths() we do this

 - for all updated items, call match_pathspec
 - for all items, call match_pathspec (inside unmerge_cache)
 - for all items, call match_pathspec (for showing "path .. is unmerged")
 - for updated items, call match_pathspec and update paths

 That's a lot of duplicate match_pathspec(s) and the function is not
 exactly cheap to be called so many times, especially on large
 indexes. This patch makes it call match_pathspec once per updated
 index entry, save the result in ce_flags and reuse the results in
 the following loops.

 The changes in 0a1283b (checkout $tree $path: do not clobber local
 changes in $path not in $tree - 2011-09-30) limit the affected paths
 to ones we read from $tree. We do not do anything to other modified
 entries in this case, so the "for all items" above could be modified
 to "for all updated items". But..

 The command's behavior now is modified slightly: unmerged entries
 that match $path, but not updated by $tree, are now NOT touched.
 Although this should be considered a bug fix, not a regression.

Could we have a test to show the difference please, especially if we
are going to sell this as a "fix"?

The change itself looks quite sane to me (I didn't apply or test it,
though---just eyeballing).

Thanks.

 And while at it, free ps_matched after use.

 The following command is tested on webkit, 215k entries. The pattern
 is chosen mainly to make match_pathspec sweat:

 git checkout -- "*[a-zA-Z]*[a-zA-Z]*[a-zA-Z]*"

         before      after
 real    0m3.493s    0m2.737s
 user    0m2.239s    0m1.586s
 sys     0m1.252s    0m1.151s

 Signed-off-by: Nguyễn Thái Ngọc Duy <pclo...@gmail.com>
 ---

 and worry about larger ones later, so if there were another choice,
 i.e.

 - eject nd/magic-pathspecs for now, cook this (and other small
   independent and clear improvements you may come up with, some of
   which might come out of nd/magic-pathspecs itself) and let it
   graduate first, and later revisit rerolled nd/magic-pathspecs

 that would be the ideal among the given choices ;-).

 Whichever is easier for you.

 The above is a faithful rewrite, but I have to wonder why you need
 two separate loops. Do you understand what the original loop is
 doing with ps_matched, and why the code excludes certain paths while
 doing so?

 Nope, I did not dig that deep. I expected you to do it ;-) j/k

 After commenting on the above, it makes me wonder if we even need to
 bother marking entries that were in the index that did not come from
 the tree-ish we are checking paths out of, though. What breaks if
 you did not do the rewrite above and dropped the second loop in your
 patch?

 The test suite says none. There is a behavior change regarding
 unmerged entries as mentioned in the commit message. But I think
 it's a good change.

  builtin/checkout.c | 34 +++---
  cache.h            |  1 +
  resolve-undo.c     | 19 ++-
  resolve-undo.h     |  1 +
  4 files changed, 47 insertions(+), 8 deletions(-)

 diff --git a/builtin/checkout.c b/builtin/checkout.c
 index a9c1b5a..359b983 100644
 --- a/builtin/checkout.c
 +++ b/builtin/checkout.c
 @@ -271,24 +271,46 @@ static int checkout_paths(const struct checkout_opts *opts,
  		;
  	ps_matched = xcalloc(1, pos);
 
 +	/*
 +	 * Make sure all pathspecs participated in locating the paths
 +	 * to be checked out.
 +	 */
  	for (pos = 0; pos < active_nr; pos++) {
  		struct cache_entry *ce = active_cache[pos];
 +		ce->ce_flags &= ~CE_MATCHED;
  		if (opts->source_tree && !(ce->ce_flags & CE_UPDATE))
 +			/*
 +			 * "git checkout tree-ish -- path", but this entry
 +			 * is in the original index; it will not be checked
 +			 * out to the working tree and it does not matter
 +			 * if pathspec matched this entry. We will not do
 +			 * anything to this entry at all.
 +			 */
  			continue;
 -		match_pathspec(opts->pathspec, ce->name, ce_namelen(ce), 0, ps_matched);
 +		/*
 +		 * Either this entry came from the tree-ish we are
 +		 * checking the paths out of, or we are checking out
 +		 * of the index.
 +		 */
 +		if (match_pathspec(opts->pathspec, ce->name, ce_namelen(ce),
 +				   0, ps_matched))
 +			ce->ce_flags |= CE_MATCHED;
  	}
 
 -	if (report_path_error(ps_matched, opts->pathspec, opts->prefix))
 +	if (report_path_error(ps_matched, opts->pathspec, opts->prefix)) {
 +		free(ps_matched);
  		return 1;
 +	}
 +	free(ps_matched);
 
  	/* "checkout -m path" to recreate conflicted state */
  	if (opts->merge)
-
Re: propagating repo corruption across clone
On Mon, Mar 25, 2013 at 11:56 AM, Jeff King <p...@peff.net> wrote:

 On Mon, Mar 25, 2013 at 10:31:04PM +0700, Nguyen Thai Ngoc Duy wrote:

 On Mon, Mar 25, 2013 at 9:56 PM, Jeff King <p...@peff.net> wrote:

 There are basically three levels of transport that can be used on a
 local machine:

 1. Hard-linking (very fast, no redundancy).

 2. Byte-for-byte copy (medium speed, makes a separate copy of the
    data, but does not check the integrity of the original).

 3. Regular git transport, creating a pack (slowest, but should
    include redundancy checks).

 Using --no-hardlinks turns off (1), but leaves (2) as an option.

 I think the documentation in git clone could use some improvement in
 that area.

 Not only git-clone. How git-fetch and git-push verify the new pack
 should also be documented. I don't think many people outside the
 contributor circle know what is done (and maybe how) when data is
 received from outside.

 I think it's less of a documentation issue there, though, because
 they _only_ do (3). There is no option to do anything else, so there
 is nothing to warn the user about in terms of tradeoffs. I agree
 that in general git's handling of corruption could be documented
 somewhere, but I'm not sure where.

Hi there,

First of all, thanks for the analysis, it's much appreciated. It's
good to know that we weren't totally off-base in thinking that a
naive copy may be out of sync, as small as the chances are (certainly
we wouldn't have known the right ordering).

I think what was confusing the issue in my testing is that --mirror
implies --bare, so the object checking that happens while a working
tree is created never runs; hence --mirror won't show the error a
normal clone will. It's not a transport question, it's just a matter
of the normal clone doing more and so having more data run through
checks.

However, there are still problems. For blob corruptions, even in this
--no-hardlinks, non --mirror case where an error was found, the exit
code from the clone was 0. I can see this tripping up all sorts of
automated scripts or repository GUIs that ignore the output and only
check the error code, which is not an unreasonable thing to do.

For commit corruptions, the --no-hardlinks, non --mirror case refused
to create the new repository and exited with an error code of 128.
The --no-hardlinks, --mirror case spewed errors to the console, yet
*still* created the new clone *and* returned an error code of zero.

It seems that when there is an "error" as opposed to a "fatal" it
doesn't affect the status code on a clone; I'd argue that it ought
to. If Git knows that the source repository has problems, it ought to
be reflected in the status code so that scripts performing clones
have a normal way to detect this and alert a
user/sysadmin/whoever. Even if a particular cloning method doesn't
perform all sanity checks, if it finds something in the sanity checks
it *does* perform, this should be trumpeted, loudly, regardless of
transport mechanism and regardless of whether a user is watching the
process or a script is.

Thanks,
Jeff
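The thread's practical upshot, that a local clone's exit code alone is not a trustworthy corruption signal, suggests verifying explicitly. A self-contained sketch against a scratch repository (in real use, "src" would be the repository being cloned):

```shell
# Do not trust a local clone's exit status alone; run fsck on the result.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q src
git -C src -c user.name=demo -c user.email=demo@example.com \
	commit -q --allow-empty -m initial

git clone -q --no-hardlinks src dst   # byte-for-byte copy, no object checks
if git -C dst fsck --full
then
	echo "clone verified"
else
	echo >&2 "clone completed but the object store is damaged"
	exit 1
fi
```

Scripts that only test `$?` from `git clone` would miss exactly the blob-corruption case described above; the explicit fsck closes that gap.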
Re: [PATCH v2 3/3] git-merge-one-file: revise merge error reporting
Kevin Bracey <ke...@bracey.fi> writes:

 Commit 718135e improved the merge error reporting for the "resolve"
 strategy's "merge conflict" and "permission conflict" cases, but led
 to a malformed "ERROR: in myfile.c" message in the case of a file
 added differently. This commit reverts that change, and uses an
 alternative approach without this flaw.

 Signed-off-by: Kevin Bracey <ke...@bracey.fi>

We used to treat "Both added differently" as a separate info message,
just like the "Auto-merging" message, and let "content conflict" that
is an error to happen naturally by doing such a merge, possibly
followed by "permission conflict" which is another kind of error. We
coalesced these two into a single message. And this patch breaks them
into separate messages. I am not sure if that aspect of the change is
desirable.

The source of the malformed message seems suspicious. Isn't the root
cause of $msg being empty that merge-file can (sometimes) cleanly
merge two files using the phoney base in the "both added differently"
codepath?

If you resolve that issue by forcing a conflicted failure when we
handle add/add conflict, I think the behaviour of the remainder of
the code is better in the original than the updated one.

Perhaps something like this (I am applying these on 'maint')?

 git-merge-one-file.sh | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/git-merge-one-file.sh b/git-merge-one-file.sh
index 25d7714..aa06282 100755
--- a/git-merge-one-file.sh
+++ b/git-merge-one-file.sh
@@ -107,6 +107,7 @@ case "${1:-.}${2:-.}${3:-.}" in
 		;;
 	esac
 
+	add_add_conflict=
 	src2=`git-unpack-file $3`
 	case "$1" in
 	'')
@@ -121,6 +122,7 @@ case "${1:-.}${2:-.}${3:-.}" in
 		# If we do not have enough common material, it is not
 		# worth trying two-file merge using common subsections.
 		expr $sz0 \< $sz1 \* 2 >/dev/null || : >$orig
+		add_add_conflict=yes
 		;;
 	*)
 		echo "Auto-merging $4"
@@ -128,15 +130,13 @@ case "${1:-.}${2:-.}${3:-.}" in
 		;;
 	esac
 
-	# Be careful for funny filename such as "-L" in "$4", which
-	# would confuse "merge" greatly.
 	src1=`git-unpack-file $2`
-	git merge-file "$src1" "$orig" "$src2"
-	ret=$?
-	msg=
-	if test $ret != 0
+
+	ret=0 msg=
+	if ! git merge-file "$src1" "$orig" "$src2" || test -n "$add_add_conflict"
 	then
 		msg='content conflict'
+		ret=1
 	fi
 
 	# Create the working tree file, using "our tree" version from the
Re: [PATCH v2 3/3] git-merge-one-file: revise merge error reporting
Junio C Hamano <gits...@pobox.com> writes:

 Kevin Bracey <ke...@bracey.fi> writes:

  Commit 718135e improved the merge error reporting for the "resolve"
  strategy's "merge conflict" and "permission conflict" cases, but
  led to a malformed "ERROR: in myfile.c" message in the case of a
  file added differently. This commit reverts that change, and uses
  an alternative approach without this flaw.

  Signed-off-by: Kevin Bracey <ke...@bracey.fi>

 We used to treat "Both added differently" as a separate info
 message, just like the "Auto-merging" message, and let "content
 conflict" that is an error to happen naturally by doing such a
 merge, possibly followed by "permission conflict" which is another
 kind of error. We coalesced these two into a single message. And
 this patch breaks them into separate messages. I am not sure if that
 aspect of the change is desirable.

 The source of the malformed message seems suspicious. Isn't the root
 cause of $msg being empty that merge-file can (sometimes) cleanly
 merge two files using the phoney base in the "both added
 differently" codepath?

 If you resolve that issue by forcing a conflicted failure when we
 handle add/add conflict, I think the behaviour of the remainder of
 the code is better in the original than the updated one.

 Perhaps something like this (I am applying these on 'maint')?

Actually, this one is even better, I think. Again on top of your two
patches applied on 'maint'.

Alternatively, we can remove the whole "if $1 is empty, error the
merge out" logic, which would be more in line with the spirit of
f7d24bbefb06 (merge with /dev/null as base, instead of punting
O==empty case, 2005-11-07), but that will be a change in behaviour (a
"both sides added, slightly differently" case that can cleanly merge
will no longer fail), so I am not sure if it is worth it.

-- >8 --
Subject: [PATCH] merge-one-file: force content conflict for "both side added" case

Historically, we tried to be lenient to "both side added, slightly
differently" case and as long as the files can be merged using a
made-up common ancestor cleanly, since f7d24bbefb06 (merge with
/dev/null as base, instead of punting O==empty case, 2005-11-07).
This was later further refined to use a better made-up common file
with fd66dbf5297a (merge-one-file: use empty- or common-base
condintionally in two-stage merge., 2005-11-10), but the spirit has
been the same.

But the original fix in f7d24bbefb06 to avoid punting on the "both
sides added" case had code to unconditionally error out the merge.
When this triggers, even though the content-level merge can be done
cleanly, we end up not saying "content conflict" in the message, but
still issue the error message, showing "ERROR: in <pathname>".

Signed-off-by: Junio C Hamano <gits...@pobox.com>
---
 git-merge-one-file.sh | 1 +
 1 file changed, 1 insertion(+)

diff --git a/git-merge-one-file.sh b/git-merge-one-file.sh
index 25d7714..62016f4 100755
--- a/git-merge-one-file.sh
+++ b/git-merge-one-file.sh
@@ -155,6 +155,7 @@ case "${1:-.}${2:-.}${3:-.}" in
 	fi
 	if test -z "$1"
 	then
+		msg='content conflict'
 		ret=1
 	fi
-- 
1.8.2-297-g51e0fcd
Re: [PATCH v2 3/3] git-merge-one-file: revise merge error reporting
Junio C Hamano <gits...@pobox.com> writes:

 Actually, this one is even better, I think. Again on top of your two
 patches applied on 'maint'.

Scratch that one. The "if test -z $1" block needs to be moved a bit
higher, like this (the log message can stay the same):

diff --git a/git-merge-one-file.sh b/git-merge-one-file.sh
index 62016f4..a4ecf33 100755
--- a/git-merge-one-file.sh
+++ b/git-merge-one-file.sh
@@ -134,9 +134,10 @@ case "${1:-.}${2:-.}${3:-.}" in
 	git merge-file "$src1" "$orig" "$src2"
 	ret=$?
 	msg=
-	if test $ret != 0
+	if test $ret != 0 || test -z "$1"
 	then
 		msg='content conflict'
+		ret=1
 	fi
 
 	# Create the working tree file, using "our tree" version from the
@@ -153,11 +154,6 @@ case "${1:-.}${2:-.}${3:-.}" in
 		msg="${msg}permissions conflict: $5->$6,$7"
 		ret=1
 	fi
-	if test -z "$1"
-	then
-		msg='content conflict'
-		ret=1
-	fi
 
 	if test $ret != 0
 	then
Re: [PATCH v4 2/2] mergetools/p4merge: create a base if none available
Kevin Bracey <ke...@bracey.fi> writes:

 Minor change from v3: that version moved initialisation of src1
 higher up, detaching it from its associated comment. This move was
 only required by earlier versions, so v4 leaves src1 in its original
 position.

The "funny filename" comment was from b539c5e8fbd3 (git-merge-one:
new merge world order., 2005-12-07), where the removed code just
before that new comment ended with:

	merge $4 $orig $src2

(yes, we used to use the "merge" program from the RCS suite). The
comment refers to one of the bad side effects the old code used to
have and warns against such a practice, i.e. it was talking about
code that no longer existed. I think the two-line comment should
simply go.

Given that, I _think_ it is OK to move the initialization of src1
next to that of src2; that may make the result easier to read.

Thanks.
Re: Why does 'submodule add' stage the relevant portions?
Ramkumar Ramachandra <artag...@gmail.com> writes:

 Git 2.0 is coming soon, so I'm excited about breaking a lot of
 backward compatibility ;)

Don't.

I lack the sense of humor normal people seem to have, so even with
the smiley, anybody making such a comment will be thrown into my "do
not pay attention to them" box ;-).
Re: [PATCH 1/3] contrib/subtree: stop explicitly using a bash shell
Paul Campbell <pcampb...@kemitix.net> writes:

 Don't explicitly use the Bash shell, but allow the system to provide
 a hopefully POSIX-compatible shell at /bin/sh.

 Signed-off-by: Paul Campbell <pcampb...@kemitix.net>
 ---
 The only system I was able to test this on (Debian squeeze) has dash
 as its /bin/sh.

  contrib/subtree/git-subtree.sh | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

 diff --git a/contrib/subtree/git-subtree.sh b/contrib/subtree/git-subtree.sh
 index 8a23f58..5701376 100755
 --- a/contrib/subtree/git-subtree.sh
 +++ b/contrib/subtree/git-subtree.sh
 @@ -1,4 +1,4 @@
 -#!/bin/bash
 +#!/bin/sh
  #
  # git-subtree.sh: split/join git repositories in subdirectories of this one
  #

Interesting. I'll leave the final "yeah, this is safe" declaration to
David and Avery, but I've always assumed without checking that this
script relied on bash-isms like "local" variable semantics, arrays,
regexp/substring variable substitutions, etc.

With a quick scan, however, I do not seem to find anything glaringly
unportable.
stuck and need unstuck (git checkout)
I was on a branch (local, tracking a remote), and I wanted to check
out a remote branch, so I did:

	$ git co myRemoteBranch

and got a message that a lot of jar files were being untracked (the
files were locked). I had a server running that held locks on some of
the jar files, so git could not update them and untracked them all.

What I want to do now is:

1. switch branches
2. delete this locally created branch
3. re-check out again so all will be well

I cannot switch branches because it says my untracked files will be
overwritten. How do I switch branches? I have no commits to make and
simply want to go back. git reset and git stash do not get me there.

thanks

J.V.
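One way out of the state described, demonstrated on a self-contained scratch repository so it can be run as-is; in the real repository the branch names would be "myRemoteBranch" and whatever branch you came from, and step 3 would check the branch out from the remote again. The key ingredient is `checkout -f`, which discards the conflicting working-tree changes.

```shell
set -e
cd "$(mktemp -d)"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
	commit -q --allow-empty -m initial
main=$(git symbolic-ref --short HEAD)   # master or main, depending on git
git checkout -q -b stuck                # the problematic local branch

git checkout -f "$main"   # 1. force the switch back, discarding changes
git branch -D stuck       # 2. delete the locally created branch
git checkout -q -b stuck  # 3. re-create and check it out (normally from the remote)
git symbolic-ref --short HEAD           # -> stuck
```

Note that `checkout -f` throws away local modifications for good, which in this situation (no commits to make, "simply want to go back") is exactly what is wanted; if any of the jar files mattered, they would need to be copied aside first.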
Re: Why does 'submodule add' stage the relevant portions?
Junio C Hamano wrote:

 Ramkumar Ramachandra <artag...@gmail.com> writes:

  Git 2.0 is coming soon, so I'm excited about breaking a lot of
  backward compatibility ;)

 Don't.

push.default is the necessary exception?
[PATCH v2 1/3] sha1_file: remove recursion in packed_object_info
packed_object_info() and packed_delta_info() were mutually recursive.
The former would handle ordinary types and defer deltas to the
latter; the latter would use the former to resolve the delta base.

This arrangement, however, leads to trouble with threaded index-pack
and long delta chains on platforms where thread stacks are small, as
happened on OS X (512kB thread stacks by default) with the chromium
repo.

The task of the two functions is not all that hard to describe
without any recursion, however. It proceeds in three steps:

- determine the representation type and size, based on the outermost
  object (delta or not)

- follow through the delta chain, if any

- determine the object type from what is found at the end of the
  delta chain

The only complication stems from the error recovery. If parsing fails
at any step, we want to mark that object (within the pack) as bad and
try getting the corresponding SHA1 from elsewhere. If that also
fails, we want to repeat this process back up the delta chain until
we find a reasonable solution or conclude that there is no way to
reconstruct the object. (This is conveniently checked by t5303.)

To achieve that within the pack, we keep track of the entire delta
chain in a stack. When things go sour, we process that stack from the
top, marking entries as bad and attempting to re-resolve by sha1. To
avoid excessive malloc(), the stack starts out with a small
stack-allocated array. The choice of 64 is based on the default of
pack.depth, which is 50, in the hope that it covers most delta chains
without any need for malloc().

It's much harder to make the actual re-resolving by sha1
nonrecursive, so we skip that. If you can't afford *that* recursion,
your corruption problems are more serious than your stack size
problems.
Reported-by: Stefan Zager <sza...@google.com>
Signed-off-by: Thomas Rast <tr...@student.ethz.ch>
---
 sha1_file.c | 135 +---
 1 file changed, 84 insertions(+), 51 deletions(-)

diff --git a/sha1_file.c b/sha1_file.c
index 40b2329..71877a7 100644
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -1560,50 +1560,6 @@ static off_t get_delta_base(struct packed_git *p,
 	return base_offset;
 }
 
-/* forward declaration for a mutually recursive function */
-static int packed_object_info(struct packed_git *p, off_t offset,
-			      unsigned long *sizep, int *rtype);
-
-static int packed_delta_info(struct packed_git *p,
-			     struct pack_window **w_curs,
-			     off_t curpos,
-			     enum object_type type,
-			     off_t obj_offset,
-			     unsigned long *sizep)
-{
-	off_t base_offset;
-
-	base_offset = get_delta_base(p, w_curs, &curpos, type, obj_offset);
-	if (!base_offset)
-		return OBJ_BAD;
-	type = packed_object_info(p, base_offset, NULL, NULL);
-	if (type <= OBJ_NONE) {
-		struct revindex_entry *revidx;
-		const unsigned char *base_sha1;
-		revidx = find_pack_revindex(p, base_offset);
-		if (!revidx)
-			return OBJ_BAD;
-		base_sha1 = nth_packed_object_sha1(p, revidx->nr);
-		mark_bad_packed_object(p, base_sha1);
-		type = sha1_object_info(base_sha1, NULL);
-		if (type <= OBJ_NONE)
-			return OBJ_BAD;
-	}
-
-	/* We choose to only get the type of the base object and
-	 * ignore potentially corrupt pack file that expects the delta
-	 * based on a base with a wrong size. This saves tons of
-	 * inflate() calls.
-	 */
-	if (sizep) {
-		*sizep = get_size_from_delta(p, w_curs, curpos);
-		if (*sizep == 0)
-			type = OBJ_BAD;
-	}
-
-	return type;
-}
-
 int unpack_object_header(struct packed_git *p,
 			 struct pack_window **w_curs,
 			 off_t *curpos,
@@ -1630,6 +1586,25 @@ int unpack_object_header(struct packed_git *p,
 	return type;
 }
 
+static int retry_bad_packed_offset(struct packed_git *p, off_t obj_offset)
+{
+	int type;
+	struct revindex_entry *revidx;
+	const unsigned char *sha1;
+	revidx = find_pack_revindex(p, obj_offset);
+	if (!revidx)
+		return OBJ_BAD;
+	sha1 = nth_packed_object_sha1(p, revidx->nr);
+	mark_bad_packed_object(p, sha1);
+	type = sha1_object_info(sha1, NULL);
+	if (type <= OBJ_NONE)
+		return OBJ_BAD;
+	return type;
+}
+
+
+#define POI_STACK_PREALLOC 64
+
 static int packed_object_info(struct packed_git *p, off_t obj_offset,
 			      unsigned long *sizep, int *rtype)
 {
@@ -1637,31 +1612,89 @@ static int packed_object_info(struct packed_git *p, off_t obj_offset,
 	unsigned long size;
 	off_t curpos =
[PATCH v2 3/3] sha1_file: remove recursion in unpack_entry
Similar to the recursion in packed_object_info(), this leads to problems on stack-space-constrained systems in the presence of long delta chains. We proceed in three phases: 1. Dig through the delta chain, saving each delta object's offsets and size on an ad-hoc stack. 2. Unpack the base object at the bottom. 3. Apply the deltas from the stack. Signed-off-by: Thomas Rast tr...@student.ethz.ch --- sha1_file.c | 231 +++- 1 file changed, 150 insertions(+), 81 deletions(-) diff --git a/sha1_file.c b/sha1_file.c index bd054d1..1b685b9 100644 --- a/sha1_file.c +++ b/sha1_file.c @@ -1864,68 +1864,6 @@ static void add_delta_base_cache(struct packed_git *p, off_t base_offset, static void *read_object(const unsigned char *sha1, enum object_type *type, unsigned long *size); -static void *unpack_delta_entry(struct packed_git *p, - struct pack_window **w_curs, - off_t curpos, - unsigned long delta_size, - off_t obj_offset, - enum object_type *type, - unsigned long *sizep) -{ - void *delta_data, *result, *base; - unsigned long base_size; - off_t base_offset; - - base_offset = get_delta_base(p, w_curs, curpos, *type, obj_offset); - if (!base_offset) { - error(failed to validate delta base reference - at offset %PRIuMAX from %s, - (uintmax_t)curpos, p-pack_name); - return NULL; - } - unuse_pack(w_curs); - base = cache_or_unpack_entry(p, base_offset, base_size, type, 0); - if (!base) { - /* -* We're probably in deep shit, but let's try to fetch -* the required base anyway from another pack or loose. -* This is costly but should happen only in the presence -* of a corrupted pack, and is better than failing outright. 
-*/ - struct revindex_entry *revidx; - const unsigned char *base_sha1; - revidx = find_pack_revindex(p, base_offset); - if (!revidx) - return NULL; - base_sha1 = nth_packed_object_sha1(p, revidx-nr); - error(failed to read delta base object %s - at offset %PRIuMAX from %s, - sha1_to_hex(base_sha1), (uintmax_t)base_offset, - p-pack_name); - mark_bad_packed_object(p, base_sha1); - base = read_object(base_sha1, type, base_size); - if (!base) - return NULL; - } - - delta_data = unpack_compressed_entry(p, w_curs, curpos, delta_size); - if (!delta_data) { - error(failed to unpack compressed delta - at offset %PRIuMAX from %s, - (uintmax_t)curpos, p-pack_name); - free(base); - return NULL; - } - result = patch_delta(base, base_size, -delta_data, delta_size, -sizep); - if (!result) - die(failed to apply delta); - free(delta_data); - add_delta_base_cache(p, base_offset, base, base_size, *type); - return result; -} - static void write_pack_access_log(struct packed_git *p, off_t obj_offset) { static FILE *log_file; @@ -1946,48 +1884,179 @@ static void write_pack_access_log(struct packed_git *p, off_t obj_offset) int do_check_packed_object_crc; +#define UNPACK_ENTRY_STACK_PREALLOC 64 +struct unpack_entry_stack_ent { + off_t obj_offset; + off_t curpos; + unsigned long size; +}; + void *unpack_entry(struct packed_git *p, off_t obj_offset, - enum object_type *type, unsigned long *sizep) + enum object_type *final_type, unsigned long *final_size) { struct pack_window *w_curs = NULL; off_t curpos = obj_offset; - void *data; + void *data = NULL; + unsigned long size; + enum object_type type; + struct unpack_entry_stack_ent small_delta_stack[UNPACK_ENTRY_STACK_PREALLOC]; + struct unpack_entry_stack_ent *delta_stack = small_delta_stack; + int delta_stack_nr = 0, delta_stack_alloc = UNPACK_ENTRY_STACK_PREALLOC; + int base_from_cache = 0; if (log_pack_access) write_pack_access_log(p, obj_offset); - if (do_check_packed_object_crc p-index_version 1) { - struct revindex_entry *revidx = 
find_pack_revindex(p, obj_offset); - unsigned long len = revidx[1].offset - obj_offset; - if (check_pack_crc(p, w_curs, obj_offset, len, revidx-nr)) { - const unsigned char *sha1 = -
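The three phases described in the patch above can be sketched in miniature. This is not git's code: deltas here are toy integer "add n" operations applied to an integer base, so only the control flow carries over — an explicit stack replacing the old recursion, just as the patch does for real pack deltas.

```c
#include <assert.h>

#define STACK_PREALLOC 64

struct delta_frame {
	int delta_value;	/* toy stand-in for a saved (offset, size) pair */
};

/* chain[0] is the base object; chain[1..n-1] are deltas, outermost last. */
static int unpack_chain(const int *chain, int n)
{
	struct delta_frame stack[STACK_PREALLOC];
	int stack_nr = 0;
	int i, result;

	/* Phase 1: dig through the chain, saving each delta on the stack. */
	for (i = n - 1; i > 0; i--)
		stack[stack_nr++] = (struct delta_frame){ chain[i] };

	/* Phase 2: unpack the base object at the bottom. */
	result = chain[0];

	/* Phase 3: apply the deltas from the stack, innermost first. */
	while (stack_nr > 0)
		result += stack[--stack_nr].delta_value;

	return result;
}
```

The stack depth is bounded by the delta-chain length rather than by the C call stack, which is the point of the patch on stack-space-constrained systems.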
[PATCH v2 2/3] Refactor parts of in_delta_base_cache/cache_or_unpack_entry
The delta base cache lookup and test were shared. Refactor them; we'll need both parts again. Also, we'll use the clearing routine later. Signed-off-by: Thomas Rast tr...@student.ethz.ch --- sha1_file.c | 45 - 1 file changed, 32 insertions(+), 13 deletions(-) diff --git a/sha1_file.c b/sha1_file.c index 71877a7..bd054d1 100644 --- a/sha1_file.c +++ b/sha1_file.c @@ -1756,32 +1756,51 @@ static unsigned long pack_entry_hash(struct packed_git *p, off_t base_offset) return hash % MAX_DELTA_CACHE; } -static int in_delta_base_cache(struct packed_git *p, off_t base_offset) +static struct delta_base_cache_entry * +get_delta_base_cache_entry(struct packed_git *p, off_t base_offset) { unsigned long hash = pack_entry_hash(p, base_offset); - struct delta_base_cache_entry *ent = delta_base_cache + hash; + return delta_base_cache + hash; +} + +static int cmp_delta_base_cache_entry(struct delta_base_cache_entry *ent, + struct packed_git *p, off_t base_offset) +{ return (ent-data ent-p == p ent-base_offset == base_offset); } +static int in_delta_base_cache(struct packed_git *p, off_t base_offset) +{ + struct delta_base_cache_entry *ent; + ent = get_delta_base_cache_entry(p, base_offset); + return cmp_delta_base_cache_entry(ent, p, base_offset); +} + +static void clear_delta_base_cache_entry(struct delta_base_cache_entry *ent) +{ + ent-data = NULL; + ent-lru.next-prev = ent-lru.prev; + ent-lru.prev-next = ent-lru.next; + delta_base_cached -= ent-size; +} + static void *cache_or_unpack_entry(struct packed_git *p, off_t base_offset, unsigned long *base_size, enum object_type *type, int keep_cache) { + struct delta_base_cache_entry *ent; void *ret; - unsigned long hash = pack_entry_hash(p, base_offset); - struct delta_base_cache_entry *ent = delta_base_cache + hash; - ret = ent-data; - if (!ret || ent-p != p || ent-base_offset != base_offset) + ent = get_delta_base_cache_entry(p, base_offset); + + if (!cmp_delta_base_cache_entry(ent, p, base_offset)) return unpack_entry(p, 
base_offset, type, base_size); - if (!keep_cache) { - ent-data = NULL; - ent-lru.next-prev = ent-lru.prev; - ent-lru.prev-next = ent-lru.next; - delta_base_cached -= ent-size; - } else { + ret = ent-data; + + if (!keep_cache) + clear_delta_base_cache_entry(ent); + else ret = xmemdupz(ent-data, ent-size); - } *type = ent-type; *base_size = ent-size; return ret; -- 1.8.2.266.g8176668 -- To unsubscribe from this list: send the line unsubscribe git in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
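The shape of the refactoring — one helper that locates the slot, a second that decides whether the slot is actually a hit — can be shown with a toy direct-mapped cache. All names, types, and the hash are illustrative stand-ins, not git's.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_CACHE 16

struct cache_entry {
	void *data;		/* NULL means the slot is empty */
	const void *p;		/* which "pack" the cached object came from */
	long base_offset;	/* where in that pack it starts */
};

static struct cache_entry cache[MAX_CACHE];

/* Hash straight to a slot; the cache is direct-mapped, so a colliding
 * store simply evicts whatever was there. */
static struct cache_entry *get_cache_entry(const void *p, long base_offset)
{
	unsigned long hash = ((uintptr_t)p + (unsigned long)base_offset) % MAX_CACHE;
	return cache + hash;
}

/* A slot only counts as a hit if it is populated AND the key matches;
 * this is the test the patch splits out so multiple callers can share it. */
static int cmp_cache_entry(const struct cache_entry *ent,
			   const void *p, long base_offset)
{
	return ent->data && ent->p == p && ent->base_offset == base_offset;
}

static int in_cache(const void *p, long base_offset)
{
	return cmp_cache_entry(get_cache_entry(p, base_offset), p, base_offset);
}

static void cache_store(const void *p, long base_offset, void *data)
{
	struct cache_entry *ent = get_cache_entry(p, base_offset);
	ent->data = data;
	ent->p = p;
	ent->base_offset = base_offset;
}
```

Splitting lookup from comparison lets a later caller grab the entry once and both test and clear it without recomputing the hash, which is what the follow-up patch needs.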
[PATCH] log: make show --show-signature use gpg.program setting
show --show-signature doesn't currently use the gpg.program setting. Commit signing, tag signing, and tag verification currently use this setting properly, so the logic has been added to handle it here as well.

Signed-off-by: Hans Brigman hbrig...@openspan.com
---
 builtin/log.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/builtin/log.c b/builtin/log.c
index 8f0b2e8..a6c5576 100644
--- a/builtin/log.c
+++ b/builtin/log.c
@@ -23,6 +23,7 @@
 #include "streaming.h"
 #include "version.h"
 #include "mailmap.h"
+#include "gpg-interface.h"
 
 /* Set a default date-time format for git log (log.date config variable) */
 static const char *default_date_mode = NULL;
@@ -364,7 +365,8 @@ static int git_log_config(const char *var, const char *value, void *cb)
 		use_mailmap_config = git_config_bool(var, value);
 		return 0;
 	}
-
+	if (!prefixcmp(var, "gpg."))
+		return git_gpg_config(var, value, NULL);
 	if (grep_config(var, value, cb) < 0)
 		return -1;
 	return git_diff_ui_config(var, value, cb);
-- 
1.7.11.msysgit.0

Attachment: 0001-log-make-show-show-signature-use-gpg.program-setting.patch
Re: [PATCH] sha1_file: remove recursion in packed_object_info
thomas tr...@student.ethz.ch writes: Junio C Hamano gits...@pobox.com writes: The following comment is also lost but... - /* We choose to only get the type of the base object and -* ignore potentially corrupt pack file that expects the delta -* based on a base with a wrong size. This saves tons of -* inflate() calls. -*/ - if (sizep) { - *sizep = get_size_from_delta(p, w_curs, curpos); - if (*sizep == 0) - type = OBJ_BAD; ... is this check correct? There is an equivalent check at the beginning of the new packed_object_info() to error out a deltified result. Why is an object whose size is 0 bad? Cc'ing Nicolas, but I think there are several reasons: If it's a delta, then according to docs[1] it starts with the SHA1 of the base object, plus the deflated data. So it is at least 20 bytes. get_size_from_delta() grabs the size, the number you would get in the third parameter of read_sha1_file(), of the result of applying the delta we are looking at. The part that stores this information is called the compressed delta data in the document you are looking at. The function you want to look at is patch_delta(), where it grabs two such sizes from the delta stream with get_delta_hdr_size(). A delta stream begins with: * preimage length, expressed as a 7-bit-per-byte varint; * postimage length, expressed as a 7-bit-per-byte varint; followed by number of records, each prefixed by a command byte. * Command byte with its 8th bit set records source offset and size (max 32 and 24 bits, respectively---other 7 bits in the command byte tells us how large the offset and size are) and tells us to insert a copy of that region at the current point. * Command byte between 1-127 (inclusive) tells us to add that many bytes that follow the command byte from the delta stream at the current point. * Command byte 0 is an error. And get_size_from_delta() skips the preimage length, grabs postimage length and returns the latter. 
It is how we decide how many bytes we need to allocate to hold the result of applying the delta. If it's not a delta, then it must start with 'type size\0', which even after compression cannot possibly be 0 bytes. Either way, get_size_from_delta() also uses 0 as the error return. Yes, that is why I said "is this check correct?". As I already said, I think the only two things that protect us from creating a delta whose postimage size is 0 are the fact that we do not even attempt to deltify anything smaller than 50 bytes in pack-objects, and that create_delta() refuses to create a delta to produce an empty postimage.
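The varint layout described above — 7 payload bits per byte, least-significant group first, high bit set while more bytes follow — is small enough to sketch. This follows the shape of get_delta_hdr_size() but is a simplified stand-alone version, not git's code; like the discussion above, it uses 0 as the error return for a truncated header.

```c
#include <assert.h>

/* Decode one size field: the low 7 bits of each byte are payload,
 * least significant group first; a set high bit means another byte
 * follows.  Advances *datap past the header on success. */
static unsigned long delta_hdr_size(const unsigned char **datap,
				    const unsigned char *end)
{
	const unsigned char *data = *datap;
	unsigned long size = 0;
	int shift = 0;
	unsigned char c;

	do {
		if (data == end)
			return 0;	/* truncated header: error, like git's 0 */
		c = *data++;
		size |= (unsigned long)(c & 0x7f) << shift;
		shift += 7;
	} while (c & 0x80);

	*datap = data;
	return size;
}

/* Convenience wrapper: decode a single size from a buffer. */
static unsigned long decode(const unsigned char *buf, int len)
{
	const unsigned char *p = buf;
	return delta_hdr_size(&p, buf + len);
}
```

For example, 300 = 44 + (2 << 7) encodes as the two bytes 0xac (0x2c with the continuation bit) followed by 0x02.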
Re: Why does 'submodule add' stage the relevant portions?
Ramkumar Ramachandra wrote: Junio C Hamano wrote: Ramkumar Ramachandra artag...@gmail.com writes: Git 2.0 is coming soon, so I'm excited about breaking a lot of backward compatibility ;) Don't. push.default is the necessary exception? A while ago there was a discussion of changes of the "if we were starting over today, it would be obvious we should have made it work the other way" kind, and potential transition plans for them leading up to 2.0. It's way too late to throw new breakage in. More generally, it doesn't make a lot of sense to save thinking about such questions for the last minute before a new major release. If an important change will require breaking compatibility and can only be done using a painful 5-year transition, start today (and send patches to the ML when an appropriate quiet moment comes to get review) so the 5-year counter can start ticking. Hoping that clarifies, Jonathan
Re: Why does 'submodule add' stage the relevant portions?
Jonathan Nieder wrote: Ramkumar Ramachandra wrote: Junio C Hamano wrote: Ramkumar Ramachandra artag...@gmail.com writes: Git 2.0 is coming soon, so I'm excited about breaking a lot of backward compatibility ;) Don't. push.default is the necessary exception? A while ago there was a discussion of changes of the If we were starting over today, it would be obvious we should have made it work the other way kind and potential transition plans for them leading up to 2.0. It's way too late to throw new breakage in. More generally, it doesn't make a lot of sense to save thinking about such questions for the last minute before a new major release. If an important change will require breaking compatibility and can only be done using a painful 5-year transition, start today (and send patches to the ML when an appropriate quiet moment comes to get review) so the 5-year counter can start ticking. I agree that big important changes that break backward compatibility (like adding generation numbers to commit objects) will require this kind of time and effort to stabilize. Does a saner push.default value, or even tweaking the output of 'git status' to show what 'git status -sb' shows now (git status is porcelain, and no person in the right mind will write a script to parse it), require this? I'm talking about slightly better defaults at the porcelain level, at most. -- To unsubscribe from this list: send the line unsubscribe git in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Why does 'submodule add' stage the relevant portions?
Ramkumar Ramachandra artag...@gmail.com writes: I'm talking about slightly better defaults at the porcelain level, at most. If a change does not even have to break backward compatibility, that is a regular enhancement and independent from 2.0, no?
Re: Why does 'submodule add' stage the relevant portions?
Am 25.03.2013 09:59, schrieb Ramkumar Ramachandra: Jens Lehmann wrote: Am 24.03.2013 18:38, schrieb Ramkumar Ramachandra: I find this behavior very inconsistent and annoying. Why would I want to commit the submodule change immediately? Maybe I want to batch it up with other changes and stage it at a later time. Why should I have to unstage them manually now? I get the other side of the argument: what if the user commits the .gitmodule change at a different time from the file change? In other words, the user should have a way of saying 'submodule stage' and 'submodule unstage'. Hmm, AFAIK you are the first to bring up such a feature, as in most use cases doing a git (submodule) add path is expected to stage what you added. In my opinion, the 'git submodule add path' form is broken, because it doesn't relocate the object store of the submodule repository (a bug that we need to fix?). I don't think it is broken. Leaving the .git directory inside the submodule makes perfect sense for some users (e.g. those which never intend to push their submodule somewhere else) and doesn't do any harm unless you want to remove it again in the future (or need to go back to an older commit where the submodule directory would be in the way). We have to think very hard before making such changes to behavior quite some people may rely on, even though I agree some use cases would profit from it and I think we would do it differently if we could start over again. What I think we need rather soonish is git submodule to-gitfile, which would help your case too. We should then hint that in those cases where we refuse to remove a submodule when using git rm or git submodule deinit (or the git mv for submodules I'm currently preparing). I always use the 'git submodule add repository path' form, which puts the object store of the submodule repository in .git/modules of the parent repository. 
This form is nothing like 'git add', but more like a 'git clone': arguably, 'submodule clone' is a better name for it. Hmm, it does a clone first and then adds the submodule and the change to .gitmodules, so I still believe add is the correct term for it. So I really like the color the shed currently has ;-) Maybe the WTF You need to run this command from the toplevel of the working tree needs to be fixed first? I think it's a matter of a simple pushd, popd before the operation and building the correct relative path. I won't object such a change (even though I suspect it'll turn out more complicated than that). I'm not sure how it'll work with nested submodules though. Then again, I think nested submodules are Wrong; submodules are probably not the right tool for the job. How did you come to that conclusion? Nested submodules make perfect sense and most people agree that in hindsight --recursive should have been the default, but again we can't simply change that now. Now, for the implementation. Do we have existing infrastructure to stage a hunk non-interactively? (The ability to select a hunk to stage/ unstage programmatically). If not, it might be quite a non-trivial thing to write. Have fun when adding two submodules and unstage only one of them later. I think this feature will not work unless you analyze .gitmodules and stage/unstage section-wise. Yes, which is why I asked if we have existing infrastructure to make this possible. Not that I know of. -- To unsubscribe from this list: send the line unsubscribe git in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Why does 'submodule add' stage the relevant portions?
Junio C Hamano wrote: Ramkumar Ramachandra artag...@gmail.com writes: I'm talking about slightly better defaults at the porcelain level, at most. If a change does not even have to break backward compatibility, that is a regular enhancement and independent from 2.0, no? Okay, I'm confused: why are you waiting for 2.0 to change push.default then? Isn't that just a slightly better default at the porcelain level and hence a regular enhancement? You're not worried about breaking everyone's muscle memory (is that not part of backward compatibility)? So, if I understand correctly, I can do changes like the following at any time (provided the reviewers agree that they're sane):
1. Don't make 'git submodule add' stage (which is what this thread was originally about).
2. Set pull.autostash to true (I still have to re-roll and finish it of course).
3. Set color.ui to auto.
4. Alias 'git status' to 'git status -sb'.
5. Set help.autocorrect to 20.
Re: Why does 'submodule add' stage the relevant portions?
Ramkumar Ramachandra wrote: Okay, I'm confused: why are you waiting for 2.0 to change push.default then? Isn't that just a slightly better default at the porcelain level and hence a regular enhancement? No. Among other aspects, git push is used heavily by scripts.
Re: Why does 'submodule add' stage the relevant portions?
Jonathan Nieder wrote: Ramkumar Ramachandra wrote: Okay, I'm confused: why are you waiting for 2.0 to change push.default then? Isn't that just a slightly better default at the porcelain level and hence a regular enhancement? No. Among other aspects, git push is used heavily by scripts. Oh, I see. I would've expected scripts to specify the refspec explicitly, instead of relying on a default like this. Then again, I saw jc/push-2.0-default-to-simple -- massive changes to scripts in our own testsuite. This whole backward compatibility thing is not black-or-white: it's shades of gray. Can I ask about how backward incompatible the other suggestions I listed were, just so I get a rough idea of your scale? Junio seems to be very extremist today.
Re: Why does 'submodule add' stage the relevant portions?
Ramkumar Ramachandra wrote: This whole backward compatibility thing is not black-or-white: it's shades of gray. Can I ask about how backward incompatible the other suggestions I listed were, just so I get a rough idea of your scale? It depends on how important the change is and how painful the proposed transition is. Please don't gratuitously break things. If there is a smooth way to accomplish the intended effect without much downside, that is generally preferred, even if it is harder to write the code. There are no absolutes here. It is about helping people in the real world who never asked for such-and-such feature to avoid suffering real breakage. Hope that helps, Jonathan
Re: [PATCH v2 3/3] git-merge-one-file: revise merge error reporting
On Mon, Mar 25, 2013 at 1:17 PM, Junio C Hamano gits...@pobox.com wrote: Subject: [PATCH] merge-one-file: force content conflict for both side added case s/both side/both sides/ Historically, we tried to be lenient to both side added, slightly Ditto. differently case and as long as the files can be merged using a made-up common ancestor cleanly, since f7d24bbefb06 (merge with /dev/null as base, instead of punting O==empty case, 2005-11-07). This was later further refined to use a better made-up common file with fd66dbf5297a (merge-one-file: use empty- or common-base condintionally in two-stage merge., 2005-11-10), but the spirit has been the same. But the original fix in f7d24bbefb06 to avoid punging on both sides s/punging/punting/ added case had a code to unconditionally error out the merge. When this triggers, even though the content-level merge can be done cleanly, we end up not saying content conflict in the message, but still issue the error message, showing ERROR: in pathname. Signed-off-by: Junio C Hamano gits...@pobox.com -- To unsubscribe from this list: send the line unsubscribe git in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH] Documentation/commit-tree: mention -S option
Commit ba3c69a9 (commit: teach --gpg-sign option, 2011-10-05) added the -S option and documented it in the command usage. Then commit 098bbdc3 (Add -S, --gpg-sign option to manpage of git commit, 2012-10-21) documented it in the porcelain manpage. Use wording from the porcelain to document the option in the plumbing manpage too.
---
 Documentation/git-commit-tree.txt | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/Documentation/git-commit-tree.txt b/Documentation/git-commit-tree.txt
index 86ef56e..62f7b53 100644
--- a/Documentation/git-commit-tree.txt
+++ b/Documentation/git-commit-tree.txt
@@ -10,7 +10,9 @@ SYNOPSIS
 [verse]
 'git commit-tree' <tree> [(-p <parent>)...] < changelog
-'git commit-tree' [(-p <parent>)...] [(-m <message>)...] [(-F <file>)...] <tree>
+'git commit-tree' [(-p <parent>)...] [-S<keyid>] [(-m <message>)...]
+		  [(-F <file>)...] <tree>
+
 
 DESCRIPTION
 -----------
@@ -52,6 +54,9 @@ OPTIONS
 	Read the commit log message from the given file. Use `-` to read
 	from the standard input.
 
+-S<keyid>::
+	GPG-sign commit.
+
 Commit Information
 ------------------
-- 
1.7.10.4
Re: [PATCH 4/4] transport: drop int cmp = cmp hack
Jeff King p...@peff.net writes: On Sat, Mar 23, 2013 at 09:00:05PM -0700, Junio C Hamano wrote: On Thu, Mar 21, 2013 at 4:13 AM, Jeff King p...@peff.net wrote: According to 47ec794, this initialization is meant to squelch an erroneous uninitialized variable warning from gcc 4.0.1. That version is quite old at this point, and gcc 4.1 and up handle it fine, with one exception. There seems to be a regression in gcc 4.6.3, which produces the warning; however, gcc versions 4.4.7 and 4.7.2 do not. transport.c: In function 'get_refs_via_rsync': transport.c:127:29: error: 'cmp' may be used uninitialized in this function [-Werror=uninitialized] transport.c:109:7: note: 'cmp' was declared here gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3 Right, that's the same version I noted above. Is 4.6.3 the default compiler under a particular release of Ubuntu, or did you use their gcc-4.6 package? I'll check later with one of my VMs. The copy of U 12.04 I happened to have handy has that version installed. By the way, I find this piece of code less than pleasant: * It uses struct ref dummy = { NULL }, *tail = dummy, and then accumulates things by appending to tail and then returns dummy.next. Why doesn't it do struct ref *retval = NULL, **tail = retval; and pass tail around to append things, like everybody else? Is this another instance of People do not understand linked list problem? Perhaps fixing that may unconfuse the compiler? * Its read_loose_refs() is a recursive function that sorts the results from readdir(3) and iterates over them, expecting its recursive call to fail _only_ when the entry it read is not a directory that it needs to recurse into. It is not obvious if the resulting list is sorted correctly with this loop structure when you have branches foo.bar, foo/bar, and foo=bar. I think the loop first reads foo, foo.bar and foo=bar, sorts them in that order, and starts reading recursively, ending up with foo/bar first and then foo.bar and finally foo=bar. 
Later, the tail of the same list is passed to insert_packed_refs(), which does in-place merging of this list and the contents of the packed_refs file. These two data sources have to be sorted the same way for this merge to work correctly, but there is no validating the order of the entries it reads from the packed-refs file. At least, it should barf when the file is not sorted. It could be lenient and accept a mal-sorted input, but I do not think that is advisable. I'll apply the attached on 'maint' for now, as rsync is not worth spending too many cycles on worrying about; I need to go to the bathroom to wash my eyes after staring this code for 20 minutes X-. transport.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/transport.c b/transport.c index 87b8f14..e6f9346 100644 --- a/transport.c +++ b/transport.c @@ -106,7 +106,8 @@ static void insert_packed_refs(const char *packed_refs, struct ref **list) return; for (;;) { - int cmp, len; + int cmp = 0; /* assigned before used */ + int len; if (!fgets(buffer, sizeof(buffer), f)) { fclose(f); -- To unsubscribe from this list: send the line unsubscribe git in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
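The list-building idiom Junio suggests above — `struct ref *retval = NULL, **tail = &retval` instead of a dummy head node — looks like this in isolation. The `struct ref` here is a stand-in with only the fields the idiom needs, not git's real type.

```c
#include <assert.h>
#include <stdlib.h>

struct ref {
	int value;		/* stand-in payload; the real struct holds a refname etc. */
	struct ref *next;
};

/* Build a singly-linked list in input order without a dummy head:
 * 'tail' always points at the link the next element should fill.
 * (Allocation error handling omitted for brevity.) */
static struct ref *build_list(const int *values, int n)
{
	struct ref *ret = NULL, **tail = &ret;
	int i;

	for (i = 0; i < n; i++) {
		struct ref *r = calloc(1, sizeof(*r));
		r->value = values[i];
		*tail = r;		/* fill the pending link... */
		tail = &r->next;	/* ...and the new tail is r's next field */
	}
	return ret;		/* NULL for empty input, no dummy to skip over */
}

/* Walk to the n-th element (0-based); used to check ordering below. */
static int nth_value(const struct ref *list, int n)
{
	while (n--)
		list = list->next;
	return list->value;
}
```

Because appends go through the pointer-to-pointer, the empty, one-element, and many-element cases share one code path, and callers never have to remember to return `dummy.next`.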
Re: Why does 'submodule add' stage the relevant portions?
Jens Lehmann wrote: Am 25.03.2013 09:59, schrieb Ramkumar Ramachandra: In my opinion, the 'git submodule add path' form is broken, because it doesn't relocate the object store of the submodule repository (a bug that we need to fix?). I don't think it is broken. Leaving the .git directory inside the submodule makes perfect sense for some users (e.g. those which never intend to push their submodule somewhere else) and doesn't do any harm unless you want to remove it again in the future (or need to go back to an older commit where the submodule directory would be in the way). We have to think very hard before making such changes to behavior quite some people may rely on, even though I agree some use cases would profit from it and I think we would do it differently if we could start over again. Doesn't that sound horribly crippled to you? Is there any advantage to leaving the .git directory inside the submodule? Isn't it always better to relocate it? What I think we need rather soonish is git submodule to-gitfile, which would help your case too. We should then hint that in those cases where we refuse to remove a submodule when using git rm or git submodule deinit (or the git mv for submodules I'm currently preparing). Why a new subcommand? Is there a problem if we do the relocation at the time of 'add'? Will some user expectation break? I always use the 'git submodule add repository path' form, which puts the object store of the submodule repository in .git/modules of the parent repository. This form is nothing like 'git add', but more like a 'git clone': arguably, 'submodule clone' is a better name for it. Hmm, it does a clone first and then adds the submodule and the change to .gitmodules, so I still believe add is the correct term for it. So I really like the color the shed currently has ;-) I meant a variant of add that would clone, but not stage. I was arguing for a new subcommand so that I don't have to touch 'submodule add', not for a rename. 
Maybe the WTF You need to run this command from the toplevel of the working tree needs to be fixed first? I think it's a matter of a simple pushd, popd before the operation and building the correct relative path. I won't object such a change (even though I suspect it'll turn out more complicated than that). I'll have to investigate. I'm not sure how it'll work with nested submodules though. Then again, I think nested submodules are Wrong; submodules are probably not the right tool for the job. How did you come to that conclusion? Nested submodules make perfect sense and most people agree that in hindsight --recursive should have been the default, but again we can't simply change that now. Okay, I'll do it step-by-step now, with a live example: 1. git clone gh:artagnon/dotfiles, a repository with submodules. 2. git submodule update --init .elisp/auto-complete, a repository that contains submodules. 3. .elisp/auto-complete/.gitmodules lists the submodules (lib/fuzzy, lib/ert, and lib/popup). Let's try to initialize them from that directory ... No! go back to toplevel. 4. Fine. Where are those submodules? git submodule status doesn't list them. 5. Okay, let's join the paths by hand and try anyway: git submodule update --init .elisp/auto-complete/lib/fuzzy. Did you forget to 'git add'? Fantastic. 6. How am I supposed to initialize the darn submodules?! 7. git submodule update --init --recursive .elisp/auto-complete is the ONLY way (is this even documented anywhere?). But I just wanted to initialize one, not all three! 8. Okay, now I want to execute a 'git submodule foreach' on just those three submodules. Seems impossible. Fine, I'll do it myself: for i in lib/ert lib/fuzzy lib/popup; do cd $i; git checkout master; git reset --hard origin/master; cd ../..; done 9. Whew. git status. Changes in auto-complete. git diff. Submodule .elisp/auto-complete contains modified content. I'm not allowed to see what changed now? 10. 
git checkout master; git reset --hard origin/master in auto-complete. git status. How do I stage the changes in just auto-complete, and not in auto-complete's submodules? What is going on, seriously? This is just two levels of nesting: with more levels of nesting, things only get worse. Now, for the implementation. Do we have existing infrastructure to stage a hunk non-interactively? (The ability to select a hunk to stage/unstage programmatically.) If not, it might be quite a non-trivial thing to write. [...] Not that I know of. Damn. Then, it's certainly not worth the effort. At least not now, when there are bigger problems.
Re: propagating repo corruption across clone
Jeff King p...@peff.net writes: We _do_ see a problem during the checkout phase, but we don't propagate a checkout failure to the exit code from clone. That is bad in general, and should probably be fixed. Though it would never find corruption of older objects in the history, anyway, so checkout should not be relied on for robustness. It is obvious that we should exit with non-zero status when we see a failure from the checkout, but do we want to nuke the resulting repository as in the case of normal transport failure? A checkout failure might be due to being under quota for object store but running out of quota upon populating the working tree, in which case we probably do not want to. We do not notice the sha1 mis-match on the sending side (which we could, if we checked the sha1 as we were sending). We do not notice the broken object graph during the receive process either. I would have expected check_everything_connected to handle this, but we don't actually call it during clone! If you do this: $ git init non-local cd non-local git fetch .. remote: Counting objects: 3, done. remote: Total 3 (delta 0), reused 3 (delta 0) Unpacking objects: 100% (3/3), done. fatal: missing blob object 'd95f3ad14dee633a758d2e331151e950dd13e4ed' error: .. did not send all necessary objects we do notice. Yes, it is OK to add connectedness check to git clone. And one final check: $ git -c transfer.fsckobjects=1 clone --no-local . fsck Cloning into 'fsck'... remote: Counting objects: 3, done. remote: Total 3 (delta 0), reused 3 (delta 0) Receiving objects: 100% (3/3), done. error: unable to find d95f3ad14dee633a758d2e331151e950dd13e4ed fatal: object of unexpected type fatal: index-pack failed Fscking the incoming objects does work, but of course it comes at a cost in the normal case (for linux-2.6, I measured an increase in CPU time with index-pack --strict from ~2.5 minutes to ~4 minutes). 
And I think it is probably overkill for finding corruption; index-pack already recognizes bit corruption inside an object, and check_everything_connected can detect object graph problems much more cheaply. One thing I didn't check is bit corruption inside a packed object that still correctly zlib inflates. check_everything_connected will end up reading all of the commits and trees (to walk them), but not the blobs. Correct. So I think at the very least we should: 1. Make sure clone propagates errors from checkout to the final exit code. 2. Teach clone to run check_everything_connected. I agree with both but with a slight reservation on the former one (see above). Thanks. -- To unsubscribe from this list: send the line unsubscribe git in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: propagating repo corruption across clone
On Mon, Mar 25, 2013 at 01:01:59PM -0700, Junio C Hamano wrote: Jeff King p...@peff.net writes: We _do_ see a problem during the checkout phase, but we don't propagate a checkout failure to the exit code from clone. That is bad in general, and should probably be fixed. Though it would never find corruption of older objects in the history, anyway, so checkout should not be relied on for robustness. It is obvious that we should exit with non-zero status when we see a failure from the checkout, but do we want to nuke the resulting repository as in the case of normal transport failure? A checkout failure might be due to being under quota for the object store but running out of quota upon populating the working tree, in which case we probably do not want to. I'm just running through my final tests on a large-ish patch series which deals with this (among other issues). I had the same thought, though we do already die on a variety of checkout errors. I left it as a die() for now, but I think we should potentially address it with a further patch. $ git init non-local && cd non-local && git fetch .. remote: Counting objects: 3, done. remote: Total 3 (delta 0), reused 3 (delta 0) Unpacking objects: 100% (3/3), done. fatal: missing blob object 'd95f3ad14dee633a758d2e331151e950dd13e4ed' error: .. did not send all necessary objects we do notice. Yes, it is OK to add a connectedness check to git clone. That's in my series, too. Unfortunately, in the local clone case, it slows down the clone considerably (since we otherwise would not have to traverse the objects at all). -Peff
Re: [PATCH] Documentation/commit-tree: mention -S option
Brad King brad.k...@kitware.com writes: Commit ba3c69a9 (commit: teach --gpg-sign option, 2011-10-05) added the -S option and documented it in the command usage. Then commit 098bbdc3 (Add -S, --gpg-sign option to manpage of git commit, 2012-10-21) documented it in the porcelain manpage. Use wording from the porcelain to document the option in the plumbing manpage too. --- Thanks; sign-off? Documentation/git-commit-tree.txt |7 ++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/Documentation/git-commit-tree.txt b/Documentation/git-commit-tree.txt index 86ef56e..62f7b53 100644 --- a/Documentation/git-commit-tree.txt +++ b/Documentation/git-commit-tree.txt @@ -10,7 +10,9 @@ SYNOPSIS [verse] 'git commit-tree' <tree> [(-p <parent>)...] < changelog -'git commit-tree' [(-p <parent>)...] [(-m <message>)...] [(-F <file>)...] <tree> +'git commit-tree' [(-p <parent>)...] [-S<keyid>] [(-m <message>)...] + [(-F <file>)...] <tree> + DESCRIPTION --- @@ -52,6 +54,9 @@ OPTIONS Read the commit log message from the given file. Use `-` to read from the standard input. +-S<keyid>:: + GPG-sign commit. + Commit Information --
Re: propagating repo corruption across clone
On Mon, Mar 25, 2013 at 12:32:50PM -0400, Jeff Mitchell wrote: I think what was conflating the issue in my testing is that with --mirror it implies --bare, so there would be no checking of the objects when the working tree was being created, hence --mirror won't show the error a normal clone will -- it's not a transport question, it's just a matter of the normal clone doing more and so having more data run through checks. Exactly. However, there are still problems. For blob corruptions, even in this --no-hardlinks, non --mirror case where an error was found, the exit code from the clone was 0. I can see this tripping up all sorts of automated scripts or repository GUIs that ignore the output and only check the error code, which is not an unreasonable thing to do. Yes, this is a bug. I'll post a series in a minute which fixes it. For commit corruptions, the --no-hardlinks, non --mirror case refused to create the new repository and exited with an error code of 128. The --no-hardlinks, --mirror case spewed errors to the console, yet *still* created the new clone *and* returned an error code of zero. I wasn't able to reproduce this; can you post a succinct test case? It seems that when there is an error as opposed to a fatal it doesn't affect the status code on a clone; I'd argue that it ought to. Agreed completely. The current behavior is buggy. -Peff
Re: [PATCH] git-web--browse: recognize iTerm as a GUI terminal on OS X
On Mon, Mar 25, 2013 at 11:13 AM, John Szakmeister j...@szakmeister.net wrote: Here's an updated patch. Thank you for it. For what it's worth: Acked-by: Christian Couder chrisc...@tuxfamily.org I also noticed that git-bisect.sh is also trying to determine if a GUI is present by looking for SECURITYSESSIONID as well. I wonder if it would be better to create a shell function in git-sh-setup.sh that the two scripts could use? Yeah, it might be a good idea to have some common functions to determine if a GUI is present. Maybe you could start with an os_x_gui_present() function in another patch on top of this one. Thanks, Christian.
[PATCH 0/9] corrupt object potpourri
I started these patches with the intent of improving clone's behavior on corrupt objects, but my testing uncovered some other nastiness, including two infinite loops in the streaming code! Yikes. I think 1-7 are good. We might want to tweak the die() behavior of patch 8, but I think it should come on top. Patch 9 has some pretty ugly performance implications. At the end of the series, all of the introduced tests pass except for one, which is that git clone may silently write out a bogus working tree entry. I haven't tracked that one down yet. [1/9]: stream_blob_to_fd: detect errors reading from stream [2/9]: check_sha1_signature: check return value from read_istream [3/9]: read_istream_filtered: propagate read error from upstream [4/9]: avoid infinite loop in read_istream_loose [5/9]: add test for streaming corrupt blobs [6/9]: streaming_write_entry: propagate streaming errors [7/9]: add tests for cloning corrupted repositories [8/9]: clone: die on errors from unpack_trees [9/9]: clone: run check_everything_connected -Peff
Re: [PATCH] Documentation/commit-tree: mention -S option
On 03/25/2013 04:06 PM, Junio C Hamano wrote: Brad King brad.k...@kitware.com writes: Commit ba3c69a9 (commit: teach --gpg-sign option, 2011-10-05) added the -S option and documented it in the command usage. Then commit 098bbdc3 (Add -S, --gpg-sign option to manpage of git commit, 2012-10-21) documented it in the porcelain manpage. Use wording from the porcelain to document the option in the plumbing manpage too. --- Thanks; sign-off? Oops! Signed-off-by: Brad King brad.k...@kitware.com -Brad
[PATCH 1/9] stream_blob_to_fd: detect errors reading from stream
We call read_istream, but never check its return value for errors. This can lead to us looping infinitely, as we just keep trying to write -1 bytes (and we do not notice the error, as we simply check that write_in_full reports the same number of bytes we fed it, which of course is also -1). Signed-off-by: Jeff King p...@peff.net --- No test yet, as my method for triggering this causes _another_ infinite loop. So the test comes after the fixes, to avoid infinite loops when bisecting the history later. :) streaming.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/streaming.c b/streaming.c index 4d978e5..f4126a7 100644 --- a/streaming.c +++ b/streaming.c @@ -514,6 +514,8 @@ int stream_blob_to_fd(int fd, unsigned const char *sha1, struct stream_filter *f ssize_t wrote, holeto; ssize_t readlen = read_istream(st, buf, sizeof(buf)); + if (readlen < 0) + goto close_and_exit; if (!readlen) break; if (can_seek && sizeof(buf) == readlen) { -- 1.8.2.13.g0f18d3c
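The failure mode is easier to see in isolation. Below is a minimal sketch of the loop shape after the fix; read_stream_stub is a hypothetical stand-in for read_istream (not git's actual code) that fails immediately, the way a corrupt stream would:

```c
#include <assert.h>
#include <stddef.h>
#include <sys/types.h>

/* Hypothetical stand-in for read_istream(): returns bytes read, 0 at EOF,
 * or -1 on error.  Here it always fails, modeling a corrupt stream. */
static ssize_t read_stream_stub(char *buf, size_t len)
{
	(void)buf;
	(void)len;
	return -1;
}

/* Loop in the shape of stream_blob_to_fd() after the fix: a negative read
 * result is detected before the write.  Without the check, -1 would be fed
 * to the writer, the "bytes written == bytes read" comparison would see
 * -1 == -1 and succeed, and the loop would never terminate. */
static int copy_stream(void)
{
	char buf[1024];
	for (;;) {
		ssize_t readlen = read_stream_stub(buf, sizeof(buf));
		if (readlen < 0)
			return -1;	/* propagate the error */
		if (!readlen)
			break;		/* EOF */
		/* write_in_full(fd, buf, readlen) would go here */
	}
	return 0;
}
```

With the check in place the error surfaces on the first iteration instead of spinning forever.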
[PATCH 2/9] check_sha1_signature: check return value from read_istream
It's possible for read_istream to return an error, in which case we just end up in an infinite loop (aside from EOF, we do not even look at the result, but just feed it straight into our running hash). Signed-off-by: Jeff King p...@peff.net --- I didn't actually trigger this code path in any of my tests, but I audited all of the callers of read_istream after the last patch, and noticed this one (the rest looked fine to me). sha1_file.c | 4 1 file changed, 4 insertions(+) diff --git a/sha1_file.c b/sha1_file.c index 16967d3..0b99f33 100644 --- a/sha1_file.c +++ b/sha1_file.c @@ -1266,6 +1266,10 @@ int check_sha1_signature(const unsigned char *sha1, void *map, char buf[1024 * 16]; ssize_t readlen = read_istream(st, buf, sizeof(buf)); + if (readlen < 0) { + close_istream(st); + return -1; + } if (!readlen) break; git_SHA1_Update(&c, buf, readlen); -- 1.8.2.13.g0f18d3c
[PATCH 3/9] read_istream_filtered: propagate read error from upstream
The filter istream pulls data from an upstream stream, running it through a filter function. However, we did not properly notice when the upstream filter yielded an error, and just returned what we had read. Instead, we should propagate the error. Signed-off-by: Jeff King p...@peff.net --- I don't know if we should be preserving fs->i_end from getting a negative value. I would think the internal state of the istream after an error is undefined. streaming.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/streaming.c b/streaming.c index f4126a7..f4ab12b 100644 --- a/streaming.c +++ b/streaming.c @@ -237,7 +237,7 @@ static read_method_decl(filtered) if (!fs->input_finished) { fs->i_end = read_istream(fs->upstream, fs->ibuf, FILTER_BUFFER); if (fs->i_end < 0) - break; + return -1; if (fs->i_end) continue; } -- 1.8.2.13.g0f18d3c
[PATCH 4/9] avoid infinite loop in read_istream_loose
The read_istream_loose function loops on inflating a chunk of data from an mmap'd loose object. We end the loop when we run out of space in our output buffer, or if we see a zlib error. We need to treat Z_BUF_ERROR specially, though, as it is not fatal; it is just zlib's way of telling us that we need to either feed it more input or give it more output space. It is perfectly normal for us to hit this when we are at the end of our buffer. However, we may also get Z_BUF_ERROR because we have run out of input. In a well-formed object, this should not happen, because we have fed the whole mmap'd contents to zlib. But if the object is truncated or corrupt, we will loop forever, never giving zlib any more data, but continuing to ask it to inflate. We can fix this by considering it an error when zlib returns Z_BUF_ERROR but we still have output space left (which means it must want more input, which we know is a truncation error). It would not be sufficient to just check whether zlib had consumed all the input at the start of the loop, as it might still want to generate output from what is in its internal state. Signed-off-by: Jeff King p...@peff.net --- The read_istream_pack_non_delta function does not suffer from the same issue, because it continually feeds more data via use_pack(). Although it may run into problems if it reads to the very end of a pack. I also didn't audit the other zlib code paths for similar problems; we may want to do that. 
streaming.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/streaming.c b/streaming.c index f4ab12b..cabcd9d 100644 --- a/streaming.c +++ b/streaming.c @@ -309,7 +309,7 @@ static read_method_decl(loose) st->z_state = z_done; break; } - if (status != Z_OK && status != Z_BUF_ERROR) { + if (status != Z_OK && (status != Z_BUF_ERROR || total_read < sz)) { git_inflate_end(&st->z); st->z_state = z_error; return -1; -- 1.8.2.13.g0f18d3c
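The two-sided meaning of Z_BUF_ERROR can be captured as a small predicate. This is a sketch of the patch's condition only; the constants are redefined locally so the example does not require zlib, and in the real code Z_STREAM_END is handled separately before this check:

```c
#include <assert.h>
#include <stddef.h>

/* Values mirror zlib.h, redefined here so this sketch needs no zlib. */
#define Z_OK 0
#define Z_BUF_ERROR (-5)

/* Returns -1 when the inflate status is fatal.  Z_BUF_ERROR is benign
 * only when the output buffer is full (total_read == sz): zlib merely
 * wants more room.  If output space remains, zlib must be asking for
 * more *input* -- and since the whole mmap'd object was already fed in,
 * that means the object is truncated. */
static int classify_inflate_status(int status, size_t total_read, size_t sz)
{
	if (status != Z_OK && (status != Z_BUF_ERROR || total_read < sz))
		return -1;	/* real zlib error, or truncated input */
	return 0;	/* keep looping */
}
```

Z_BUF_ERROR with a full output buffer keeps looping; the same status with space left is now treated as corruption.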
[PATCH 5/9] add test for streaming corrupt blobs
We do not have many tests for handling corrupt objects. This new test at least checks that we detect a byte error in a corrupt blob object while streaming it out with cat-file. Signed-off-by: Jeff King p...@peff.net --- t/t1060-object-corruption.sh | 34 ++ 1 file changed, 34 insertions(+) create mode 100755 t/t1060-object-corruption.sh diff --git a/t/t1060-object-corruption.sh b/t/t1060-object-corruption.sh new file mode 100755 index 000..d36994a --- /dev/null +++ b/t/t1060-object-corruption.sh @@ -0,0 +1,34 @@ +#!/bin/sh + +test_description='see how we handle various forms of corruption' +. ./test-lib.sh + +# convert 1234abcd to .git/objects/12/34abcd +obj_to_file() { + echo $(git rev-parse --git-dir)/objects/$(git rev-parse $1 | sed 's,..,&/,') +} + +# Convert byte at offset $2 of object $1 into '\0' +corrupt_byte() { + obj_file=$(obj_to_file $1) + chmod +w $obj_file + printf '\0' | dd of=$obj_file bs=1 seek=$2 +} + +test_expect_success 'setup corrupt repo' ' + git init bit-error + ( + cd bit-error + test_commit content + corrupt_byte HEAD:content.t 10 + ) +' + +test_expect_success 'streaming a corrupt blob fails' ' + ( + cd bit-error + test_must_fail git cat-file blob HEAD:content.t + ) +' + +test_done -- 1.8.2.13.g0f18d3c
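The path arithmetic the obj_to_file helper does with sed 's,..,&/,' is just "split the object name after two hex digits". A C mirror of it, for illustration only (loose_object_path is a hypothetical helper, not git's internal API):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Insert a '/' after the first two hex digits of an object name and
 * prefix the object directory, the same transformation the test's
 * sed expression performs: "1234abcd" -> "<gitdir>/objects/12/34abcd". */
static void loose_object_path(char *out, size_t outlen,
			      const char *gitdir, const char *hex)
{
	snprintf(out, outlen, "%s/objects/%.2s/%s", gitdir, hex, hex + 2);
}
```

The two-digit fan-out is how git shards loose objects across 256 subdirectories.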
[PATCH 6/9] streaming_write_entry: propagate streaming errors
When we are streaming an index blob to disk, we store the error from stream_blob_to_fd in the result variable, and then immediately overwrite that with the return value of close. That means we catch errors on close (e.g., problems committing the file to disk), but miss anything which happened before then. Signed-off-by: Jeff King p...@peff.net --- entry.c | 6 -- t/t1060-object-corruption.sh | 25 + 2 files changed, 29 insertions(+), 2 deletions(-) diff --git a/entry.c b/entry.c index 17a6bcc..002b2f2 100644 --- a/entry.c +++ b/entry.c @@ -126,8 +126,10 @@ static int streaming_write_entry(struct cache_entry *ce, char *path, fd = open_output_fd(path, ce, to_tempfile); if (0 <= fd) { result = stream_blob_to_fd(fd, ce->sha1, filter, 1); - *fstat_done = fstat_output(fd, state, &statbuf); - result = close(fd); + if (!result) { + *fstat_done = fstat_output(fd, state, &statbuf); + result = close(fd); + } } if (result && 0 <= fd) unlink(path); diff --git a/t/t1060-object-corruption.sh b/t/t1060-object-corruption.sh index d36994a..0792132 100755 --- a/t/t1060-object-corruption.sh +++ b/t/t1060-object-corruption.sh @@ -24,6 +24,15 @@ test_expect_success 'setup corrupt repo' ' ) ' +test_expect_success 'setup repo with missing object' ' + git init missing + ( + cd missing + test_commit content + rm -f $(obj_to_file HEAD:content.t) + ) +' + test_expect_success 'streaming a corrupt blob fails' ' ( cd bit-error @@ -31,4 +40,20 @@ test_expect_success 'streaming a corrupt blob fails' ' ) ' +test_expect_success 'read-tree -u detects bit-errors in blobs' ' + ( + cd bit-error + rm content.t + test_must_fail git read-tree --reset -u FETCH_HEAD + ) +' + +test_expect_success 'read-tree -u detects missing objects' ' + ( + cd missing + rm content.t + test_must_fail git read-tree --reset -u FETCH_HEAD + ) +' + test_done -- 1.8.2.13.g0f18d3c
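The underlying requirement (which the fd-leak review later in this thread also circles around) is: remember the first failure, but always run the close step. A sketch with hypothetical stand-ins; step_ok/step_fail simulate the outcomes of stream_blob_to_fd() and close(), they are not git functions:

```c
#include <assert.h>

static int step_ok(void)   { return 0; }	/* simulated success */
static int step_fail(void) { return -1; }	/* simulated failure */

/* Always invoke the close step so the descriptor is not leaked, but
 * report failure if either step failed, preferring the earlier error. */
static int write_then_close(int (*write_step)(void), int (*close_step)(void))
{
	int result = write_step();
	if (close_step() < 0 && !result)
		result = -1;
	return result;
}
```

Either failure propagates to the caller, and the close step runs unconditionally.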
[PATCH 7/9] add tests for cloning corrupted repositories
We try not to let corruption pass unnoticed over fetches and clones. For the most part, this works, but there are some broken corner cases, including: 1. We do not detect missing objects over git-aware transports. This is a little hard to test, because the sending side will actually complain about the missing object. To fool it, we corrupt a repository such that we have a misnamed object: it claims to be sha1 X, but is really Y. This lets the sender blindly transmit it, but it is the receiver's responsibility to verify that what it got is sane (and it does not). 2. We do not detect missing or misnamed blobs during the checkout phase of clone. Signed-off-by: Jeff King p...@peff.net --- t/t1060-object-corruption.sh | 41 + 1 file changed, 41 insertions(+) diff --git a/t/t1060-object-corruption.sh b/t/t1060-object-corruption.sh index 0792132..eb285c0 100755 --- a/t/t1060-object-corruption.sh +++ b/t/t1060-object-corruption.sh @@ -33,6 +33,19 @@ test_expect_success 'setup repo with missing object' ' ) ' +test_expect_success 'setup repo with misnamed object' ' + git init misnamed + ( + cd misnamed + test_commit content + good=$(obj_to_file HEAD:content.t) + blob=$(echo corrupt | git hash-object -w --stdin) + bad=$(obj_to_file $blob) + rm -f $good + mv $bad $good + ) +' + test_expect_success 'streaming a corrupt blob fails' ' ( cd bit-error @@ -56,4 +69,32 @@ test_expect_success 'read-tree -u detects missing objects' ' ) ' +# We use --bare to make sure that the transport detects it, not the checkout +# phase. 
+test_expect_success 'clone --no-local --bare detects corruption' ' + test_must_fail git clone --no-local --bare bit-error corrupt-transport +' + +test_expect_success 'clone --no-local --bare detects missing object' ' + test_must_fail git clone --no-local --bare missing missing-transport +' + +test_expect_failure 'clone --no-local --bare detects misnamed object' ' + test_must_fail git clone --no-local --bare misnamed misnamed-transport +' + +# We do not expect --local to detect corruption at the transport layer, +# so we are really checking the checkout() code path. +test_expect_success 'clone --local detects corruption' ' + test_must_fail git clone --local bit-error corrupt-checkout +' + +test_expect_failure 'clone --local detects missing objects' ' + test_must_fail git clone --local missing missing-checkout +' + +test_expect_failure 'clone --local detects misnamed objects' ' + test_must_fail git clone --local misnamed misnamed-checkout +' + test_done -- 1.8.2.13.g0f18d3c
[PATCH 8/9] clone: die on errors from unpack_trees
When clone is populating the working tree, it ignores the return status from unpack_trees; this means we may report a successful clone, even when the checkout fails. When checkout fails, we may want to leave the $GIT_DIR in place, as it might be possible to recover the data through further use of git checkout (e.g., if the checkout failed due to a transient error, disk full, etc). However, we already die on a number of other checkout-related errors, so this patch follows that pattern. In addition to marking a now-passing test, we need to adjust t5710, which blindly assumed it could make bogus clones of very deep alternates hierarchies. By using --bare, we can avoid it actually touching any objects. Signed-off-by: Jeff King p...@peff.net --- I think the leave the data behind fix may be to just set junk_pid = 0 a little sooner in cmd_clone (i.e., before checkout()). Then we would still die, but at least leave the fetched objects intact. builtin/clone.c | 3 ++- t/t1060-object-corruption.sh | 2 +- t/t5710-info-alternate.sh| 2 +- 3 files changed, 4 insertions(+), 3 deletions(-) diff --git a/builtin/clone.c b/builtin/clone.c index e0aaf13..7d48ef3 100644 --- a/builtin/clone.c +++ b/builtin/clone.c @@ -579,7 +579,8 @@ static int checkout(void) tree = parse_tree_indirect(sha1); parse_tree(tree); init_tree_desc(&t, tree->buffer, tree->size); - unpack_trees(1, &t, &opts); + if (unpack_trees(1, &t, &opts) < 0) + die(_("unable to checkout working tree")); if (write_cache(fd, active_cache, active_nr) || commit_locked_index(&lock_file)) diff --git a/t/t1060-object-corruption.sh b/t/t1060-object-corruption.sh index eb285c0..05ba4e7 100755 --- a/t/t1060-object-corruption.sh +++ b/t/t1060-object-corruption.sh @@ -89,7 +89,7 @@ test_expect_success 'clone --local detects corruption' ' test_must_fail git clone --local bit-error corrupt-checkout ' -test_expect_failure 'clone --local detects missing objects' ' +test_expect_success 'clone --local detects missing objects' ' test_must_fail git clone --local missing missing-checkout ' diff --git a/t/t5710-info-alternate.sh b/t/t5710-info-alternate.sh index aa04529..5a6e49d 100755 --- a/t/t5710-info-alternate.sh +++ b/t/t5710-info-alternate.sh @@ -58,7 +58,7 @@ git clone -l -s F G git clone -l -s D E git clone -l -s E F git clone -l -s F G -git clone -l -s G H' +git clone --bare -l -s G H' test_expect_success 'invalidity of deepest repository' \ 'cd H && { -- 1.8.2.13.g0f18d3c
[PATCH 9/9] clone: run check_everything_connected
When we fetch from a remote, we do a revision walk to make sure that what we received is connected to our existing history. We do not do the same check for clone, which should be able to check that we received an intact history graph. The upside of this patch is that it will make clone more resilient against propagating repository corruption. The downside is that we will now traverse rev-list --objects --all down to the roots, which may take some time (it is especially noticeable for a --local --bare clone). Note that we need to adjust t5710, which tries to make such a bogus clone. Rather than checking after the fact that our clone is bogus, we can simplify it to just make sure git clone reports failure. Signed-off-by: Jeff King p...@peff.net --- The slowdown is really quite terrible if you try git clone --bare linux-2.6.git. Even with this, the local-clone case already misses blob corruption. So it probably makes sense to restrict it to just the non-local clone case, which already has to do more work. Even still, it adds a non-trivial amount of work (linux-2.6 takes something like a minute to check). I don't like the idea of declaring git clone non-safe unless you turn on transfer.fsckObjects, though. It should have the same safety as git fetch. builtin/clone.c | 26 ++ t/t1060-object-corruption.sh | 2 +- t/t5710-info-alternate.sh| 8 +--- 3 files changed, 28 insertions(+), 8 deletions(-) diff --git a/builtin/clone.c b/builtin/clone.c index 7d48ef3..eceaa74 100644 --- a/builtin/clone.c +++ b/builtin/clone.c @@ -23,6 +23,7 @@ #include "branch.h" #include "remote.h" #include "run-command.h" +#include "connected.h" /* * Overall FIXMEs: @@ -485,12 +486,37 @@ static void update_remote_refs(const struct ref *refs, } } +static int iterate_ref_map(void *cb_data, unsigned char sha1[20]) +{ + struct ref **rm = cb_data; + struct ref *ref = *rm; + + /* +* Skip anything missing a peer_ref, which we are not +* actually going to write a ref for. +*/ + while (ref && !ref->peer_ref) + ref = ref->next; + /* Returning -1 notes end of list to the caller. */ + if (!ref) + return -1; + + hashcpy(sha1, ref->old_sha1); + *rm = ref->next; + return 0; +} + static void update_remote_refs(const struct ref *refs, const struct ref *mapped_refs, const struct ref *remote_head_points_at, const char *branch_top, const char *msg) { + const struct ref *rm = mapped_refs; + + if (check_everything_connected(iterate_ref_map, 0, &rm)) + die(_("remote did not send all necessary objects")); + if (refs) { write_remote_refs(mapped_refs); if (option_single_branch) diff --git a/t/t1060-object-corruption.sh b/t/t1060-object-corruption.sh index 05ba4e7..fd314ef 100755 --- a/t/t1060-object-corruption.sh +++ b/t/t1060-object-corruption.sh @@ -79,7 +79,7 @@ test_expect_success 'clone --no-local --bare detects missing object' ' test_must_fail git clone --no-local --bare missing missing-transport ' -test_expect_failure 'clone --no-local --bare detects misnamed object' ' +test_expect_success 'clone --no-local --bare detects misnamed object' ' test_must_fail git clone --no-local --bare misnamed misnamed-transport ' diff --git a/t/t5710-info-alternate.sh b/t/t5710-info-alternate.sh index 5a6e49d..8956c21 100755 --- a/t/t5710-info-alternate.sh +++ b/t/t5710-info-alternate.sh @@ -58,13 +58,7 @@ git clone -l -s F G git clone -l -s D E git clone -l -s E F git clone -l -s F G -git clone --bare -l -s G H' - -test_expect_success 'invalidity of deepest repository' \ -'cd H && { - test_valid_repo - test $? -ne 0 -}' +test_must_fail git clone --bare -l -s G H' cd $base_dir -- 1.8.2.13.g0f18d3c
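The callback's list walk is easy to model in miniature. The struct below is a simplified stand-in for git's struct ref, and the signature differs from the real cb_data-based one; it only demonstrates the skip-and-terminate logic:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified model of struct ref: git's real struct carries much more. */
struct ref {
	struct ref *next;
	struct ref *peer_ref;		/* NULL means "not being written" */
	unsigned char old_sha1[20];
};

/* Walk the list, skip entries with no peer_ref (refs we will not write),
 * copy out the sha1 of the next usable entry, and return -1 once the
 * list is exhausted -- the sentinel the connectivity walk expects. */
static int iterate_ref_map(struct ref **rm, unsigned char sha1[20])
{
	struct ref *ref = *rm;

	while (ref && !ref->peer_ref)
		ref = ref->next;
	if (!ref)
		return -1;		/* end of list */

	memcpy(sha1, ref->old_sha1, 20);
	*rm = ref->next;
	return 0;
}
```

Each call yields one tip sha1; the caller feeds those tips to the rev-list traversal.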
[PATCH v2] commit-tree: document -S option consistently
Commit ba3c69a9 (commit: teach --gpg-sign option, 2011-10-05) added the -S option but documented it in the command usage without indicating that the value is optional and forgot to mention it in the manpage. Later commit 098bbdc3 (Add -S, --gpg-sign option to manpage of git commit, 2012-10-21) documented the option in the porcelain manpage. Use wording from the porcelain manpage to document the option in the plumbing manpage. Also update the commit-tree usage summary to indicate that the -S value is optional to be consistent with the manpage and with the implementation. Signed-off-by: Brad King brad.k...@kitware.com --- On 03/25/2013 04:39 PM, Junio C Hamano wrote: This does not seem to use the same wording, though. git commit -S will pick up the signing key by calling get_signing_key() the same way git tag -s would, iow, keyid part is optional. Ahh, I was fooled by the commit-tree usage synopsis and didn't read deeply enough into the implementation. Here is an updated patch to cover that too. Documentation/git-commit-tree.txt |7 ++- builtin/commit-tree.c |2 +- 2 files changed, 7 insertions(+), 2 deletions(-) diff --git a/Documentation/git-commit-tree.txt b/Documentation/git-commit-tree.txt index 86ef56e..cafdc96 100644 --- a/Documentation/git-commit-tree.txt +++ b/Documentation/git-commit-tree.txt @@ -10,7 +10,9 @@ SYNOPSIS [verse] 'git commit-tree' <tree> [(-p <parent>)...] < changelog -'git commit-tree' [(-p <parent>)...] [(-m <message>)...] [(-F <file>)...] <tree> +'git commit-tree' [(-p <parent>)...] [-S[<keyid>]] [(-m <message>)...] + [(-F <file>)...] <tree> + DESCRIPTION --- @@ -52,6 +54,9 @@ OPTIONS Read the commit log message from the given file. Use `-` to read from the standard input. +-S[<keyid>]:: + GPG-sign commit. + Commit Information -- diff --git a/builtin/commit-tree.c b/builtin/commit-tree.c index eac901a..f641ff2 100644 --- a/builtin/commit-tree.c +++ b/builtin/commit-tree.c @@ -10,7 +10,7 @@ #include "utf8.h" #include "gpg-interface.h" -static const char commit_tree_usage[] = "git commit-tree [(-p <sha1>)...] [-S<signer>] [-m <message>] [-F <file>] <sha1> <changelog"; +static const char commit_tree_usage[] = "git commit-tree [(-p <sha1>)...] [-S[<keyid>]] [-m <message>] [-F <file>] <sha1> <changelog"; static void new_parent(struct commit *parent, struct commit_list **parents_p) { -- 1.7.10.4
Re: [PATCH 4/4] transport: drop "int cmp = cmp" hack
On Mon, Mar 25, 2013 at 12:50:54PM -0700, Junio C Hamano wrote: transport.c: In function 'get_refs_via_rsync': transport.c:127:29: error: 'cmp' may be used uninitialized in this function [-Werror=uninitialized] transport.c:109:7: note: 'cmp' was declared here gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3 Right, that's the same version I noted above. Is 4.6.3 the default compiler under a particular release of Ubuntu, or did you use their gcc-4.6 package? I'll check later with one of my VMs. The copy of U 12.04 I happened to have handy has that version installed. Ah, if you didn't explicitly run gcc-4.6, then it was probably the default version in 12.04 (as it was for a while in Debian testing, but they never actually made a release with it, so everybody is now on 4.7 by default). By the way, I find this piece of code less than pleasant: * It uses struct ref dummy = { NULL }, *tail = &dummy, and then accumulates things by appending to tail and then returns dummy.next. Why doesn't it do struct ref *retval = NULL, **tail = &retval; and pass tail around to append things, like everybody else? Is this another instance of People do not understand linked list problem? Perhaps fixing that may unconfuse the compiler? Ugh, that is horrible. At first I thought it was even wrong, as we pass &tail and not &dummy.next to read_loose_refs. But two wrongs _do_ make a right, because read_loose_refs, rather than do: *tail = new; tail = &new->next; does: (*tail)->next = new; *tail = new; Later, the tail of the same list is passed to insert_packed_refs(), which does in-place merging of this list and the contents of the packed_refs file. These two data sources have to be sorted the same way for this merge to work correctly, but there is no validating the order of the entries it reads from the packed-refs file. At least, it should barf when the file is not sorted. It could be lenient and accept a mal-sorted input, but I do not think that is advisable.
Actually, it is the head of the loose list (though it is hard to realize, because it is called tail!). I'll apply the attached on 'maint' for now, as rsync is not worth spending too many cycles on worrying about; I need to go to the bathroom to wash my eyes after staring this code for 20 minutes X-. Yeah, it's quite ugly. I really wonder if it is time to drop rsync support. I'd be really surprised if anybody is actively using it. I wonder, though, what made you look at this. It did not come up in my list of -Wuninitialized warnings. Did it get triggered by one of the other gcc versions? diff --git a/transport.c b/transport.c index 87b8f14..e6f9346 100644 --- a/transport.c +++ b/transport.c @@ -106,7 +106,8 @@ static void insert_packed_refs(const char *packed_refs, struct ref **list) return; for (;;) { - int cmp, len; + int cmp = 0; /* assigned before used */ + int len; if (!fgets(buffer, sizeof(buffer), f)) { fclose(f); I think that's fine. -Peff
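For reference, the pointer-to-pointer idiom Junio is asking for looks like this in isolation (struct node and append are illustrative, not transport.c's types): the tail pointer always addresses the slot that will receive the next node, so no dummy head is needed and no "(*tail)->next" juggling can go wrong.

```c
#include <assert.h>
#include <stddef.h>

struct node {
	int value;
	struct node *next;
};

/* Append n at the slot *tail points to, then return the address of the
 * new final next-pointer.  Callers keep: tail = append(tail, n); */
static struct node **append(struct node **tail, struct node *n)
{
	n->next = NULL;
	*tail = n;		/* fill the pending slot */
	return &n->next;	/* the new pending slot */
}
```

Starting from `struct node *head = NULL; struct node **tail = &head;`, every append is the same two stores, and `head` is the real list head rather than `dummy.next`.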
Re: [PATCH 5/9] add test for streaming corrupt blobs
On Mon, Mar 25, 2013 at 02:10:38PM -0700, Jonathan Nieder wrote: +# convert 1234abcd to .git/objects/12/34abcd +obj_to_file() { + echo $(git rev-parse --git-dir)/objects/$(git rev-parse $1 | sed 's,..,&/,') +} Maybe this would be clearer in multiple lines? commit=$(git rev-parse --verify $1) git_dir=$(git rev-parse --git-dir) tail=${commit#??} echo $git_dir/objects/${commit%$tail}/$tail Yeah, it started as: echo $1 | sed 's,..,&/,' and kind of got out of hand as it grew features. I'd be fine with your version (though $commit is not right, as it is any object, and in fact the test uses blobs). + +# Convert byte at offset $2 of object $1 into '\0' +corrupt_byte() { + obj_file=$(obj_to_file $1) + chmod +w $obj_file + printf '\0' | dd of=$obj_file bs=1 seek=$2 Some other tests such as t4205 also rely on printf being binary-safe. Phew. Yeah, I think it should be fine, though the choice of character does not actually matter, as long as it is different from what is currently at that position (for the sake of simplicity, I just determined experimentally that the given object is corrupted with the offset and character I chose). -Peff
Re: [PATCH] log: make show --show-signature use gpg.program setting
On Mon, Mar 25, 2013 at 01:03:52PM -0500, Hans Brigman wrote: show --show-signature doesn't currently use the gpg.program setting. Commit signing, tag signing, and tag verification currently use this setting properly, so the logic has been added to handle it here as well. Please wrap your commit messages at something reasonable (70 is probably as high as you want to go, given that log output is often shown indented). @@ -364,7 +365,8 @@ static int git_log_config(const char *var, const char *value, void *cb) use_mailmap_config = git_config_bool(var, value); return 0; } - + if (!prefixcmp(var, "gpg.")) + return git_gpg_config(var, value, NULL); if (grep_config(var, value, cb) < 0) return -1; The gpg config can also live in places other than gpg.*. Right now it is just user.signingkey, which log would not care about, but it feels like we are depending on an unnecessary detail here. We also don't know whether it would care about the callback data. Is there a reason not to do: if (git_gpg_config(var, value, cb) < 0) return -1; just like the grep_config call below? -Peff
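For context, the prefixcmp() test in the patch relies only on the contract "zero when the variable name starts with the prefix". A behavior-equivalent sketch (not git's exact implementation of prefixcmp):

```c
#include <assert.h>
#include <string.h>

/* Zero when str begins with prefix, non-zero otherwise -- the contract
 * the !prefixcmp(var, "gpg.") check in git_log_config() depends on. */
static int prefixcmp(const char *str, const char *prefix)
{
	return strncmp(str, prefix, strlen(prefix));
}
```

Config dispatch in git of this era is a chain of such prefix tests, each handing matching keys to the relevant subsystem's config callback.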
Re: [PATCH 6/9] streaming_write_entry: propagate streaming errors
On Mon, Mar 25, 2013 at 4:22 PM, Jeff King <p...@peff.net> wrote:
> diff --git a/entry.c b/entry.c
> index 17a6bcc..002b2f2 100644
> --- a/entry.c
> +++ b/entry.c
> @@ -126,8 +126,10 @@ static int streaming_write_entry(struct cache_entry *ce, char *path,
>  	fd = open_output_fd(path, ce, to_tempfile);
>  	if (0 <= fd) {
>  		result = stream_blob_to_fd(fd, ce->sha1, filter, 1);
> -		*fstat_done = fstat_output(fd, state, statbuf);
> -		result = close(fd);
> +		if (!result) {
> +			*fstat_done = fstat_output(fd, state, statbuf);
> +			result = close(fd);
> +		}

Is this intentionally leaking the opened 'fd' when stream_blob_to_fd() returns an error?

>  	}
>  	if (result && 0 <= fd)
>  		unlink(path);

Won't the unlink() now fail on Windows since 'fd' is still open?

-- ES
Re: [PATCH 6/9] streaming_write_entry: propagate streaming errors
On Mon, Mar 25, 2013 at 05:35:51PM -0400, Eric Sunshine wrote:

> On Mon, Mar 25, 2013 at 4:22 PM, Jeff King <p...@peff.net> wrote:
> > diff --git a/entry.c b/entry.c
> > index 17a6bcc..002b2f2 100644
> > --- a/entry.c
> > +++ b/entry.c
> > @@ -126,8 +126,10 @@ static int streaming_write_entry(struct cache_entry *ce, char *path,
> >  	fd = open_output_fd(path, ce, to_tempfile);
> >  	if (0 <= fd) {
> >  		result = stream_blob_to_fd(fd, ce->sha1, filter, 1);
> > -		*fstat_done = fstat_output(fd, state, statbuf);
> > -		result = close(fd);
> > +		if (!result) {
> > +			*fstat_done = fstat_output(fd, state, statbuf);
> > +			result = close(fd);
> > +		}
>
> Is this intentionally leaking the opened 'fd' when stream_blob_to_fd() returns an error?

Good catch. I was so focused on making sure we still called unlink that I forgot about the cleanup side-effect of close. I'll re-roll it.

-Peff
git ate my home directory :-(
Hi!

Today I've discovered that on the build server my home directory was empty. A post-mortem analysis showed that the git-clean command I've added to my kernel build script is the evildoer.

In my scripts I'm setting GIT_DIR to use git-fetch and git-reset without changing the current working directory all the time. But calling git-clean with GIT_DIR set acts basically like an "rm -Rf .".

Here a small demo:

	test@linux:~> git --version
	git version 1.8.1.4
	test@linux:~> ls
	test@linux:~> touch a b c d e
	test@linux:~> mkdir x
	test@linux:~> cd x
	test@linux:~/x> git init
	Initialized empty Git repository in /home/test/x/.git/
	test@linux:~/x> cd ..
	test@linux:~> ls
	a  b  c  d  e  x
	test@linux:~> export GIT_DIR=/home/test/x/.git/
	test@linux:~> git clean -d -f
	Removing a
	Removing b
	Removing c
	Removing d
	Removing e
	Removing x/
	test@linux:~> ls
	test@linux:~>
	test@linux:~> # :-(

Is this behavior intended?

Thanks,
//richard
Re: [PATCH 6/9] streaming_write_entry: propagate streaming errors
Jeff King wrote:

> When we are streaming an index blob to disk, we store the error from stream_blob_to_fd in the result variable, and then immediately overwrite that with the return value of close.

Good catch.

[...]
> --- a/entry.c
> +++ b/entry.c
> @@ -126,8 +126,10 @@ static int streaming_write_entry(struct cache_entry *ce, char *path,
>  	fd = open_output_fd(path, ce, to_tempfile);
>  	if (0 <= fd) {
>  		result = stream_blob_to_fd(fd, ce->sha1, filter, 1);
> -		*fstat_done = fstat_output(fd, state, statbuf);
> -		result = close(fd);
> +		if (!result) {
> +			*fstat_done = fstat_output(fd, state, statbuf);
> +			result = close(fd);
> +		}

Should this do something like

	{
		int fd, result = 0;

		fd = open_output_fd(path, ce, to_tempfile);
		if (fd < 0)
			return -1;
		result = stream_blob_to_fd(fd, ce->sha1, filter, 1);
		if (result)
			goto close_fd;
		*fstat_done = fstat_output(fd, state, statbuf);
	close_fd:
		result |= close(fd);
	unlink_path:
		if (result)
			unlink(path);
		return result;
	}

to avoid leaking the file descriptor?

> @@ -31,4 +40,20 @@ test_expect_success 'streaming a corrupt blob fails' '
> +test_expect_success 'read-tree -u detects bit-errors in blobs' '
> +	(
> +		cd bit-error &&
> +		rm content.t &&
> +		test_must_fail git read-tree --reset -u FETCH_HEAD
> +	)

Makes sense. Might make sense to use "rm -f" instead of "rm" to avoid failures if content.t is removed already.

> +'
> +
> +test_expect_success 'read-tree -u detects missing objects' '
> +	(
> +		cd missing &&
> +		rm content.t

Especially here.

Thanks,
Jonathan
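The "rm -f" suggestion matters because in an &&-chained test a plain rm aborts the whole test when the file is already gone, while rm -f treats a missing operand as success. A quick standalone illustration outside any git repo (the file name is arbitrary):

```shell
cd "$(mktemp -d)"            # empty scratch directory

rm no-such-file 2>/dev/null  # plain rm: missing operand is an error
echo "plain rm exit status: $?"

rm -f no-such-file           # rm -f: missing operand is fine, exits 0
echo "rm -f exit status: $?"
```

POSIX specifies that `rm -f` does not treat a nonexistent operand as an error, which is exactly the property the test wants when re-running after a previous (possibly failed) run.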
Re: git ate my home directory :-(
Hi,

Richard Weinberger wrote:

> In my scripts I'm setting GIT_DIR to use git-fetch and git-reset without changing the current working directory all the time.

Yeah, for historical reasons GIT_WORK_TREE defaults to $(pwd) when GIT_DIR is explicitly set. In git versions including the patch 2cd83d10bb6b (setup: suppress implicit "." work-tree for bare repos, 2013-03-08, currently in "next" but not "master"), you can set GIT_IMPLICIT_WORK_TREE=0 to avoid this behavior.

Thanks for a useful example, and sorry for the trouble.

Sincerely,
Jonathan
[PATCH v2 6/9] streaming_write_entry: propagate streaming errors
On Mon, Mar 25, 2013 at 02:39:34PM -0700, Jonathan Nieder wrote:

> > --- a/entry.c
> > +++ b/entry.c
> > @@ -126,8 +126,10 @@ static int streaming_write_entry(struct cache_entry *ce, char *path,
> >  	fd = open_output_fd(path, ce, to_tempfile);
> >  	if (0 <= fd) {
> >  		result = stream_blob_to_fd(fd, ce->sha1, filter, 1);
> > -		*fstat_done = fstat_output(fd, state, statbuf);
> > -		result = close(fd);
> > +		if (!result) {
> > +			*fstat_done = fstat_output(fd, state, statbuf);
> > +			result = close(fd);
> > +		}
>
> Should this do something like [...] to avoid leaking the file descriptor?

Yes, Eric Sunshine noticed this, too. Re-rolled patch is below, which I think is even a little cleaner.

> > +test_expect_success 'read-tree -u detects bit-errors in blobs' '
> > +	(
> > +		cd bit-error &&
> > +		rm content.t &&
> > +		test_must_fail git read-tree --reset -u FETCH_HEAD
> > +	)
>
> Makes sense. Might make sense to use "rm -f" instead of "rm" to avoid failures if content.t is removed already.

Yeah, good point. My original test looked like:

	git init bit-error
	git fetch .. corrupt
	...
	test_must_fail ...

but I ended up refactoring it to re-use the corrupted directories, and added the rm after the fact. The use of FETCH_HEAD is also bogus (read-tree is failing, but because we are giving it a bogus ref, not because of the corruption, so we are not actually testing anything anymore, even though it still passes). Both fixed in my re-roll.

-- >8 --
Subject: [PATCH] streaming_write_entry: propagate streaming errors

When we are streaming an index blob to disk, we store the error from stream_blob_to_fd in the result variable, and then immediately overwrite that with the return value of close. That means we catch errors on close (e.g., problems committing the file to disk), but miss anything which happened before then.

We can fix this by using bitwise-OR to accumulate errors in our result variable. While we're here, we can also simplify the error handling with an early return, which makes it easier to see under which circumstances we need to clean up.
Signed-off-by: Jeff King <p...@peff.net>
---
 entry.c                      | 16 +++++++++-------
 t/t1060-object-corruption.sh | 25 +++++++++++++++++++++++++
 2 files changed, 34 insertions(+), 7 deletions(-)

diff --git a/entry.c b/entry.c
index 17a6bcc..a20bcbc 100644
--- a/entry.c
+++ b/entry.c
@@ -120,16 +120,18 @@ static int streaming_write_entry(struct cache_entry *ce, char *path,
 				 const struct checkout *state, int to_tempfile,
 				 int *fstat_done, struct stat *statbuf)
 {
-	int result = -1;
+	int result = 0;
 	int fd;
 
 	fd = open_output_fd(path, ce, to_tempfile);
-	if (0 <= fd) {
-		result = stream_blob_to_fd(fd, ce->sha1, filter, 1);
-		*fstat_done = fstat_output(fd, state, statbuf);
-		result = close(fd);
-	}
-	if (result && 0 <= fd)
+	if (fd < 0)
+		return -1;
+
+	result |= stream_blob_to_fd(fd, ce->sha1, filter, 1);
+	*fstat_done = fstat_output(fd, state, statbuf);
+	result |= close(fd);
+
+	if (result)
 		unlink(path);
 	return result;
 }
diff --git a/t/t1060-object-corruption.sh b/t/t1060-object-corruption.sh
index d36994a..2945395 100755
--- a/t/t1060-object-corruption.sh
+++ b/t/t1060-object-corruption.sh
@@ -24,6 +24,15 @@ test_expect_success 'setup corrupt repo' '
 	)
 '
 
+test_expect_success 'setup repo with missing object' '
+	git init missing &&
+	(
+		cd missing &&
+		test_commit content &&
+		rm -f $(obj_to_file HEAD:content.t)
+	)
+'
+
 test_expect_success 'streaming a corrupt blob fails' '
 	(
 		cd bit-error &&
@@ -31,4 +40,20 @@ test_expect_success 'streaming a corrupt blob fails' '
 	)
 '
 
+test_expect_success 'read-tree -u detects bit-errors in blobs' '
+	(
+		cd bit-error &&
+		rm -f content.t &&
+		test_must_fail git read-tree --reset -u HEAD
+	)
+'
+
+test_expect_success 'read-tree -u detects missing objects' '
+	(
+		cd missing &&
+		rm -f content.t &&
+		test_must_fail git read-tree --reset -u HEAD
+	)
+'
+
 test_done
-- 
1.8.2.13.g0f18d3c
Re: [PATCH 4/4] transport: drop int cmp = cmp hack
Jeff King <p...@peff.net> writes:

> > I wonder, though, what made you look at this. It did not come up in my list of -Wuninitialized warnings. Did it get triggered by one of the other gcc versions?
>
> No, but the function in question has that questionable construct written by somebody who does not understand linked lists, and it disgusted me enough to look at where that list came from, which inevitably made me notice that "return dummy.next" that made me go "wat?"
>
> diff --git a/transport.c b/transport.c
> index 87b8f14..e6f9346 100644
> --- a/transport.c
> +++ b/transport.c
> @@ -106,7 +106,8 @@ static void insert_packed_refs(const char *packed_refs, struct ref **list)
>  		return;
> 
>  	for (;;) {
> -		int cmp, len;
> +		int cmp = 0; /* assigned before used */
> +		int len;
> 
>  		if (!fgets(buffer, sizeof(buffer), f)) {
>  			fclose(f);

I think that's fine.

-Peff
Re: git ate my home directory :-(
Jonathan Nieder <jrnie...@gmail.com> writes:

> In git versions including the patch 2cd83d10bb6b (setup: suppress implicit "." work-tree for bare repos, 2013-03-08, currently in "next" but not "master"), you can set GIT_IMPLICIT_WORK_TREE=0 to avoid this behavior.

WAT?
Re: git ate my home directory :-(
Jonathan Nieder <jrnie...@gmail.com> writes:

> Richard Weinberger wrote:
>
> > In my scripts I'm setting GIT_DIR to use git-fetch and git-reset without changing the current working directory all the time.
>
> Yeah, for historical reasons GIT_WORK_TREE defaults to $(pwd) when GIT_DIR is explicitly set.

And it *WILL* be that way til the end of time. Unless you are at the top level of your working tree, you are supposed to tell where the top level is with GIT_WORK_TREE when you use GIT_DIR. Always. And that is the answer you should be giving here, not the "implicit" stuff, which is an implementation detail to help aliases. I do not know how things will break when the end user sets and exports it to the environment, and I do not think we would want to make any promise on how it works.
Re: git ate my home directory :-(
Junio C Hamano wrote:

> Jonathan Nieder <jrnie...@gmail.com> writes:
>
> > In git versions including the patch 2cd83d10bb6b (setup: suppress implicit "." work-tree for bare repos, 2013-03-08, currently in "next" but not "master"), you can set GIT_IMPLICIT_WORK_TREE=0 to avoid this behavior.
>
> WAT?

Is that false?

If I understand the history correctly, the ability to set the GIT_DIR envvar was meant to allow a person to keep their .git directory outside the worktree. So you can do:

	git init my-favorite-repo
	cd my-favorite-repo
	... work as usual ...
	# cleaning time!
	mv .git ~/my-favorite-repo-metadata.git
	GIT_DIR=$HOME/my-favorite-repo-metadata.git; export GIT_DIR
	... work as usual ...

If you want to set GIT_DIR and treat it as a bare repository, the sane way to do that is simply

	cd ~/my-favorite-bare-repository.git
	... use git as usual ...

But if something (for example relative paths used by your script) ties your cwd somewhere else, you might really want to do

	GIT_DIR=~/my-favorite-bare-repository.git; export GIT_DIR
	... work as usual ...

and as a side effect of Jeff's patch there is now a mechanism to do that:

	GIT_IMPLICIT_WORK_TREE=0; export GIT_IMPLICIT_WORK_TREE
	GIT_DIR=~/my-favorite-bare-repository.git; export GIT_DIR
	... work as usual ...

This is of course unsafe because it ties your usage to a specific version of git. And the variable is not advertised in the documentation.
Re: git ate my home directory :-(
Am 25.03.2013 23:06, schrieb Junio C Hamano:
> Jonathan Nieder <jrnie...@gmail.com> writes:
>
> > Richard Weinberger wrote:
> >
> > > In my scripts I'm setting GIT_DIR to use git-fetch and git-reset without changing the current working directory all the time.
> >
> > Yeah, for historical reasons GIT_WORK_TREE defaults to $(pwd) when GIT_DIR is explicitly set.
>
> And it *WILL* be that way til the end of time. Unless you are at the top level of your working tree, you are supposed to tell where the top level is with GIT_WORK_TREE when you use GIT_DIR. Always. And that is the answer you should be giving here, not the "implicit" stuff, which is an implementation detail to help aliases. I do not know how things will break when the end user sets and exports it to the environment, and I do not think we would want to make any promise on how it works.

Okay, I have to set GIT_DIR _and_ GIT_WORK_TREE to make my scripts safe again? I've always set only GIT_DIR because it just worked (till today...).

Thanks,
//richard
Re: git ate my home directory :-(
Junio C Hamano wrote:

> I do not know how things will break when the end user sets and exports it to the environment, and I do not think we would want to make any promise on how it works.

That's a reasonable desire, and it means it's a good thing we noticed this before the envvar escaped to "master". People *will* use such exposed interfaces unless they are clearly marked as internal. That's just a fact of life.

Here's a rough patch to hopefully improve matters. Longer term, it would be nice to have something like GIT_IMPLICIT_WORK_TREE exposed to let scripts cache the result of the search for .git. Maybe something like GIT_BARE=(arbitrary value) would be a good interface.

Signed-off-by: Jonathan Nieder <jrnie...@gmail.com>
---
diff --git a/cache.h b/cache.h
index 59e5b53..8f92b6d 100644
--- a/cache.h
+++ b/cache.h
@@ -377,7 +377,7 @@ static inline enum object_type object_type(unsigned int mode)
  * of this, but we use it internally to communicate to sub-processes that we
  * are in a bare repo. If not set, defaults to true.
  */
-#define GIT_IMPLICIT_WORK_TREE_ENVIRONMENT "GIT_IMPLICIT_WORK_TREE"
+#define GIT_IMPLICIT_WORK_TREE_ENVIRONMENT "GIT_INTERNAL_IMPLICIT_WORK_TREE"
 
 /*
  * Repository-local GIT_* environment variables; these will be cleared
Re: git ate my home directory :-(
Richard Weinberger wrote:

> Okay, I have to set GIT_DIR _and_ GIT_WORK_TREE to make my scripts safe again? I've always set only GIT_DIR because it just worked (till today...).

chdir-ing into the git repo without setting any GIT_* vars is probably the simplest way to go.
Re: git ate my home directory :-(
Richard Weinberger <rich...@nod.at> writes:

> Okay, I have to set GIT_DIR _and_ GIT_WORK_TREE to make my scripts safe again? I've always set only GIT_DIR because it just worked (till today...).

That means you never run your script inside a subdirectory ;-)

If your $GIT_DIR is tied to a single working tree, a simpler way would be to add

	[core]
		worktree = /path/to/the/work/tree/

to $GIT_DIR/config.
Re: git ate my home directory :-(
On Mon, Mar 25, 2013 at 3:13 PM, Jonathan Nieder <jrnie...@gmail.com> wrote:
> Junio C Hamano wrote:
>
> > I do not know how things will break when the end user sets and exports it to the environment, and I do not think we would want to make any promise on how it works.
>
> That's a reasonable desire, and it means it's a good thing we noticed this before the envvar escaped to "master". People *will* use such exposed interfaces unless they are clearly marked as internal. That's just a fact of life.
>
> Here's a rough patch to hopefully improve matters. Longer term, it would be nice to have something like GIT_IMPLICIT_WORK_TREE exposed to let scripts cache the result of the search for .git. Maybe something like GIT_BARE=(arbitrary value) would be a good interface.
>
> Signed-off-by: Jonathan Nieder <jrnie...@gmail.com>
> ---
> diff --git a/cache.h b/cache.h
> index 59e5b53..8f92b6d 100644
> --- a/cache.h
> +++ b/cache.h
> @@ -377,7 +377,7 @@ static inline enum object_type object_type(unsigned int mode)
>   * of this, but we use it internally to communicate to sub-processes that we
>   * are in a bare repo. If not set, defaults to true.
>   */
> -#define GIT_IMPLICIT_WORK_TREE_ENVIRONMENT "GIT_IMPLICIT_WORK_TREE"
> +#define GIT_IMPLICIT_WORK_TREE_ENVIRONMENT "GIT_INTERNAL_IMPLICIT_WORK_TREE"

Maybe the environment variable for internal-use-only should be prefixed with an underscore?

-Brandon
Re: git ate my home directory :-(
Am 25.03.2013 23:20, schrieb Junio C Hamano:
> Richard Weinberger <rich...@nod.at> writes:
>
> > Okay, I have to set GIT_DIR _and_ GIT_WORK_TREE to make my scripts safe again? I've always set only GIT_DIR because it just worked (till today...).
>
> That means you never run your script inside a subdirectory ;-)
>
> If your $GIT_DIR is tied to a single working tree, a simpler way would be to add
>
>	[core]
>		worktree = /path/to/the/work/tree/

I've used GIT_DIR in my scripts because changing the current working directory within bash scripts often causes problems with other commands. That's why I've used patterns like:

	export GIT_DIR=/path/to/repo/.git
	git fetch ...
	do_this
	do_that
	git reset --hard FETCH_HEAD
	do_foo
	git whatever
	export GIT_DIR=/path/to/another/repo/.git
	git fetch ...
	...

But from now on I'll simply cd into the git repo...

Thanks,
//richard
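If the only worry is the cd leaking into the rest of the script, a subshell keeps the directory change local without touching GIT_DIR at all. A minimal sketch; run_in is a hypothetical helper, and the repo path and git commands stand in for the real build-script steps:

```shell
# Run a command inside a directory; the parentheses make the function
# body a subshell, so the cd never affects the caller.
run_in() (
	cd "$1" && shift && "$@"
)

# instead of:  export GIT_DIR=/path/to/repo/.git; git fetch ...
# one could write:
#	run_in /path/to/repo git fetch ...
#	run_in /path/to/repo git reset --hard FETCH_HEAD

# demo with a throwaway directory: the caller's cwd is unchanged
start=$PWD
run_in "$(mktemp -d)" pwd
[ "$PWD" = "$start" ] && echo "caller still in $start"
```

This also sidesteps the GIT_WORK_TREE question entirely, since git is always invoked from the top level of the working tree.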
Re: Why does 'submodule add' stage the relevant portions?
Am 25.03.2013 20:57, schrieb Ramkumar Ramachandra: Jens Lehmann wrote: Am 25.03.2013 09:59, schrieb Ramkumar Ramachandra: In my opinion, the 'git submodule add path' form is broken, because it doesn't relocate the object store of the submodule repository (a bug that we need to fix?). I don't think it is broken. Leaving the .git directory inside the submodule makes perfect sense for some users (e.g. those which never intend to push their submodule somewhere else) and doesn't do any harm unless you want to remove it again in the future (or need to go back to an older commit where the submodule directory would be in the way). We have to think very hard before making such changes to behavior quite some people may rely on, even though I agree some use cases would profit from it and I think we would do it differently if we could start over again. Doesn't that sound horribly crippled to you? Is there any advantage to leaving the .git directory inside the submodule? Isn't it always better to relocate it? It's not crippled at all, that is just the way it was from submodule day one. And no, it isn't always better to relocate it. E.g. when you want to be able to just tar away work tree and history someplace else because you don't have (or don't want) an upstream to push to, you'd be very surprised a submodule add moved your .git directory someplace else effectively nuking the backup of your history and refs (guess under what circumstances you'll notice that). While I believe most submodule users would benefit from such a relocation, I consider the other use cases as valid and we would introduce silent breakage on them. On the other hand I made all relevant commands complain loudly about the .git directory in the submodule's work tree when it matters, so users can do something about it when they need it and are told so. 
I'm not against changing that per se (we already did that for the update case when cloning submodules), but I'm really not convinced it is worth the downsides here (which it definitely was in the update case). What I think we need rather soonish is git submodule to-gitfile, which would help your case too. We should then hint that in those cases where we refuse to remove a submodule when using git rm or git submodule deinit (or the git mv for submodules I'm currently preparing). Why a new subcommand? Is there a problem if we do the relocation at the time of 'add'? Will some user expectation break? For me relocation at the time of 'add' would be ok with a new option (and it might also make sense to have a config option changing the default for users who want that), but not as the default. And leaving aside 'add', there are tons of submodules out there which were cloned with older Git who have their .git directory inside the work tree. So a new subcommand (or at least a helper script in contrib) to relocate the .git directory would really help here to migrate these legacy submodules without users having to worry about data loss. I always use the 'git submodule add repository path' form, which puts the object store of the submodule repository in .git/modules of the parent repository. This form is nothing like 'git add', but more like a 'git clone': arguably, 'submodule clone' is a better name for it. Hmm, it does a clone first and then adds the submodule and the change to .gitmodules, so I still believe add is the correct term for it. So I really like the color the shed currently has ;-) I meant a variant of add that would clone, but not stage. I was arguing for a new subcommand so that I don't have to touch 'submodule add', not for a rename. Ah, now I get it, I was confused by your reference to 'git submodule add repository path'. 
I have to admit I still don't understand the use case you have for adding a submodule without staging it, but maybe it is just too late here. Maybe the WTF You need to run this command from the toplevel of the working tree needs to be fixed first? I think it's a matter of a simple pushd, popd before the operation and building the correct relative path. I won't object such a change (even though I suspect it'll turn out more complicated than that). I'll have to investigate. Ok, looking forward to that as I believe this would be a worthwhile improvement. I'm not sure how it'll work with nested submodules though. Then again, I think nested submodules are Wrong; submodules are probably not the right tool for the job. How did you come to that conclusion? Nested submodules make perfect sense and most people agree that in hindsight --recursive should have been the default, but again we can't simply change that now. Okay, I'll do it step-by-step now, with a live example: 1. git clone gh:artagnon/dotfiles, a repository with submodules. 2. git submodule update --init .elisp/auto-complete, a repository that contains submodules. 3. .elisp/auto-complete/.gitmodules lists the submodules (lib/fuzzy,
Re: [PATCH v2 3/3] sha1_file: remove recursion in unpack_entry
Thomas Rast <tr...@student.ethz.ch> writes:

> Similar to the recursion in packed_object_info(), this leads to problems on stack-space-constrained systems in the presence of long delta chains.
>
> We proceed in three phases:
>
>  1. Dig through the delta chain, saving each delta object's offsets and size on an ad-hoc stack.
>
>  2. Unpack the base object at the bottom.
>
>  3. Apply the deltas from the stack.
>
> Signed-off-by: Thomas Rast <tr...@student.ethz.ch>

The above made me nervous but you are not pushing the actual delta data on the stack, so the patch looks good from a quick scan.

Thanks. Will queue.

> ---
>  sha1_file.c | 231 +++++++++++++++++++++++++++++++++++-------------------
>  1 file changed, 150 insertions(+), 81 deletions(-)
>
> diff --git a/sha1_file.c b/sha1_file.c
> index bd054d1..1b685b9 100644
> --- a/sha1_file.c
> +++ b/sha1_file.c
> @@ -1864,68 +1864,6 @@ static void add_delta_base_cache(struct packed_git *p, off_t base_offset,
>  static void *read_object(const unsigned char *sha1, enum object_type *type,
>  			 unsigned long *size);
> 
> -static void *unpack_delta_entry(struct packed_git *p,
> -				struct pack_window **w_curs,
> -				off_t curpos,
> -				unsigned long delta_size,
> -				off_t obj_offset,
> -				enum object_type *type,
> -				unsigned long *sizep)
> -{
> -	void *delta_data, *result, *base;
> -	unsigned long base_size;
> -	off_t base_offset;
> -
> -	base_offset = get_delta_base(p, w_curs, &curpos, *type, obj_offset);
> -	if (!base_offset) {
> -		error("failed to validate delta base reference "
> -		      "at offset %"PRIuMAX" from %s",
> -		      (uintmax_t)curpos, p->pack_name);
> -		return NULL;
> -	}
> -	unuse_pack(w_curs);
> -	base = cache_or_unpack_entry(p, base_offset, &base_size, type, 0);
> -	if (!base) {
> -		/*
> -		 * We're probably in deep shit, but let's try to fetch
> -		 * the required base anyway from another pack or loose.
> -		 * This is costly but should happen only in the presence
> -		 * of a corrupted pack, and is better than failing outright.
> -		 */
> -		struct revindex_entry *revidx;
> -		const unsigned char *base_sha1;
> -		revidx = find_pack_revindex(p, base_offset);
> -		if (!revidx)
> -			return NULL;
> -		base_sha1 = nth_packed_object_sha1(p, revidx->nr);
> -		error("failed to read delta base object %s"
> -		      " at offset %"PRIuMAX" from %s",
> -		      sha1_to_hex(base_sha1), (uintmax_t)base_offset,
> -		      p->pack_name);
> -		mark_bad_packed_object(p, base_sha1);
> -		base = read_object(base_sha1, type, &base_size);
> -		if (!base)
> -			return NULL;
> -	}
> -
> -	delta_data = unpack_compressed_entry(p, w_curs, curpos, delta_size);
> -	if (!delta_data) {
> -		error("failed to unpack compressed delta "
> -		      "at offset %"PRIuMAX" from %s",
> -		      (uintmax_t)curpos, p->pack_name);
> -		free(base);
> -		return NULL;
> -	}
> -	result = patch_delta(base, base_size,
> -			     delta_data, delta_size,
> -			     sizep);
> -	if (!result)
> -		die("failed to apply delta");
> -	free(delta_data);
> -	add_delta_base_cache(p, base_offset, base, base_size, *type);
> -	return result;
> -}
> -
>  static void write_pack_access_log(struct packed_git *p, off_t obj_offset)
>  {
>  	static FILE *log_file;
> @@ -1946,48 +1884,179 @@ static void write_pack_access_log(struct packed_git *p, off_t obj_offset)
> 
>  int do_check_packed_object_crc;
> 
> +#define UNPACK_ENTRY_STACK_PREALLOC 64
> +struct unpack_entry_stack_ent {
> +	off_t obj_offset;
> +	off_t curpos;
> +	unsigned long size;
> +};
> +
>  void *unpack_entry(struct packed_git *p, off_t obj_offset,
> -		   enum object_type *type, unsigned long *sizep)
> +		   enum object_type *final_type, unsigned long *final_size)
>  {
>  	struct pack_window *w_curs = NULL;
>  	off_t curpos = obj_offset;
> -	void *data;
> +	void *data = NULL;
> +	unsigned long size;
> +	enum object_type type;
> +	struct unpack_entry_stack_ent small_delta_stack[UNPACK_ENTRY_STACK_PREALLOC];
> +	struct unpack_entry_stack_ent *delta_stack = small_delta_stack;
> +	int delta_stack_nr = 0, delta_stack_alloc = UNPACK_ENTRY_STACK_PREALLOC;
> +	int base_from_cache = 0;
> 
>  	if (log_pack_access)
>  		write_pack_access_log(p, obj_offset);
> 
> -	if (do_check_packed_object_crc && p->index_version > 1) {
> -		struct revindex_entry *revidx =
> -			find_pack_revindex(p, obj_offset);
> -		unsigned long len = revidx[1].offset -
Re: [PATCH v2 6/9] streaming_write_entry: propagate streaming errors
Jeff King wrote:

> Both fixed in my re-roll.

Thanks! This and the rest of the patches up to and including patch 8 look good to me. I haven't decided what to think about patch 9 yet, but I suspect it would be good, too.

In the long term I suspect "git clone --worktree-only repo" (or some other standard interface for git-new-workdir functionality) is a better way to provide a convenient lightweight same-machine clone anyway.

Jonathan
[PATCH 0/5] Verify GPG signatures when merging and extend %G? pretty string
Rebased it onto the current 'master'. The second patch fixes that the GPG status parser ignores the first line of GPG status output (that would be caught by the new merge signature verification test case).

Sebastian Götte (5):
  Move commit GPG signature verification to commit.c
  commit.c/GPG signature verification: Also look at the first GPG status line
  merge/pull: verify GPG signatures of commits being merged
  merge/pull: Check for untrusted good GPG signatures
  pretty printing: extend %G? to include 'N' and 'U'

 Documentation/merge-options.txt    |   4 ++
 Documentation/pretty-formats.txt   |   3 +-
 builtin/merge.c                    |  35 +++++++-
 commit.c                           |  61 ++++++++++++
 commit.h                           |  10 ++++
 git-pull.sh                        |  10 +++--
 gpg-interface.h                    |   8 ++++
 pretty.c                           |  77 +++++++++++++-
 t/lib-gpg/pubring.gpg              | Bin 1164 -> 2359 bytes
 t/lib-gpg/random_seed              | Bin 600 -> 600 bytes
 t/lib-gpg/secring.gpg              | Bin 1237 -> 3734 bytes
 t/lib-gpg/trustdb.gpg              | Bin 1280 -> 1360 bytes
 t/t7612-merge-verify-signatures.sh |  61 ++++++++++++
 13 files changed, 195 insertions(+), 74 deletions(-)
 create mode 100755 t/t7612-merge-verify-signatures.sh

-- 
1.8.1.5