Re: [PATCH 1/2] alternates: accept double-quoted paths
On Wed, Dec 14, 2016 at 1:08 AM, Junio C Hamano wrote:
> Duy Nguyen writes:
>
>> At least attr has the same problem and is going the same direction
>> [1]. Cool. (I actually thought the patch was in and evidence that this
>> kind of backward compatibility breaking was ok, turns out the patch
>> has stayed around for years)
>>
>> [1] http://public-inbox.org/git/%3c20161110203428.30512-18-sbel...@google.com%3E/
>
> Seeing that again, I see this in the proposed log message:
>
>     Full pattern must be quoted. So 'pat"t"ern attr' will give exactly
>     'pat"t"ern', not 'pattern'.
>
> I cannot tell what it is trying to say.

It's another (bad) way of saying that quoting must start at the beginning
(so the closing quote must be at the end of the pattern, i.e. the whole
pattern is quoted).

> The log message may need to be cleaned up.

Yeah. Next time I'll write more or less "Look at Jeff's commit, this is
basically it" when Stephan re-rolls sb/attr
--
Duy
Re: "disabling bitmap writing, as some objects are not being packed"?
On Sat, Dec 17, 2016 at 4:28 AM, Junio C Hamano wrote:
> David Turner writes:
>
>> I'm a bit confused by the message "disabling bitmap writing, as some
>> objects are not being packed". I see it in my gc.log file on my git
>> server.
>>
>> 1. Its presence in the gc.log file prevents future automatic garbage
>> collection. This seems bad. I understand the desire to avoid making
>> things worse if a past gc has run into issues. But this warning is
>> non-fatal; the only consequence is that many operations get slower. But
>> a lack of gc when there are too many packs causes that consequence too
>> (often a much worse slowdown than would be caused by the missing
>> bitmap).
>>
>> So I wonder if it would be better for auto gc to grep gc.log for fatal
>> errors (as opposed to warnings) and only skip running if any are found.
>> Alternately, we could simply put warnings into gc.log.warning and
>> reserve gc.log for fatal errors. I'm not sure which would be simpler.
>
> I am not sure if string matching is really a good idea, as I'd
> assume that these messages are eligible for i18n.

And we can't grep for fatal errors anyway. The problem that led to
329e6e8794 was this line

    warning: There are too many unreachable loose objects; run 'git prune' to remove them.

which is not fatal.

> 329e6e8794 ("gc: save log from daemonized gc --auto and print it
> next time", 2015-09-19) wanted to notice that auto-gc is not
> making progress and used the presence of error messages as a cue.
> In your case, I think the auto-gc _is_ making progress, reducing
> the number of loose objects in the repository or consolidating many
> packfiles into one

Yeah, the key point is making progress, and to reliably detect that we
need some way for all the commands that git-gc executes to tell it about
that, git-repack in this particular case but...

> and the message is only about the fact that
> packing is punting and not producing a bitmap as you asked, which
> is different from not making any progress.
> I do not think log vs
> warn is a good criterion to tell them apart, either.
>
> In any case, as the error message asks the user to do, the user
> eventually wants to correct the root cause before removing the
> gc.log; I am not sure report_last_gc_error() is the place to correct
> this in the first place.
>
>> 2. I don't understand what would cause that message. That is, what bad
>> thing am I doing that I should stop doing? I've briefly skimmed the
>> code and commit message, but the answer isn't leaping out at me.
>
> Enabling bitmap generation for incremental packing that does not
> cram everything into a single pack is triggering it, I would
> presume. Perhaps we should ignore -b option in most of the cases
> and enable it only for "repack -a -d -f" codepath? Or detect that
> we are being run from "gc --auto" and automatically disable -b?

... since we have to change down in git-repack for that, perhaps doing
this is better. We can pass --auto (or something) to repack to tell it
about this special caller, so it only prints something to stderr in
serious cases. Or we detect cases where background gc'ing won't work
well and always do it in the foreground (e.g. when bitmap generation is
enabled).

> I have a feeling that an approach along that line is closer to the
> real solution than tweaking report_last_gc_error() and trying to
> deduce if we are making any progress.
--
Duy
Re: [RFC/PATCHv2] Makefile: add cppcheck target
On Fri, Dec 16, 2016 at 9:28 PM, Lars Schneider wrote:
>
> On 14 Dec 2016, at 12:24, Jeff King wrote:
>
> On Wed, Dec 14, 2016 at 10:27:31PM +1300, Chris Packham wrote:
>
>> Changes in v2:
>> - only run over actual git source files.
>> - omit any files in t/
>
> I actually wonder if FIND_SOURCE_FILES should be taking care of the "t/"
> thing. I think "make tags" finds tags in t4051/appended1.c, which is
> just silly.
>
>> - introduce CPPCHECK_FLAGS which can be overridden in the make command
>>   line. This also uses a GNU make-ism to allow CPPCHECK_ADD to specify
>>   additional checks to be enabled.
>
> The GNU-ism is fine; we already require GNU make to build.
>
> The patch itself is OK to me, I guess. The interesting part will be
> whether people start actually _using_ cppcheck and squelching the false
> positives.
>
> @Chris: If this gets in then it would be great to run it as part of the
> Travis-CI build: https://travis-ci.org/git/git/branches

Yeah, I was thinking about this. Since, as always with a new tool, there
are some doubts over its usefulness, I could easily hook it up to a
branch in my own fork of git and keep that branch rebased on top of pu.
I'd need to keep an eye on it myself and report errors on the list.

If that goes well, at some point someone will ask how I'm detecting
these errors. Then I can point them at this patchset, and if enough
people want easy access to it then that may provide an incentive for
this to be merged into git.git.
Re: [PATCH] pack-objects: don't warn about bitmaps on incremental pack
On Fri, Dec 16, 2016 at 06:59:35PM -0500, David Turner wrote:

> When running git pack-objects --incremental, we do not expect to be
> able to write a bitmap; it is very likely that objects in the new pack
> will have references to objects outside of the pack. So we don't need
> to warn the user about it.
> [...]
> diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
> index 0fd52bd..96de213 100644
> --- a/builtin/pack-objects.c
> +++ b/builtin/pack-objects.c
> @@ -1083,7 +1083,8 @@ static int add_object_entry(const unsigned char *sha1, enum object_type type,
>  	if (!want_object_in_pack(sha1, exclude, &found_pack, &found_offset)) {
>  		/* The pack is missing an object, so it will not have closure */
>  		if (write_bitmap_index) {
> -			warning(_(no_closure_warning));
> +			if (!incremental)
> +				warning(_(no_closure_warning));
>  			write_bitmap_index = 0;
>  		}
>  		return 0;

I agree that the user doesn't need to be warned about it when running
"gc --auto", but I wonder if somebody invoking "pack-objects
--incremental --write-bitmap-index" ought to be.

In other words, your patch is detecting at a low level that we've been
given a nonsense combination of options, but should we perhaps stop
passing nonsense in the first place?
Either at the repack level, with something like:

diff --git a/builtin/repack.c b/builtin/repack.c
index 80dd06b4a2..6608a902b1 100644
--- a/builtin/repack.c
+++ b/builtin/repack.c
@@ -231,8 +231,6 @@ int cmd_repack(int argc, const char **argv, const char *prefix)
 		argv_array_pushf(, "--no-reuse-delta");
 	if (no_reuse_object)
 		argv_array_pushf(, "--no-reuse-object");
-	if (write_bitmaps)
-		argv_array_push(, "--write-bitmap-index");
 
 	if (pack_everything & ALL_INTO_ONE) {
 		get_non_kept_pack_filenames(_packs);
@@ -256,8 +254,11 @@ int cmd_repack(int argc, const char **argv, const char *prefix)
 	} else {
 		argv_array_push(, "--unpacked");
 		argv_array_push(, "--incremental");
+		write_bitmap_index = 0;
 	}
 
+	if (write_bitmaps)
+		argv_array_push(, "--write-bitmap-index");
 	if (local)
 		argv_array_push(, "--local");
 	if (quiet)

Though that still means we do not warn on:

    git repack --write-bitmap-index

which is nonsense (it is asking for an incremental repack with bitmaps).
So maybe do it at the gc level, like:

diff --git a/builtin/gc.c b/builtin/gc.c
index 069950d0b4..d3c978c765 100644
--- a/builtin/gc.c
+++ b/builtin/gc.c
@@ -191,6 +191,11 @@ static void add_repack_all_option(void)
 	}
 }
 
+static void add_repack_incremental_option(void)
+{
+	argv_array_push(, "--no-write-bitmap-index");
+}
+
 static int need_to_gc(void)
 {
 	/*
@@ -208,7 +213,9 @@ static int need_to_gc(void)
 	 */
 	if (too_many_packs())
 		add_repack_all_option();
-	else if (!too_many_loose_objects())
+	else if (too_many_loose_objects())
+		add_repack_incremental_option();
+	else
 		return 0;
 
 	if (run_hook_le(NULL, "pre-auto-gc", NULL))

-Peff
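The gc-level decision above can be sketched outside of C. This is a rough Python model of the need_to_gc() flow after the hunk: a full repack may keep bitmaps, an incremental one disables them, and when neither condition holds gc is skipped. The function name and the repack option list here are illustrative only, not gc's exact argument construction:

```python
def build_auto_gc_repack_args(too_many_packs, too_many_loose_objects):
    """Hypothetical sketch of need_to_gc()'s repack argument selection."""
    args = ["repack"]
    if too_many_packs:
        # Consolidate everything into one pack; bitmaps stay enabled.
        args += ["-A", "-d", "-l"]
    elif too_many_loose_objects:
        # Incremental repack: a bitmap cannot have closure, so disable it
        # up front instead of letting pack-objects warn about it later.
        args.append("--no-write-bitmap-index")
    else:
        # Nothing to do; auto-gc does not run at all.
        return None
    return args
```

The point of moving the decision up to gc is visible here: the caller that knows it is doing an incremental pack is the one that suppresses the nonsense option combination.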
Re: [PATCH] git-p4: Fix multi-path changelist empty commits
On 17 December 2016 01:47:55 CET, Luke Diamand wrote:
>On 15 December 2016 at 17:14, George Vanburgh wrote:
>> From: George Vanburgh
>>
>> When importing from multiple perforce paths - we may attempt to
>> import a changelist that contains files from two (or more) of these
>> depot paths. Currently, this results in multiple git commits - one
>> containing the changes, and the other(s) as empty commits. This
>> behavior was introduced in commit 1f90a64 ("git-p4: reduce number of
>> server queries for fetches", 2015-12-19).
>
>That's definitely a bug, thanks for spotting that! Even more so for
>adding a test case.

Not a problem - thanks to you guys for maintaining such an awesome tool!

>> Reproduction Steps:
>>
>> 1. Have a git repo cloned from a perforce repo using multiple depot
>>    paths (e.g. //depot/foo and //depot/bar).
>> 2. Submit a single change to the perforce repo that makes changes in
>>    both //depot/foo and //depot/bar.
>> 3. Run "git p4 sync" to sync the change from #2.
>>
>> Change is synced as multiple commits, one for each depot path that
>> was affected.
>>
>> Using a set, instead of a list inside p4ChangesForPaths() ensures
>> that each changelist is unique to the returned list, and therefore
>> only a single commit is generated for each changelist.
>
>The change looks good to me apart from one missing "&&" in the test
>case (see below).

Oops - I'll correct that and resubmit :)

>Possibly need to rewrap the comment line (I think there's a 72
>character limit)?
Sure - I'll fix that in the resubmission

>
>Luke
>
>> Reported-by: James Farwell
>> Signed-off-by: George Vanburgh
>> ---
>>  git-p4.py               |  4 ++--
>>  t/t9800-git-p4-basic.sh | 22 +-
>>  2 files changed, 23 insertions(+), 3 deletions(-)
>>
>> diff --git a/git-p4.py b/git-p4.py
>> index fd5ca52..6307bc8 100755
>> --- a/git-p4.py
>> +++ b/git-p4.py
>> @@ -822,7 +822,7 @@ def p4ChangesForPaths(depotPaths, changeRange, requestedBlockSize):
>>              die("cannot use --changes-block-size with non-numeric revisions")
>>          block_size = None
>>
>> -    changes = []
>> +    changes = set()
>>
>>      # Retrieve changes a block at a time, to prevent running
>>      # into a MaxResults/MaxScanRows error from the server.
>> @@ -841,7 +841,7 @@ def p4ChangesForPaths(depotPaths, changeRange, requestedBlockSize):
>>
>>          # Insert changes in chronological order
>>          for line in reversed(p4_read_pipe_lines(cmd)):
>> -            changes.append(int(line.split(" ")[1]))
>> +            changes.add(int(line.split(" ")[1]))
>>
>>          if not block_size:
>>              break
>> diff --git a/t/t9800-git-p4-basic.sh b/t/t9800-git-p4-basic.sh
>> index 0730f18..4d72e0b 100755
>> --- a/t/t9800-git-p4-basic.sh
>> +++ b/t/t9800-git-p4-basic.sh
>> @@ -131,6 +131,26 @@ test_expect_success 'clone two dirs, @all, conflicting files' '
>>  	)
>>  '
>>
>> +test_expect_success 'clone two dirs, each edited by submit, single git commit' '
>> +	(
>> +		cd "$cli" &&
>> +		echo sub1/f4 >sub1/f4 &&
>> +		p4 add sub1/f4 &&
>> +		echo sub2/f4 >sub2/f4 &&
>> +		p4 add sub2/f4 &&
>> +		p4 submit -d "sub1/f4 and sub2/f4"
>> +	) &&
>> +	git p4 clone --dest="$git" //depot/sub1@all //depot/sub2@all &&
>> +	test_when_finished cleanup_git &&
>> +	(
>> +		cd "$git"

>Missing &&

>> +		git ls-files >lines &&
>> +		test_line_count = 4 lines &&
>> +		git log --oneline p4/master >lines &&
>> +		test_line_count = 5 lines
>> +	)
>> +'
>> +
>>  revision_ranges="2000/01/01,#head \
>>  	1,2080/01/01 \
>>  	2000/01/01,2080/01/01 \
>> @@ -147,7 +167,7 @@ test_expect_success 'clone using non-numeric revision ranges' '
>>  	(
>>  		cd "$git" &&
>>  		git ls-files >lines &&
>> -		test_line_count = 6 lines
>> +		test_line_count = 8 lines
>>  	)
>>  	done
>> '
>>
>> --
>> https://github.com/git/git/pull/311
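The list-vs-set distinction the patch relies on can be shown standalone. This is a hypothetical, simplified stand-in for p4ChangesForPaths() (no p4 server involved): git-p4 runs one "p4 changes" query per depot path, so a changelist touching several paths appears once per query, and accumulating into a plain list yields duplicates that later become empty commits:

```python
def changes_for_paths(path_to_changes):
    """Simplified model: one query result per depot path. A set deduplicates
    changelists that touched more than one of the cloned paths."""
    changes = set()
    for result in path_to_changes.values():
        for change in result:
            changes.add(change)
    # Return in chronological (numeric) order, as git-p4 imports them.
    return sorted(changes)

# Change 42 touched both //depot/foo and //depot/bar, so it shows up in
# both query results -- but must be imported exactly once.
queries = {"//depot/foo": [40, 42], "//depot/bar": [41, 42]}
```

With a list instead of a set, change 42 would be returned twice and the second import would produce an empty commit.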
Re: [PATCH] diff: prefer indent heuristic over compaction heuristic
Jacob Keller writes:

> From: Jacob Keller
>
> The current configuration code for enabling experimental heuristics
> prefers the last-set heuristic in the configuration. However, it is not
> necessarily easy to see what order the configuration will be read. This
> means that it is possible for a user to have accidentally enabled both
> heuristics, and end up only enabling the older compaction heuristic.
>
> Modify the code so that we do not clear the other heuristic when we set
> each heuristic enabled. Then, during diff_setup() when we check the
> configuration, we will first check the newer indent heuristic. This
> ensures that we only enable the newer heuristic if both have been
> enabled.
>
> Signed-off-by: Jacob Keller
> ---
>  diff.c | 10 ++
>  1 file changed, 2 insertions(+), 8 deletions(-)

Although I do not think we should spend too many braincycles on this one
(we should rather just remove the older one soonish), I think this patch
is going in a wrong direction.

I agree that "the last one wins" is a bit hard to see (until you check
with "git config -l" perhaps) but it at least is predictable. With this
patch, you need to KNOW that indent wins over compaction, perhaps by
knowing the order they were developed, which demands a lot more from the
users.

We probably should just keep one and remove the other.

> diff --git a/diff.c b/diff.c
> index ec8728362dae..48a5b2797e3d 100644
> --- a/diff.c
> +++ b/diff.c
> @@ -223,16 +223,10 @@ void init_diff_ui_defaults(void)
>
>  int git_diff_heuristic_config(const char *var, const char *value, void *cb)
>  {
> +	if (!strcmp(var, "diff.indentheuristic"))
>  		diff_indent_heuristic = git_config_bool(var, value);
> +	if (!strcmp(var, "diff.compactionheuristic"))
>  		diff_compaction_heuristic = git_config_bool(var, value);
>  	return 0;
>  }
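The two behaviours being argued about can be modelled in a few lines of Python. Git's config reader invokes a callback once per config line, in file order; the patched behaviour records both flags and then gives the indent heuristic fixed precedence, instead of letting the last-read line win. The function below is a hypothetical illustration, not git's actual config machinery:

```python
def pick_heuristic(config_lines):
    """Model of the patched precedence: record both booleans while reading
    config, then prefer the newer indent heuristic if both end up set."""
    indent = compaction = False
    for var, value in config_lines:
        # Like git_diff_heuristic_config(): one call per config line,
        # without clearing the other flag.
        if var == "diff.indentheuristic":
            indent = value
        elif var == "diff.compactionheuristic":
            compaction = value
    # Fixed precedence check, as done later in diff_setup():
    if indent:
        return "indent"
    return "compaction" if compaction else None
```

Under the old "last one wins" rule the second assertion below would instead return "compaction", which is exactly the order-dependence the patch removes.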
[PATCH v2 5/5] describe: teach describe negative pattern matches
From: Jacob Keller

Teach git-describe the `--discard` option which will allow specifying a
glob pattern of tags to ignore. This can be combined with the `--match`
patterns to enable more flexibility in determining which tags to
consider.

For example, suppose you wish to find the first official release tag
that contains a certain commit. If we assume that official release tags
are of the form "v*" and pre-release candidates include "*rc*" in their
name, we can now find the first tag that introduces commit abcdef via:

    git describe --contains --match="v*" --discard="*rc*"

Add documentation and tests for this change.

Signed-off-by: Jacob Keller
---
 Documentation/git-describe.txt |  8 ++++++++
 builtin/describe.c             | 21 +++++++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/Documentation/git-describe.txt b/Documentation/git-describe.txt
index 7ad41e2f6ade..a89bbde207b2 100644
--- a/Documentation/git-describe.txt
+++ b/Documentation/git-describe.txt
@@ -88,6 +88,14 @@ OPTIONS
 	patterns will be considered. Use `--no-match` to clear and reset the
 	list of patterns.
 
+--discard <pattern>::
+	Do not consider tags matching the given `glob(7)` pattern, excluding
+	the "refs/tags/" prefix. This can be used to narrow the tag space and
+	find only tags matching some meaningful criteria. If given multiple
+	times, a list of patterns will be accumulated and tags matching any
+	of the patterns will be discarded. Use `--no-discard` to clear and
+	reset the list of patterns.
+
 --always::
 	Show uniquely abbreviated commit object as fallback.
diff --git a/builtin/describe.c b/builtin/describe.c
index 5cc9e9abe798..c09288ee6321 100644
--- a/builtin/describe.c
+++ b/builtin/describe.c
@@ -29,6 +29,7 @@ static int max_candidates = 10;
 static struct hashmap names;
 static int have_util;
 static struct string_list patterns = STRING_LIST_INIT_NODUP;
+static struct string_list discard_patterns = STRING_LIST_INIT_NODUP;
 static int always;
 static const char *dirty;
 
@@ -130,6 +131,22 @@ static int get_name(const char *path, const struct object_id *oid, int flag, void *cb_data)
 		return 0;
 
 	/*
+	 * If we're given discard patterns, first discard any tag which match
+	 * any of the discard pattern.
+	 */
+	if (discard_patterns.nr) {
+		struct string_list_item *item;
+
+		if (!is_tag)
+			return 0;
+
+		for_each_string_list_item(item, &discard_patterns) {
+			if (!wildmatch(item->string, path + 10, 0, NULL))
+				return 0;
+		}
+	}
+
+	/*
 	 * If we're given patterns, accept only tags which match at least one
 	 * pattern.
	 */
@@ -421,6 +438,8 @@ int cmd_describe(int argc, const char **argv, const char *prefix)
 			N_("consider most recent tags (default: 10)")),
 		OPT_STRING_LIST(0, "match", &patterns, N_("pattern"),
 			   N_("only consider tags matching <pattern>")),
+		OPT_STRING_LIST(0, "discard", &discard_patterns, N_("pattern"),
+			   N_("do not consider tags matching <pattern>")),
 		OPT_BOOL(0, "always", &always,
 			N_("show abbreviated commit object as fallback")),
 		{OPTION_STRING, 0, "dirty", &dirty, N_("mark"),
@@ -458,6 +477,8 @@ int cmd_describe(int argc, const char **argv, const char *prefix)
 			argv_array_push(&args, "--tags");
 			for_each_string_list_item(item, &patterns)
 				argv_array_pushf(&args, "--refs=refs/tags/%s", item->string);
+			for_each_string_list_item(item, &discard_patterns)
+				argv_array_pushf(&args, "--discard=refs/tags/%s", item->string);
 		}
 		if (argc)
 			argv_array_pushv(&args, argv);
-- 
2.11.0.rc2.152.g4d04e67
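The combined filtering that get_name() ends up doing — discard patterns checked first, then the inclusive match patterns — can be sketched with Python's `fnmatch`, whose case-sensitive glob matching is close enough to wildmatch for illustration (this is a standalone sketch, not git's matcher):

```python
from fnmatch import fnmatchcase

def tag_is_candidate(tag, match_patterns, discard_patterns):
    """Sketch of the patched get_name() filtering, on the tag name with
    the "refs/tags/" prefix already stripped (path + 10 in the C code)."""
    # --discard wins first: any single match throws the tag out.
    if any(fnmatchcase(tag, p) for p in discard_patterns):
        return False
    # --match is inclusive: with patterns given, at least one must match.
    if match_patterns and not any(fnmatchcase(tag, p) for p in match_patterns):
        return False
    return True
```

This reproduces the commit-message example: with `--match="v*" --discard="*rc*"`, release tags pass and release candidates do not.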
[PATCH v2 4/5] describe: teach --match to accept multiple patterns
From: Jacob Keller

Teach `--match` to be accepted multiple times, accumulating a list of
patterns to match into a string list. Each pattern is inclusive, such
that a tag need only match one of the provided patterns to be
considered for matching.

This extension is useful as it enables more flexibility in what tags
match, and may avoid the need to run the describe command multiple
times to get the same result.

Add tests and update the documentation for this change.

Signed-off-by: Jacob Keller
---
 Documentation/git-describe.txt |  5 ++++-
 builtin/describe.c             | 30 +++++++++++++++++++++++-------
 t/t6120-describe.sh            | 19 +++++++++++++++++++
 3 files changed, 46 insertions(+), 8 deletions(-)

diff --git a/Documentation/git-describe.txt b/Documentation/git-describe.txt
index e4ac448ff565..7ad41e2f6ade 100644
--- a/Documentation/git-describe.txt
+++ b/Documentation/git-describe.txt
@@ -83,7 +83,10 @@ OPTIONS
 --match <pattern>::
 	Only consider tags matching the given `glob(7)` pattern,
 	excluding the "refs/tags/" prefix. This can be used to avoid
-	leaking private tags from the repository.
+	leaking private tags from the repository. If given multiple times, a
+	list of patterns will be accumulated, and tags matching any of the
+	patterns will be considered. Use `--no-match` to clear and reset the
+	list of patterns.
 
 --always::
 	Show uniquely abbreviated commit object as fallback.
diff --git a/builtin/describe.c b/builtin/describe.c
index 01490a157efc..5cc9e9abe798 100644
--- a/builtin/describe.c
+++ b/builtin/describe.c
@@ -28,7 +28,7 @@ static int abbrev = -1; /* unspecified */
 static int max_candidates = 10;
 static struct hashmap names;
 static int have_util;
-static const char *pattern;
+static struct string_list patterns = STRING_LIST_INIT_NODUP;
 static int always;
 static const char *dirty;
 
@@ -129,9 +129,24 @@ static int get_name(const char *path, const struct object_id *oid, int flag, void *cb_data)
 	if (!all && !is_tag)
 		return 0;
 
-	/* Accept only tags that match the pattern, if given */
-	if (pattern && (!is_tag || wildmatch(pattern, path + 10, 0, NULL)))
-		return 0;
+	/*
+	 * If we're given patterns, accept only tags which match at least one
+	 * pattern.
+	 */
+	if (patterns.nr) {
+		struct string_list_item *item;
+
+		if (!is_tag)
+			return 0;
+
+		for_each_string_list_item(item, &patterns) {
+			if (!wildmatch(item->string, path + 10, 0, NULL))
+				break;
+
+			/* If we get here, no pattern matched. */
+			return 0;
+		}
+	}
 
 	/* Is it annotated? */
 	if (!peel_ref(path, peeled.hash)) {
@@ -404,7 +419,7 @@ int cmd_describe(int argc, const char **argv, const char *prefix)
 			N_("only output exact matches"), 0),
 		OPT_INTEGER(0, "candidates", &max_candidates,
 			N_("consider most recent tags (default: 10)")),
-		OPT_STRING(0, "match", &pattern, N_("pattern"),
+		OPT_STRING_LIST(0, "match", &patterns, N_("pattern"),
 			   N_("only consider tags matching <pattern>")),
 		OPT_BOOL(0, "always", &always,
 			N_("show abbreviated commit object as fallback")),
@@ -430,6 +445,7 @@ int cmd_describe(int argc, const char **argv, const char *prefix)
 		die(_("--long is incompatible with --abbrev=0"));
 
 	if (contains) {
+		struct string_list_item *item;
 		struct argv_array args;
 
 		argv_array_init(&args);
@@ -440,8 +456,8 @@ int cmd_describe(int argc, const char **argv, const char *prefix)
 			argv_array_push(&args, "--always");
 		if (!all) {
 			argv_array_push(&args, "--tags");
-			if (pattern)
-				argv_array_pushf(&args, "--refs=refs/tags/%s", pattern);
+			for_each_string_list_item(item, &patterns)
+				argv_array_pushf(&args, "--refs=refs/tags/%s", item->string);
 		}
 		if (argc)
 			argv_array_pushv(&args, argv);
diff --git a/t/t6120-describe.sh b/t/t6120-describe.sh
index 85f269411cb3..9e5db9b87a1f 100755
--- a/t/t6120-describe.sh
+++ b/t/t6120-describe.sh
@@ -182,6 +182,10 @@ check_describe "test2-lightweight-*" --tags --match="test2-*"
 
 check_describe "test2-lightweight-*" --long --tags --match="test2-*" HEAD^
 
+check_describe "test1-lightweight-*" --long --tags --match="test1-*" --match="test2-*" HEAD^
+
+check_describe "test2-lightweight-*" --long --tags --match="test1-*" --no-match --match="test2-*" HEAD^
+
 test_expect_success 'name-rev with exact tags' '
 	echo A >expect &&
[PATCH v2 2/5] name-rev: extend --refs to accept multiple patterns
From: Jacob Keller

Teach git name-rev to take a string list of patterns from --refs instead
of only a single pattern. The list of patterns will be matched
inclusively, such that a ref only needs to match one pattern to be
included. A ref will only be excluded if it does not match any of the
patterns.

Add tests and documentation for this change.

Signed-off-by: Jacob Keller
---
 Documentation/git-name-rev.txt       |  4 +++-
 builtin/name-rev.c                   | 41 +++++++++++++++++++++++---------
 t/t6007-rev-list-cherry-pick-file.sh | 30 ++++++++++++++++++++++++++
 3 files changed, 62 insertions(+), 13 deletions(-)

diff --git a/Documentation/git-name-rev.txt b/Documentation/git-name-rev.txt
index ca28fb8e2a07..7433627db12d 100644
--- a/Documentation/git-name-rev.txt
+++ b/Documentation/git-name-rev.txt
@@ -26,7 +26,9 @@ OPTIONS
 
 --refs=<pattern>::
 	Only use refs whose names match a given shell pattern. The pattern
-	can be one of branch name, tag name or fully qualified ref name.
+	can be one of branch name, tag name or fully qualified ref name. If
+	given multiple times, use refs whose names match any of the given shell
+	patterns. Use `--no-refs` to clear any previous ref patterns given.
--all:: List all commits reachable from all refs diff --git a/builtin/name-rev.c b/builtin/name-rev.c index cd89d48b65e8..000a2a700ed3 100644 --- a/builtin/name-rev.c +++ b/builtin/name-rev.c @@ -108,7 +108,7 @@ static const char *name_ref_abbrev(const char *refname, int shorten_unambiguous) struct name_ref_data { int tags_only; int name_only; - const char *ref_filter; + struct string_list ref_filters; }; static struct tip_table { @@ -150,16 +150,33 @@ static int name_ref(const char *path, const struct object_id *oid, int flags, vo if (data->tags_only && !starts_with(path, "refs/tags/")) return 0; - if (data->ref_filter) { - switch (subpath_matches(path, data->ref_filter)) { - case -1: /* did not match */ - return 0; - case 0: /* matched fully */ - break; - default: /* matched subpath */ - can_abbreviate_output = 1; - break; + if (data->ref_filters.nr) { + struct string_list_item *item; + int matched = 0; + + /* See if any of the patterns match. */ + for_each_string_list_item(item, >ref_filters) { + /* +* We want to check every pattern even if we already +* found a match, just in case one of the later +* patterns could abbreviate the output. 
+*/ + switch (subpath_matches(path, item->string)) { + case -1: /* did not match */ + break; + case 0: /* matched fully */ + matched = 1; + break; + default: /* matched subpath */ + matched = 1; + can_abbreviate_output = 1; + break; + } } + + /* If none of the patterns matched, stop now */ + if (!matched) + return 0; } add_to_tip_table(oid->hash, path, can_abbreviate_output); @@ -306,11 +323,11 @@ int cmd_name_rev(int argc, const char **argv, const char *prefix) { struct object_array revs = OBJECT_ARRAY_INIT; int all = 0, transform_stdin = 0, allow_undefined = 1, always = 0, peel_tag = 0; - struct name_ref_data data = { 0, 0, NULL }; + struct name_ref_data data = { 0, 0, STRING_LIST_INIT_NODUP }; struct option opts[] = { OPT_BOOL(0, "name-only", _only, N_("print only names (no SHA-1)")), OPT_BOOL(0, "tags", _only, N_("only use tags to name the commits")), - OPT_STRING(0, "refs", _filter, N_("pattern"), + OPT_STRING_LIST(0, "refs", _filters, N_("pattern"), N_("only use refs matching ")), OPT_GROUP(""), OPT_BOOL(0, "all", , N_("list all commits reachable from all refs")), diff --git a/t/t6007-rev-list-cherry-pick-file.sh b/t/t6007-rev-list-cherry-pick-file.sh index 1408b608eb03..d072ec43b016 100755 --- a/t/t6007-rev-list-cherry-pick-file.sh +++ b/t/t6007-rev-list-cherry-pick-file.sh @@ -99,6 +99,36 @@ test_expect_success '--cherry-pick bar does not come up empty (II)' ' test_cmp actual.named expect ' +test_expect_success 'name-rev multiple --refs combine inclusive' ' + git rev-list --left-right --cherry-pick F...E -- bar > actual && + git
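The comment in the patch explains a subtle point of the loop: even after a ref has matched, the remaining patterns are still checked because a later subpath match permits abbreviating the output. The sketch below models that in Python with `fnmatch`; the suffix-splitting is a crude hypothetical stand-in for git's subpath_matches(), which tries each '/'-separated suffix of the ref name:

```python
from fnmatch import fnmatchcase

def ref_matches(path, patterns):
    """Sketch of the name_ref() pattern loop: every pattern is checked so
    that a subpath match can enable abbreviated output even when an
    earlier pattern already matched the full ref name."""
    matched = can_abbreviate = False
    for pattern in patterns:
        if fnmatchcase(path, pattern):
            matched = True  # matched fully
        else:
            # Crude subpath_matches() stand-in: try each '/'-suffix.
            parts = path.split("/")
            suffixes = ["/".join(parts[i:]) for i in range(1, len(parts))]
            if any(fnmatchcase(s, pattern) for s in suffixes):
                matched = can_abbreviate = True  # matched a subpath
    return matched, can_abbreviate
```

So `--refs=v*` both selects `refs/tags/v1.0` and allows printing it as the short name `v1.0`, while a full-name pattern selects it without enabling abbreviation.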
[PATCH v2 3/5] name-rev: add support to discard refs by pattern match
From: Jacob Keller

Extend name-rev further to support matching refs by adding `--discard`
patterns. These patterns will limit the scope of refs by discarding any
ref that matches at least one discard pattern. Checking the discard
patterns happens first, before checking the include --refs patterns.
This will allow more flexibility in matching certain kinds of
references.

Add tests and update Documentation for this change.

Signed-off-by: Jacob Keller
---
 Documentation/git-name-rev.txt |  7 +++++++
 builtin/name-rev.c             | 14 +++++++++++++-
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/Documentation/git-name-rev.txt b/Documentation/git-name-rev.txt
index 7433627db12d..9b46e5ea9aae 100644
--- a/Documentation/git-name-rev.txt
+++ b/Documentation/git-name-rev.txt
@@ -30,6 +30,13 @@ OPTIONS
 	given multiple times, use refs whose names match any of the given shell
 	patterns. Use `--no-refs` to clear any previous ref patterns given.
 
+--discard=<pattern>::
+	Do not use any ref whose name matches a given shell pattern. The
+	pattern can be one of branch name, tag name or fully qualified ref
+	name. If given multiple times, discard refs that match any of the given
+	shell patterns. Use `--no-discards` to clear the list of discard
+	patterns.
+ --all:: List all commits reachable from all refs diff --git a/builtin/name-rev.c b/builtin/name-rev.c index 000a2a700ed3..86479c17a7c9 100644 --- a/builtin/name-rev.c +++ b/builtin/name-rev.c @@ -109,6 +109,7 @@ struct name_ref_data { int tags_only; int name_only; struct string_list ref_filters; + struct string_list discard_filters; }; static struct tip_table { @@ -150,6 +151,15 @@ static int name_ref(const char *path, const struct object_id *oid, int flags, vo if (data->tags_only && !starts_with(path, "refs/tags/")) return 0; + if (data->discard_filters.nr) { + struct string_list_item *item; + + for_each_string_list_item(item, >discard_filters) { + if (subpath_matches(path, item->string) >= 0) + return 0; + } + } + if (data->ref_filters.nr) { struct string_list_item *item; int matched = 0; @@ -323,12 +333,14 @@ int cmd_name_rev(int argc, const char **argv, const char *prefix) { struct object_array revs = OBJECT_ARRAY_INIT; int all = 0, transform_stdin = 0, allow_undefined = 1, always = 0, peel_tag = 0; - struct name_ref_data data = { 0, 0, STRING_LIST_INIT_NODUP }; + struct name_ref_data data = { 0, 0, STRING_LIST_INIT_NODUP, STRING_LIST_INIT_NODUP }; struct option opts[] = { OPT_BOOL(0, "name-only", _only, N_("print only names (no SHA-1)")), OPT_BOOL(0, "tags", _only, N_("only use tags to name the commits")), OPT_STRING_LIST(0, "refs", _filters, N_("pattern"), N_("only use refs matching ")), + OPT_STRING_LIST(0, "discard", _filters, N_("pattern"), + N_("ignore refs matching ")), OPT_GROUP(""), OPT_BOOL(0, "all", , N_("list all commits reachable from all refs")), OPT_BOOL(0, "stdin", _stdin, N_("read from stdin")), -- 2.11.0.rc2.152.g4d04e67
[PATCH v2 1/5] doc: add documentation for OPT_STRING_LIST
From: Jacob Keller

Commit c8ba16391655 ("parse-options: add OPT_STRING_LIST helper",
2011-06-09) added OPT_STRING_LIST as a way to accumulate a repeated
list of strings. However, this was not documented in the
api-parse-options documentation. Add documentation now so that future
developers may learn of its existence.

Signed-off-by: Jacob Keller
---
 Documentation/technical/api-parse-options.txt | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/Documentation/technical/api-parse-options.txt b/Documentation/technical/api-parse-options.txt
index 27bd701c0d68..92791740aa64 100644
--- a/Documentation/technical/api-parse-options.txt
+++ b/Documentation/technical/api-parse-options.txt
@@ -168,6 +168,11 @@ There are some macros to easily define options:
 	Introduce an option with string argument.
 	The string argument is put into `str_var`.
 
+`OPT_STRING_LIST(short, long, &struct string_list, arg_str, description)`::
+	Introduce an option with a string argument.
+	Repeated invocations accumulate into a list of strings.
+	Reset and clear the list with `--no-option`.
+
 `OPT_INTEGER(short, long, &int_var, description)`::
 	Introduce an option with integer argument.
 	The integer is put into `int_var`.
-- 
2.11.0.rc2.152.g4d04e67
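For readers more familiar with Python, the semantics being documented have a close analogue in argparse: a repeated flag accumulating its arguments into a list. This is only an analogy — parse-options additionally lets `--no-option` reset the accumulated list, which argparse does not provide out of the box:

```python
import argparse

# Rough argparse analogue of OPT_STRING_LIST: each occurrence of --match
# appends its argument to a list, mirroring repeated-flag accumulation.
parser = argparse.ArgumentParser()
parser.add_argument("--match", action="append", default=[], metavar="pattern")

args = parser.parse_args(["--match", "v*", "--match", "test*"])
# args.match now holds both patterns, in command-line order.
```

The same shape — repeated flag, ordered list, later consumers iterating over every entry — is what the describe and name-rev patches in this series build on.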
[PATCH v2 0/5] extend describe and name-rev pattern matching
From: Jacob Keller

This series adds support for extending describe and name-rev pattern
matching to allow (a) taking multiple patterns and (b) negative patterns
which discard instead of keep. These changes increase the flexibility of
the describe mechanism so that searching for specific tag formats can be
done more easily.

Changes since v1:
 * add new patches for negative pattern matching
 * tweak the documentation slightly

-- interdiff since v1 --

diff --git c/Documentation/git-describe.txt w/Documentation/git-describe.txt
index c85f2811ce28..a89bbde207b2 100644
--- c/Documentation/git-describe.txt
+++ w/Documentation/git-describe.txt
@@ -83,11 +83,18 @@ OPTIONS
 --match <pattern>::
 	Only consider tags matching the given `glob(7)` pattern,
 	excluding the "refs/tags/" prefix. This can be used to avoid
-	leaking private tags from the repository, or to shrink the scope of
-	searched tags to avoid -rc tags or similar. If given multiple a list
-	of patterns will be accumulated, and tags matching any of the patterns
-	will be considered. Use `--no-match` to clear and reset the list of
-	patterns.
+	leaking private tags from the repository. If given multiple times, a
+	list of patterns will be accumulated, and tags matching any of the
+	patterns will be considered. Use `--no-match` to clear and reset the
+	list of patterns.
+
+--discard <pattern>::
+	Do not consider tags matching the given `glob(7)` pattern, excluding
+	the "refs/tags/" prefix. This can be used to narrow the tag space and
+	find only tags matching some meaningful criteria. If given multiple
+	times, a list of patterns will be accumulated and tags matching any
+	of the patterns will be discarded. Use `--no-discard` to clear and
+	reset the list of patterns.
 
 --always::
 	Show uniquely abbreviated commit object as fallback.
diff --git c/Documentation/git-name-rev.txt w/Documentation/git-name-rev.txt index 7433627db12d..9b46e5ea9aae 100644 --- c/Documentation/git-name-rev.txt +++ w/Documentation/git-name-rev.txt @@ -30,6 +30,13 @@ OPTIONS given multiple times, use refs whose names match any of the given shell patterns. Use `--no-refs` to clear any previous ref patterns given. +--discard=:: + Do not use any ref whose name matches a given shell pattern. The + pattern can be one of branch name, tag name or fully qualified ref + name. If given multiple times, discard refs that match any of the given + shell patterns. Use `--no-discards` to clear the list of discard + patterns. + --all:: List all commits reachable from all refs diff --git c/builtin/describe.c w/builtin/describe.c index e3ceab65e273..c09288ee6321 100644 --- c/builtin/describe.c +++ w/builtin/describe.c @@ -29,6 +29,7 @@ static int max_candidates = 10; static struct hashmap names; static int have_util; static struct string_list patterns = STRING_LIST_INIT_NODUP; +static struct string_list discard_patterns = STRING_LIST_INIT_NODUP; static int always; static const char *dirty; @@ -130,6 +131,22 @@ static int get_name(const char *path, const struct object_id *oid, int flag, voi return 0; /* +* If we're given discard patterns, first discard any tag which match +* any of the discard pattern. +*/ + if (discard_patterns.nr) { + struct string_list_item *item; + + if (!is_tag) + return 0; + + for_each_string_list_item(item, _patterns) { + if (!wildmatch(item->string, path + 10, 0, NULL)) + return 0; + } + } + + /* * If we're given patterns, accept only tags which match at least one * pattern. */ @@ -140,9 +157,8 @@ static int get_name(const char *path, const struct object_id *oid, int flag, voi return 0; for_each_string_list_item(item, ) { - if (!wildmatch(item->string, path + 10, 0, NULL)) { + if (!wildmatch(item->string, path + 10, 0, NULL)) break; - } /* If we get here, no pattern matched. 
*/ return 0; @@ -422,6 +438,8 @@ int cmd_describe(int argc, const char **argv, const char *prefix) N_("consider most recent tags (default: 10)")), OPT_STRING_LIST(0, "match", , N_("pattern"), N_("only consider tags matching ")), + OPT_STRING_LIST(0, "discard", _patterns, N_("pattern"), + N_("do not consider tags matching ")), OPT_BOOL(0, "always",, N_("show abbreviated commit object as fallback")), {OPTION_STRING, 0, "dirty", , N_("mark"), @@
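For illustration only, the combined --match/--discard semantics described above can be sketched in Python, using the standard-library fnmatch as a rough stand-in for git's wildmatch(); the helper name and the tag list here are made up, not part of the patch:

```python
from fnmatch import fnmatchcase

def tag_is_candidate(tag, match_patterns, discard_patterns):
    """A tag survives only if it matches no --discard pattern and,
    when --match patterns are given, matches at least one of them.
    Discard patterns are checked first, mirroring the patch."""
    if any(fnmatchcase(tag, p) for p in discard_patterns):
        return False
    if match_patterns and not any(fnmatchcase(tag, p) for p in match_patterns):
        return False
    return True

tags = ["v1.0", "v1.1-rc1", "private/v1.2", "v2.0"]
# Keep v* tags but throw away release candidates.
kept = [t for t in tags if tag_is_candidate(t, ["v*"], ["*-rc*"])]
# kept == ["v1.0", "v2.0"]
```

Note that fnmatch does not treat '/' specially the way wildmatch can, so this only models the accumulate-and-filter behaviour, not the exact pattern syntax.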
[PATCHv2] git-p4: handle symlinked directories
This is an updated version of my earlier git-p4 symlink fix. This one now treats addition of all symlinks in the same way, rather than displaying the target file if linking to a file, or just the link target name if it's a directory. That makes the git-p4 summary behave more like normal git commands (which also treat symlinks uniformly).

This is a very slight change in behaviour, but I don't think it can break anything, since it only affects the summary that goes after the P4 change template.

Luke Diamand (1):
  git-p4: avoid crash adding symlinked directory

 git-p4.py                     | 26 ++++++++++++++++++++------
 t/t9830-git-p4-symlink-dir.sh | 43 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 63 insertions(+), 6 deletions(-)
 create mode 100755 t/t9830-git-p4-symlink-dir.sh

-- 
2.8.2.703.g78b384c.dirty
[PATCHv2] git-p4: avoid crash adding symlinked directory
When submitting to P4, if git-p4 came across a symlinked directory, then during the generation of the submit diff, it would try to open it as a normal file and fail.

Spot symlinks (of any type) and output a description of the symlink instead. Add a test case.

Signed-off-by: Luke Diamand
---
 git-p4.py                     | 26 ++++++++++++++++++++------
 t/t9830-git-p4-symlink-dir.sh | 43 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 63 insertions(+), 6 deletions(-)
 create mode 100755 t/t9830-git-p4-symlink-dir.sh

diff --git a/git-p4.py b/git-p4.py
index fd5ca52..16d0b8a 100755
--- a/git-p4.py
+++ b/git-p4.py
@@ -25,6 +25,7 @@ import stat
 import zipfile
 import zlib
 import ctypes
+import errno
 
 try:
     from subprocess import CalledProcessError
@@ -1538,7 +1539,7 @@ class P4Submit(Command, P4UserMap):
         if response == 'n':
             return False
 
-    def get_diff_description(self, editedFiles, filesToAdd):
+    def get_diff_description(self, editedFiles, filesToAdd, symlinks):
         # diff
         if os.environ.has_key("P4DIFF"):
             del(os.environ["P4DIFF"])
@@ -1553,10 +1554,17 @@ class P4Submit(Command, P4UserMap):
             newdiff += "==== new file ====\n"
             newdiff += "--- /dev/null\n"
             newdiff += "+++ %s\n" % newFile
-            f = open(newFile, "r")
-            for line in f.readlines():
-                newdiff += "+" + line
-            f.close()
+
+            is_link = os.path.islink(newFile)
+            expect_link = newFile in symlinks
+
+            if is_link and expect_link:
+                newdiff += "+%s\n" % os.readlink(newFile)
+            else:
+                f = open(newFile, "r")
+                for line in f.readlines():
+                    newdiff += "+" + line
+                f.close()
 
         return (diff + newdiff).replace('\r\n', '\n')
 
@@ -1574,6 +1582,7 @@ class P4Submit(Command, P4UserMap):
         filesToDelete = set()
         editedFiles = set()
         pureRenameCopy = set()
+        symlinks = set()
         filesToChangeExecBit = {}
 
         for line in diff:
@@ -1590,6 +1599,11 @@ class P4Submit(Command, P4UserMap):
                 filesToChangeExecBit[path] = diff['dst_mode']
                 if path in filesToDelete:
                     filesToDelete.remove(path)
+
+                dst_mode = int(diff['dst_mode'], 8)
+                if dst_mode == 0120000:
+                    symlinks.add(path)
+
             elif modifier == "D":
                 filesToDelete.add(path)
                 if path in filesToAdd:
@@ -1727,7 +1741,7 @@ class P4Submit(Command, P4UserMap):
         separatorLine = "######## everything below this line is just the diff #######\n"
         if not self.prepare_p4_only:
             submitTemplate += separatorLine
-            submitTemplate += self.get_diff_description(editedFiles, filesToAdd)
+            submitTemplate += self.get_diff_description(editedFiles, filesToAdd, symlinks)
 
         (handle, fileName) = tempfile.mkstemp()
         tmpFile = os.fdopen(handle, "w+b")
diff --git a/t/t9830-git-p4-symlink-dir.sh b/t/t9830-git-p4-symlink-dir.sh
new file mode 100755
index 0000000..3dc528b
--- /dev/null
+++ b/t/t9830-git-p4-symlink-dir.sh
@@ -0,0 +1,43 @@
+#!/bin/sh
+
+test_description='git p4 symlinked directories'
+
+. ./lib-git-p4.sh
+
+test_expect_success 'start p4d' '
+	start_p4d
+'
+
+test_expect_success 'symlinked directory' '
+	(
+		cd "$cli" &&
+		: >first_file.t &&
+		p4 add first_file.t &&
+		p4 submit -d "first change"
+	) &&
+	git p4 clone --dest "$git" //depot &&
+	(
+		cd "$git" &&
+		mkdir -p some/sub/directory &&
+		mkdir -p other/subdir2 &&
+		: > other/subdir2/file.t &&
+		(cd some/sub/directory && ln -s ../../../other/subdir2 .) &&
+		git add some other &&
+		git commit -m "symlinks" &&
+		git config git-p4.skipSubmitEdit true &&
+		git p4 submit -v
+	) &&
+	(
+		cd "$cli" &&
+		p4 sync &&
+		test -L some/sub/directory/subdir2
+		test_path_is_file some/sub/directory/subdir2/file.t
+	)
+
+'
+
+test_expect_success 'kill p4d' '
+	kill_p4d
+'
+
+test_done
-- 
2.8.2.703.g78b384c.dirty
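The core of the fix above is easy to show in isolation. The following is a standalone Python sketch (not the actual git-p4 code; the helper name is made up) of the new get_diff_description() behaviour: when a newly added path is recorded as a symlink, emit its link target rather than opening it as a regular file, which is what used to crash for symlinked directories. It assumes a POSIX system where os.symlink is available:

```python
import os
import tempfile

def added_file_diff_lines(new_file, symlinks):
    """Return the '+' lines for a newly added file, the way the patched
    git-p4 does: symlinks are described by their target, regular files
    by their contents."""
    if os.path.islink(new_file) and new_file in symlinks:
        # A symlink (to a file or a directory): show the target path.
        return ["+%s" % os.readlink(new_file)]
    # A regular file: show its contents, one '+' line per source line.
    with open(new_file, "r") as f:
        return ["+" + line.rstrip("\n") for line in f]

# Demo: a symlink pointing at a directory no longer crashes.
d = tempfile.mkdtemp()
target = os.path.join(d, "subdir2")
os.mkdir(target)
link = os.path.join(d, "link")
os.symlink(target, link)
lines = added_file_diff_lines(link, {link})
```

The crash in the old code came from calling open() on a path that is a directory behind a symlink; checking islink() first sidesteps that entirely.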
[PATCH] diff: prefer indent heuristic over compaction heuristic
From: Jacob Keller

The current configuration code for enabling experimental heuristics prefers the last-set heuristic in the configuration. However, it is not necessarily easy to see in which order the configuration will be read. This means that it is possible for a user to have accidentally enabled both heuristics, and end up only enabling the older compaction heuristic.

Modify the code so that we do not clear the other heuristic when we set each heuristic enabled. Then, during diff_setup() when we check the configuration, we will first check the newer indent heuristic. This ensures that we only enable the newer heuristic if both have been enabled.

Signed-off-by: Jacob Keller
---
 diff.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/diff.c b/diff.c
index ec8728362dae..48a5b2797e3d 100644
--- a/diff.c
+++ b/diff.c
@@ -223,16 +223,10 @@ void init_diff_ui_defaults(void)
 int git_diff_heuristic_config(const char *var, const char *value, void *cb)
 {
-	if (!strcmp(var, "diff.indentheuristic")) {
+	if (!strcmp(var, "diff.indentheuristic"))
 		diff_indent_heuristic = git_config_bool(var, value);
-		if (diff_indent_heuristic)
-			diff_compaction_heuristic = 0;
-	}
-	if (!strcmp(var, "diff.compactionheuristic")) {
+	if (!strcmp(var, "diff.compactionheuristic"))
 		diff_compaction_heuristic = git_config_bool(var, value);
-		if (diff_compaction_heuristic)
-			diff_indent_heuristic = 0;
-	}
 	return 0;
 }
-- 
2.11.0.rc2.152.g4d04e67
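The before/after behaviour of this patch can be modelled outside of C. Below is a hedged Python sketch (function and variable names are invented for illustration) of the fixed scheme: config reading merely records each boolean, never clearing the other, and precedence is resolved once afterwards with the newer indent heuristic checked first:

```python
def resolve_heuristic(config_items):
    """Model of the fixed flow: record each config key as it is read
    (the last value of a given key wins, as in git's config parser),
    then resolve precedence once, indent heuristic first."""
    indent = compaction = False
    for var, value in config_items:
        if var == "diff.indentheuristic":
            indent = value
        elif var == "diff.compactionheuristic":
            compaction = value
    # Resolution happens after all config is read, mirroring diff_setup().
    if indent:
        return "indent"
    if compaction:
        return "compaction"
    return None

# With both enabled -- in either file order -- the newer heuristic wins,
# instead of silently depending on which key was read last.
chosen = resolve_heuristic([
    ("diff.indentheuristic", True),
    ("diff.compactionheuristic", True),
])
# chosen == "indent"
```

In the old scheme, setting one key cleared the other during parsing, so the outcome depended on config read order; deferring the decision makes it order-independent.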
Re: [PATCH] git-p4: Fix multi-path changelist empty commits
On 15 December 2016 at 17:14, George Vanburgh wrote:
> From: George Vanburgh
>
> When importing from multiple perforce paths - we may attempt to import a
> changelist that contains files from two (or more) of these depot paths.
> Currently, this results in multiple git commits - one containing the
> changes, and the other(s) as empty commits. This behavior was introduced
> in commit 1f90a64 ("git-p4: reduce number of server queries for
> fetches", 2015-12-19).

That's definitely a bug, thanks for spotting that! Even more so for adding a test case.

>
> Reproduction Steps:
>
> 1. Have a git repo cloned from a perforce repo using multiple depot paths
>    (e.g. //depot/foo and //depot/bar).
> 2. Submit a single change to the perforce repo that makes changes in both
>    //depot/foo and //depot/bar.
> 3. Run "git p4 sync" to sync the change from #2.
>
> Change is synced as multiple commits, one for each depot path that was
> affected.
>
> Using a set, instead of a list inside p4ChangesForPaths() ensures that
> each changelist is unique to the returned list, and therefore only a
> single commit is generated for each changelist.

The change looks good to me apart from one missing "&&" in the test case (see below). Possibly need to rewrap the comment line (I think there's a 72-character limit)?

Luke

>
> Reported-by: James Farwell
> Signed-off-by: George Vanburgh
> ---
>  git-p4.py               |  4 ++--
>  t/t9800-git-p4-basic.sh | 22 +++++++++++++++++++++-
>  2 files changed, 23 insertions(+), 3 deletions(-)
>
> diff --git a/git-p4.py b/git-p4.py
> index fd5ca52..6307bc8 100755
> --- a/git-p4.py
> +++ b/git-p4.py
> @@ -822,7 +822,7 @@ def p4ChangesForPaths(depotPaths, changeRange, requestedBlockSize):
>             die("cannot use --changes-block-size with non-numeric revisions")
>         block_size = None
>
> -    changes = []
> +    changes = set()
>
>     # Retrieve changes a block at a time, to prevent running
>     # into a MaxResults/MaxScanRows error from the server.
> @@ -841,7 +841,7 @@ def p4ChangesForPaths(depotPaths, changeRange, requestedBlockSize):
>
>         # Insert changes in chronological order
>         for line in reversed(p4_read_pipe_lines(cmd)):
> -            changes.append(int(line.split(" ")[1]))
> +            changes.add(int(line.split(" ")[1]))
>
>         if not block_size:
>             break
> diff --git a/t/t9800-git-p4-basic.sh b/t/t9800-git-p4-basic.sh
> index 0730f18..4d72e0b 100755
> --- a/t/t9800-git-p4-basic.sh
> +++ b/t/t9800-git-p4-basic.sh
> @@ -131,6 +131,26 @@ test_expect_success 'clone two dirs, @all, conflicting files' '
>         )
> '
>
> +test_expect_success 'clone two dirs, each edited by submit, single git commit' '
> +	(
> +		cd "$cli" &&
> +		echo sub1/f4 >sub1/f4 &&
> +		p4 add sub1/f4 &&
> +		echo sub2/f4 >sub2/f4 &&
> +		p4 add sub2/f4 &&
> +		p4 submit -d "sub1/f4 and sub2/f4"
> +	) &&
> +	git p4 clone --dest="$git" //depot/sub1@all //depot/sub2@all &&
> +	test_when_finished cleanup_git &&
> +	(
> +		cd "$git"

Missing &&

> +		git ls-files >lines &&
> +		test_line_count = 4 lines &&
> +		git log --oneline p4/master >lines &&
> +		test_line_count = 5 lines
> +	)
> +'
> +
>  revision_ranges="2000/01/01,#head \
>  	1,2080/01/01 \
>  	2000/01/01,2080/01/01 \
> @@ -147,7 +167,7 @@ test_expect_success 'clone using non-numeric revision ranges' '
>  	(
>  		cd "$git" &&
>  		git ls-files >lines &&
> -		test_line_count = 6 lines
> +		test_line_count = 8 lines
>  	)
>  	done
> '
>
> --
> https://github.com/git/git/pull/311
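The list-versus-set distinction being reviewed above is small but easy to get wrong, so here is a self-contained Python sketch of it (the helper name and the sample "p4 changes" output lines are invented for illustration; this is not git-p4 itself). A changelist touching several depot paths shows up once per path, and a set deduplicates it so only one commit results:

```python
def unique_changes_for_paths(per_path_output):
    """Collect changelist numbers reported for each depot path into a
    set, so a changelist spanning several paths is recorded only once;
    sort at the end to restore chronological order."""
    changes = set()
    for lines in per_path_output:
        for line in lines:
            # Each line looks like: "Change <number> on <date> by <user>..."
            changes.add(int(line.split(" ")[1]))
    return sorted(changes)

# Changelist 7 touches both //depot/sub1 and //depot/sub2, so it is
# reported under both paths -- but must be imported exactly once.
result = unique_changes_for_paths([
    ["Change 5 on 2016/12/15 by alice", "Change 7 on 2016/12/15 by carol"],
    ["Change 6 on 2016/12/15 by bob", "Change 7 on 2016/12/15 by carol"],
])
# result == [5, 6, 7]
```

With a plain list, the duplicate 7 would survive and produce the empty second commit described in the bug report.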
Re: indent-heuristic, compaction-heuristic combination
On Fri, Dec 16, 2016 at 4:44 PM, Jacob Keller wrote:
> On Fri, Dec 16, 2016 at 4:28 PM, Norbert Kiesel wrote:
>> Hi,
>>
>> I started using compaction-heuristic with 2.9, and then also (or so I
>> thought) enabled indent-heuristic with 2.11. Only after reading a
>> comment in "Git rev news" I realized that these 2 options are mutually
>> exclusive. I then checked the Git source code and saw that Git first
>> checks the new indent-heuristic and then the old compaction-heuristic.
>> Therefore, anyone who is as stupid as me and enabled both will always
>> (and silently) end up with the older of the two.
>>
>> Apart from better documentation (I know that both are marked
>> experimental, but nevertheless): could we not swap the order in which
>> they are tested so that the newer heuristic wins?
>>
> I looked at the code and I don't think this is the case. In
> diff_setup() on line 3381, we check the indent heuristic first. However,
> when we check the compaction heuristic second, we use an "else if" so
> we do not set both. I believe it already performs the indent heuristic
> correctly if you enable both options in configuration.
>
> Thanks,
> Jake

On further looking, I realized again that maybe you are right. I will send a patch to change the other spot where we might prefer the older heuristic.

Thanks,
Jake
Re: indent-heuristic, compaction-heuristic combination
On Fri, Dec 16, 2016 at 4:28 PM, Norbert Kiesel wrote:
> Hi,
>
> I started using compaction-heuristic with 2.9, and then also (or so I
> thought) enabled indent-heuristic with 2.11. Only after reading a
> comment in "Git rev news" I realized that these 2 options are mutually
> exclusive. I then checked the Git source code and saw that Git first
> checks the new indent-heuristic and then the old compaction-heuristic.
> Therefore, anyone who is as stupid as me and enabled both will always
> (and silently) end up with the older of the two.
>
> Apart from better documentation (I know that both are marked
> experimental, but nevertheless): could we not swap the order in which
> they are tested so that the newer heuristic wins?
>

I looked at the code and I don't think this is the case. In diff_setup() on line 3381, we check the indent heuristic first. However, when we check the compaction heuristic second, we use an "else if" so we do not set both. I believe it already performs the indent heuristic correctly if you enable both options in configuration.

Thanks,
Jake
Re: indent-heuristic, compaction-heuristic combination
On Fri, Dec 16, 2016 at 4:28 PM, Norbert Kiesel wrote:
> Hi,
>
> I started using compaction-heuristic with 2.9, and then also (or so I
> thought) enabled indent-heuristic with 2.11. Only after reading a
> comment in "Git rev news" I realized that these 2 options are mutually
> exclusive. I then checked the Git source code and saw that Git first
> checks the new indent-heuristic and then the old compaction-heuristic.
> Therefore, anyone who is as stupid as me and enabled both will always
> (and silently) end up with the older of the two.
>
> Apart from better documentation (I know that both are marked
> experimental, but nevertheless): could we not swap the order in which
> they are tested so that the newer heuristic wins?
>

This seems reasonable to me.

Thanks,
Jake
indent-heuristic, compaction-heuristic combination
Hi, I started using compaction-heuristic with 2.9, and then also (or so I thought) enabled indent-heuristic with 2.11. Only after reading a comment in "Git rev news" I realized that these 2 options are mutually exclusive. I then checked the Git source code and saw that Git first checks the new indent-heuristic and then the old compaction-heuristic. Therefore, anyone who is as stupid as me and enabled both will always (and silently) end up with the older of the two. Apart from better documentation (I know that both are marked experimental, but nevertheless): could we not swap the order in which they are tested so that the newer heuristic wins?
[PATCH] pack-objects: don't warn about bitmaps on incremental pack
When running git pack-objects --incremental, we do not expect to be able to write a bitmap; it is very likely that objects in the new pack will have references to objects outside of the pack. So we don't need to warn the user about it.

This warning was making its way into gc.log because auto-gc will do an incremental repack when there are too many loose objects but not too many packs. When the gc.log was present, future auto gc runs would refuse to run.

Signed-off-by: David Turner
---
 builtin/pack-objects.c  |  3 ++-
 t/t5310-pack-bitmaps.sh | 12 ++++++++++++
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
index 0fd52bd..96de213 100644
--- a/builtin/pack-objects.c
+++ b/builtin/pack-objects.c
@@ -1083,7 +1083,8 @@ static int add_object_entry(const unsigned char *sha1, enum object_type type,
 	if (!want_object_in_pack(sha1, exclude, &found_pack, &found_offset)) {
 		/* The pack is missing an object, so it will not have closure */
 		if (write_bitmap_index) {
-			warning(_(no_closure_warning));
+			if (!incremental)
+				warning(_(no_closure_warning));
 			write_bitmap_index = 0;
 		}
 		return 0;
diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index b4c7a6f..d81636e 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -247,6 +247,18 @@ test_expect_success 'pack-objects respects --incremental' '
 	test_cmp 4.objects objects
 '
 
+test_expect_success 'incremental repack does not create bitmaps' '
+	test_commit 11 &&
+	ls .git/objects/pack/ | grep bitmap >existing_bitmaps &&
+	ls .git/objects/pack/ | grep -v bitmap >existing_packs &&
+	git repack -d 2>err &&
+	test_line_count = 0 err &&
+	ls .git/objects/pack/ | grep bitmap >output &&
+	ls .git/objects/pack/ | grep -v bitmap >post_packs &&
+	test_cmp existing_bitmaps output &&
+	! test_cmp existing_packs post_packs
+'
+
 test_expect_success 'pack with missing blob' '
 	rm $(objpath $blob) &&
 	git pack-objects --stdout --revs /dev/null
-- 
2.8.0.rc4.22.g8ae061a
What's cooking in git.git (Dec 2016, #04; Fri, 16)
Here are the topics that have been cooking. Commits prefixed with '-' are only in 'pu' (proposed updates) while commits prefixed with '+' are in 'next'. The ones marked with '.' do not appear in any of the integration branches, but I am still holding onto them. You can find the changes described here in the integration branches of the repositories listed at http://git-blame.blogspot.com/p/git-public-repositories.html -- [Graduated to "master"] * ak/lazy-prereq-mktemp (2016-11-29) 1 commit (merged to 'next' on 2016-12-12 at f346d1f053) + t7610: clean up foo.XX tmpdir Test code clean-up. * bw/push-dry-run (2016-11-23) 2 commits (merged to 'next' on 2016-12-12 at bde7a0f9ae) + push: fix --dry-run to not push submodules + push: --dry-run updates submodules when --recurse-submodules=on-demand (this branch uses hv/submodule-not-yet-pushed-fix; is tangled with sb/push-make-submodule-check-the-default.) "git push --dry-run --recurse-submodule=on-demand" wasn't "--dry-run" in the submodules. * da/mergetool-trust-exit-code (2016-11-29) 2 commits (merged to 'next' on 2016-12-12 at 28ae202868) + mergetools/vimdiff: trust Vim's exit code + mergetool: honor mergetool.$tool.trustExitCode for built-in tools mergetool..trustExitCode configuration variable did not apply to built-in tools, but now it does. * dt/empty-submodule-in-merge (2016-11-17) 1 commit (merged to 'next' on 2016-12-12 at 6de2350b2b) + submodules: allow empty working-tree dirs in merge/cherry-pick An empty directory in a working tree that can simply be nuked used to interfere while merging or cherry-picking a change to create a submodule directory there, which has been fixed.. 
* hv/submodule-not-yet-pushed-fix (2016-11-16) 4 commits (merged to 'next' on 2016-12-05 at c9d729fca2) + submodule_needs_pushing(): explain the behaviour when we cannot answer + batch check whether submodule needs pushing into one call + serialize collection of refs that contain submodule changes + serialize collection of changed submodules (this branch is used by bw/push-dry-run and sb/push-make-submodule-check-the-default.) Originally merged to 'next' on 2016-11-21 The code in "git push" to compute if any commit being pushed in the superproject binds a commit in a submodule that hasn't been pushed out was overly inefficient, making it unusable even for a small project that does not have any submodule but have a reasonable number of refs. * jk/rev-parse-symbolic-parents-fix (2016-11-16) 1 commit (merged to 'next' on 2016-12-12 at 6839c1ea28) + rev-parse: fix parent shorthands with --symbolic "git rev-parse --symbolic" failed with a more recent notation like "HEAD^-1" and "HEAD^!". * ld/p4-update-shelve (2016-12-05) 1 commit (merged to 'next' on 2016-12-12 at 22f6bec94c) + git-p4: support updating an existing shelved changelist (this branch uses vk/p4-submit-shelve.) Will merge to 'master'. * ls/p4-empty-file-on-lfs (2016-12-05) 1 commit (merged to 'next' on 2016-12-12 at 1fce8e037a) + git-p4: fix empty file processing for large file system backend GitLFS "git p4" LFS support was broken when LFS stores an empty blob. * ls/p4-retry-thrice (2016-12-05) 1 commit (merged to 'next' on 2016-12-12 at 9462e660a8) + git-p4: add config to retry p4 commands; retry 3 times by default Will merge to 'master'. * nd/qsort-in-merge-recursive (2016-11-28) 1 commit (merged to 'next' on 2016-12-12 at e9700f5b93) + merge-recursive.c: use string_list_sort instead of qsort Code simplification. 
* nd/worktree-list-fixup (2016-11-28) 5 commits (merged to 'next' on 2016-12-12 at 1f46421a59) + worktree list: keep the list sorted + worktree.c: get_worktrees() takes a new flag argument + get_worktrees() must return main worktree as first item even on error + worktree: reorder an if statement + worktree.c: zero new 'struct worktree' on allocation (this branch is used by nd/worktree-move and sb/submodule-embed-gitdir.) The output from "git worktree list" was made in readdir() order, and was unstable. * vk/p4-submit-shelve (2016-11-29) 1 commit (merged to 'next' on 2016-12-12 at 3fce6df117) + git-p4: allow submit to create shelved changelists. (this branch is used by ld/p4-update-shelve.) Will merge to 'master'. -- [New Topics] * bw/pathspec-cleanup (2016-12-14) 16 commits - pathspec: rename prefix_pathspec to init_pathspec_item - pathspec: small readability changes - pathspec: create strip submodule slash helpers - pathspec: create parse_element_magic helper - pathspec: create parse_long_magic function - pathspec: create parse_short_magic function - pathspec: factor global magic into its own function - pathspec: simpler logic to prefix original pathspec elements - pathspec: always show mnemonic and name in unsupported_magic - pathspec: remove unused variable from unsupported_magic -
Re: Races on ref .lock files?
Andreas,

Bitbucket Server developer here. Typically these errors on your client are indicative of git gc --auto being triggered by git-receive-pack on the server. Auto GC directly attached to a push in a repository with pull requests often fails due to concurrent ref updates linked to background pull request processing.

If you'd like to investigate more in depth, I'd encourage you to create a ticket on support.atlassian.com so we can work with you. Otherwise, if you just want to prevent seeing these messages, you can either fork the relevant repository in Bitbucket Server (which disables auto GC), or run "git config gc.auto 0" in /opt/apps/.../repositories/68. Once auto GC is disabled, Bitbucket Server will automatically take over managing GC for the repository without any additional configuration required.

Note that we're working on revamping our GC handling such that auto GC will always be disabled for all repositories and managed explicitly within Bitbucket Server instead, so a future upgrade should automatically prevent these messages from appearing on clients.

Best regards,
Bryan Turner

On Fri, Dec 16, 2016 at 9:20 AM, Junio C Hamano wrote:
> Andreas Krey writes:
>
>> We're occasionally seeing a lot of
>>
>> error: cannot lock ref 'stash-refs/pull-requests/18978/to': Unable to create
>> '/opt/apps//repositories/68/stash-refs/pull-requests/18978/to.lock':
>> File exists.
>>
>> from the server side with fetches as well as pushes. (Bitbucket server.)
>>
>> What I find strange is that neither the fetches nor the pushes even
>> touch these refs (but the bitbucket triggers underneath might).
>>
>> But my question is whether there are race conditions that can cause
>> such messages in regular operation - they continue with 'If no other git
>> process is currently running, this probably means a git process crashed
>> in this repository earlier.' which indicates some level of anticipation.
> > I think (and I think you also think) these messages come from the > Bitbucket side, not your "git push" (or "git fetch"). Not having > seen Bitbucket's sources, I can only guess, but assuming that its > pull-request is triggered from their Web frontend like GitHub's > does, it is quite possible when you try to "push" into (or "fetch" > from, for that matter) a repository, somebody is clicking a button > to create that ref. We do not know what their "receive-pack" that > responds to your "git push" (or "upload-pack" for "git fetch") does > when there are locked refs. I'd naively think that unless you are > pushing to that ref you showed an error message for, the receiving > end shouldn't care if the ref is being written by somebody else, but > who knows ;-) They may have their own reasons wanting to lock that > ref that we think would be irrelevant for the operation, causing > errors. > > >
Re: [RFC/PATCH] Makefile: suppress some cppcheck false-positives
On Fri, Dec 16, 2016 at 10:43:48AM -0800, Junio C Hamano wrote:

> > diff --git a/nedmalloc.supp b/nedmalloc.supp
> [...]
> > diff --git a/regcomp.supp b/regcomp.supp
>
> Yuck for both files for multiple reasons.
>
> I do not think it is a good idea to allow these files to clutter the
> top-level of tree. How often do we expect that we may have to add
> more of these files? Every time we start borrowing code from third
> parties?
>
> What is the goal we want to achieve by running cppcheck?
>
> a. Our code must be clean but we do not bother "fixing" [*1*] the
>    code we borrow from third parties and squelch output instead?
>
> b. Both our own code and third party code we borrow need to be free
>    of errors and misdetections from cppcheck?
>
> c. Something else?
>
> If a. is what we aim for, perhaps a better option may be not to run
> cppcheck for the code we borrowed from third-party at all in the
> first place.
>
> If b. is our goal, we need to make sure that the false positive rate
> of cppcheck is acceptably low.

I think (b) is the goal; we'd hope that both our code and third party code would be bug-free. I think it's a fact of life with a static analysis tool that we're going to have to silence some false positives.

I think Chris started with the ones in compat because we are pretty sure they won't change much, so suppressing them by line number is easy. But we'd need to revisit this for our code, too. So just turning it off for compat/ is only punting on the problem for a little while. :)

I do think it would be less gross if we could put these files into compat/nedmalloc, etc. I don't know if you can specify --suppressions-list multiple times, but certainly we could do some pre-processing like:

  find . -name '*.cppcheck' |
  while read suppfile; do
	dir=$(dirname $suppfile)
	sed "s{^}{$dir/}" <$suppfile
  done >master-suppfile
  cppcheck --suppressions-list=master-suppfile

That would at least let us drop individual suppression files into their respective directories.
I do wonder, though, if the "inline" suppressions would be less painful. It looks like doing:

  // cppcheck-suppress uninitialized
  int var = var;

would work. I'm not sure if it understands non-C99 comments, though maybe it is time for us to loosen that rule. And suppressing the false positives that way does avoid fighting with gcc's analyzer, since we're not changing the code.

The real question is how often we'd have to sprinkle those comments, and how painful it would be. I see only 4 false positives that need suppressing in our code, but 2 of them rub me the wrong way. They are due to the tool failing to realize that die() is marked with NORETURN. Marking some site as "no, this isn't a double-free, the other code path would have died" feels like the wrong spot. The tool failure isn't where we're marking, but rather 10 lines above.

-Peff
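The per-directory pre-processing idea from the previous message can also be sketched in Python. This is only an illustration of the email's sed pipeline (it prefixes whole suppression lines with their directory, exactly as the proposed shell did); the function name is made up and the input is modelled as a dict rather than a filesystem walk:

```python
import os

def build_master_suppressions(supp_files):
    """Combine per-directory cppcheck suppression files into one master
    list, prefixing each entry with the directory it came from -- a
    Python rendering of the `find | while read; sed` pipeline from the
    email.

    `supp_files` maps a suppression-file path to its list of lines
    (a stand-in for reading the files found by os.walk)."""
    master = []
    for suppfile, lines in sorted(supp_files.items()):
        d = os.path.dirname(suppfile)
        master.extend("%s/%s" % (d, line) for line in lines)
    return master

master = build_master_suppressions({
    "compat/nedmalloc/nedmalloc.cppcheck": ["someWarningId:malloc.c.h:3030"],
    "compat/regex/regex.cppcheck": ["otherWarningId:regcomp.c:120"],
})
```

Note this mirrors the email's sketch literally; whether prefixing the entire line (rather than just the file field of an `id:file:line` suppression) is what cppcheck ultimately needs would have to be checked against the cppcheck manual.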
Re: index-pack outside of repository?
Jeff King writes:

> Ah, I only checked that it did not do anything terrible like write into
> ".git". But it looks like it still looks at the git_dir value as part of
> the collision check.
>
> Here's a patch, on top of the rest of the series.

Thanks for a quick turnaround, as always.

> -- >8 --
> Subject: [PATCH] index-pack: skip collision check when not in repository
>
> You can run "git index-pack path/to/foo.pack" outside of a
> repository to generate an index file, or just to verify the
> contents. There's no point in doing a collision check, since
> we obviously do not have any objects to collide with.
>
> The current code will blindly look in .git/objects based on
> the result of setup_git_env(). That effectively gives us the
> right answer (since we won't find any objects), but it's a
> waste of time, and it conflicts with our desire to
> eventually get rid of the "fallback to .git" behavior of
> setup_git_env().
>
> Signed-off-by: Jeff King
> ---
> I didn't do a test, as this doesn't really have any user-visible
> behavior change.
>
> I guess technically if you had a non-repo with
> ".git/objects/12/3456..." in it we would erroneously read it, but that's
> kind of bizarre. The interesting test is that when merged with
> jk/no-looking-at-dotgit-outside-repo-final, the test in t5300 doesn't
> die.

Yes.
> > builtin/index-pack.c | 10 ++ > 1 file changed, 6 insertions(+), 4 deletions(-) > > diff --git a/builtin/index-pack.c b/builtin/index-pack.c > index d450a6ada2..f4b87c6c9f 100644 > --- a/builtin/index-pack.c > +++ b/builtin/index-pack.c > @@ -787,13 +787,15 @@ static void sha1_object(const void *data, struct > object_entry *obj_entry, > const unsigned char *sha1) > { > void *new_data = NULL; > - int collision_test_needed; > + int collision_test_needed = 0; > > assert(data || obj_entry); > > - read_lock(); > - collision_test_needed = has_sha1_file_with_flags(sha1, HAS_SHA1_QUICK); > - read_unlock(); > + if (startup_info->have_repository) { > + read_lock(); > + collision_test_needed = has_sha1_file_with_flags(sha1, > HAS_SHA1_QUICK); > + read_unlock(); > + } > > if (collision_test_needed && !data) { > read_lock();
Re: "disabling bitmap writing, as some objects are not being packed"?
On Fri, Dec 16, 2016 at 04:40:16PM -0500, David Turner wrote:

> I would assume, based on the documentation, that auto gc would be doing
> an all-into-one repack:
>
>   "If the number of packs exceeds the value of gc.autopacklimit, then
>   existing packs (except those marked with a .keep file) are
>   consolidated into a single pack by using the -A option of git
>   repack."
>
> I don't have any settings that limit the size of packs, either. And a
> manual git repack -a -d creates only a single pack. Its loneliness
> doesn't last long, because pretty soon a new pack is created by an
> incoming push.

The interesting code is in need_to_gc():

	/*
	 * If there are too many loose objects, but not too many
	 * packs, we run "repack -d -l".  If there are too many packs,
	 * we run "repack -A -d -l".  Otherwise we tell the caller
	 * there is no need.
	 */
	if (too_many_packs())
		add_repack_all_option();
	else if (!too_many_loose_objects())
		return 0;

So if you have (say) 10 packs and 10,000 objects, we'll incrementally pack those objects into a single new pack. I never noticed this myself because we do not use auto-gc at GitHub at all. We only ever do a big all-into-one repack.

> Unless this just means that some objects are being kept loose (perhaps
> because they are unreferenced)?

If they're unreferenced, they won't be part of the new pack. You might accumulate loose objects that are ejected from previous packs, which could trigger auto-gc to do an incremental pack (even though it wouldn't be productive, because they're unreferenced!). You may also get them from pushes (small pushes will be exploded into loose objects by default).

-Peff
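The need_to_gc() decision quoted above can be restated as a small Python model, which makes the surprising case explicit. The function name is invented; the thresholds use git's documented defaults (gc.autoPackLimit = 50, gc.auto = 6700), and this is a simplification that ignores .keep packs and gc.auto = 0:

```python
def gc_auto_decision(num_packs, num_loose, pack_limit=50, loose_limit=6700):
    """Model of need_to_gc(): too many packs triggers an all-into-one
    repack ('repack -A -d -l'); otherwise, too many loose objects
    triggers only an *incremental* repack ('repack -d -l'); otherwise
    no gc is needed."""
    if num_packs > pack_limit:
        return "repack -A -d -l"
    if num_loose > loose_limit:
        return "repack -d -l"
    return None

# The case from the thread: 10 packs and 10,000 loose objects means
# auto-gc runs an incremental repack, NOT an all-into-one repack --
# which is why bitmap writing cannot succeed and warns.
decision = gc_auto_decision(10, 10000)
# decision == "repack -d -l"
```

This is exactly why David's assumption ("auto gc would be doing an all-into-one repack") does not hold: the consolidation path is taken only when the pack count, not the loose-object count, exceeds its limit.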
Re: index-pack outside of repository?
On Fri, Dec 16, 2016 at 10:54:13AM -0800, Junio C Hamano wrote: > I am tempted to suggest an intermediate step that comes before > b1ef400eec ("setup_git_env: avoid blind fall-back to ".git"", > 2016-10-20), which is the attached, and publish that as part of an > official release. That way, we'll see what is broken without > hurting people too much (unless they or their scripts care about > extra message given to the standard error stream). I suspect that > released Git has a slightly larger user base than what is cooked on > 'next'. > > environment.c | 5 - > 1 file changed, 4 insertions(+), 1 deletion(-) > > diff --git a/environment.c b/environment.c > index 0935ec696e..88f857331e 100644 > --- a/environment.c > +++ b/environment.c > @@ -167,8 +167,11 @@ static void setup_git_env(void) > const char *replace_ref_base; > > git_dir = getenv(GIT_DIR_ENVIRONMENT); > - if (!git_dir) > + if (!git_dir) { > + if (!startup_info->have_repository) > + warning("BUG: please report this at > git@vger.kernel.org"); > git_dir = DEFAULT_GIT_DIR_ENVIRONMENT; > + } Yes, I think this is a nice way to ease into the change. I wish I had thought of it when doing the original series, and we could have shipped it in v2.11. :) -Peff
Re: index-pack outside of repository?
On Fri, Dec 16, 2016 at 10:01:49AM -0800, Junio C Hamano wrote:

> >> I think 2/3 is a good change to ensure we get a reasonable error for
> >> "index-pack --stdin", and 3/3 is a very good cleanup. Both of them
> >> of course are enabled by 1/3.
> >>
> >> We still fail "nongit git index-pack tmp.pack" with a BUG: though.
> >
> > Wait.
> >
> > This only happens with a stalled-and-to-be-discarded topic on 'pu'.
> > Please don't waste time digging it (yet).
>
> Don't wait ;-). My mistake. We can see t5300 broken with this
> change and b1ef400eec ("setup_git_env: avoid blind fall-back to
> ".git"", 2016-10-20) without anything else. We still need to
> address it.

Ah, I only checked that it did not do anything terrible like write into ".git". But it looks like it still looks at the git_dir value as part of the collision check.

Here's a patch, on top of the rest of the series.

-- >8 --
Subject: [PATCH] index-pack: skip collision check when not in repository

You can run "git index-pack path/to/foo.pack" outside of a repository to generate an index file, or just to verify the contents. There's no point in doing a collision check, since we obviously do not have any objects to collide with.

The current code will blindly look in .git/objects based on the result of setup_git_env(). That effectively gives us the right answer (since we won't find any objects), but it's a waste of time, and it conflicts with our desire to eventually get rid of the "fallback to .git" behavior of setup_git_env().

Signed-off-by: Jeff King
---
I didn't do a test, as this doesn't really have any user-visible behavior change.

I guess technically if you had a non-repo with ".git/objects/12/3456..." in it we would erroneously read it, but that's kind of bizarre. The interesting test is that when merged with jk/no-looking-at-dotgit-outside-repo-final, the test in t5300 doesn't die.
builtin/index-pack.c | 10 ++ 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/builtin/index-pack.c b/builtin/index-pack.c index d450a6ada2..f4b87c6c9f 100644 --- a/builtin/index-pack.c +++ b/builtin/index-pack.c @@ -787,13 +787,15 @@ static void sha1_object(const void *data, struct object_entry *obj_entry, const unsigned char *sha1) { void *new_data = NULL; - int collision_test_needed; + int collision_test_needed = 0; assert(data || obj_entry); - read_lock(); - collision_test_needed = has_sha1_file_with_flags(sha1, HAS_SHA1_QUICK); - read_unlock(); + if (startup_info->have_repository) { + read_lock(); + collision_test_needed = has_sha1_file_with_flags(sha1, HAS_SHA1_QUICK); + read_unlock(); + } if (collision_test_needed && !data) { read_lock(); -- 2.11.0.348.g960a0b554
Re: [PATCH v7 0/7] recursively grep across submodules
Brandon Williams writes: > Changes in v7: > * Rebased on 'origin/bw/realpath-wo-chdir' in order to fix the race condition > that occurs when verifying a submodule's gitdir. > * Reverted is_submodule_populated() to use resolve_gitdir() now that there is > no race condition. > * Added !MINGW to a test in t7814 so that it won't run on windows. This is > due > to testing if colons in filenames are still handled correctly, yet windows > doesn't allow colons in filenames. Nice. Will queue again to see if those on other platforms have troubles with it. I read it through again and think the series is ready for 'next'. Thanks.
Re: "disabling bitmap writing, as some objects are not being packed"?
On Fri, 2016-12-16 at 16:32 -0500, Jeff King wrote: > On Fri, Dec 16, 2016 at 01:28:00PM -0800, Junio C Hamano wrote: > > > > 2. I don't understand what would cause that message. That is, what bad > > > thing am I doing that I should stop doing? I've briefly skimmed the > > > code and commit message, but the answer isn't leaping out at me. > > > > Enabling bitmap generation for incremental packing that does not > > cram everything into a single pack is triggering it, I would > > presume. Perhaps we should ignore -b option in most of the cases > > and enable it only for "repack -a -d -f" codepath? Or detect that > > we are being run from "gc --auto" and automatically disable -b? I > > have a feeling that an approach along that line is closer to the > > real solution than tweaking report_last_gc_error() and trying to > > deduce if we are making any progress. > > Ah, indeed. I was thinking in my other response that "git gc" would > always kick off an all-into-one repack. But "gc --auto" will not in > certain cases. And yes, in those cases you definitely would want > --no-write-bitmap-index. I think it would be reasonable for "git repack" > to disable bitmap-writing automatically when not doing an all-into-one > repack. I do not have alternates and am not using --local. Nor do I have .keep packs. I would assume, based on the documentation, that auto gc would be doing an all-into-one repack: "If the number of packs exceeds the value of gc.autopacklimit, then existing packs (except those marked with a .keep file) are consolidated into a single pack by using the -A option of git repack." I don't have any settings that limit the size of packs, either. And a manual git repack -a -d creates only a single pack. Its loneliness doesn't last long, because pretty soon a new pack is created by an incoming push. Unless this just means that some objects are being kept loose (perhaps because they are unreferenced)?
Re: "disabling bitmap writing, as some objects are not being packed"?
On Fri, Dec 16, 2016 at 01:28:00PM -0800, Junio C Hamano wrote: > > 2. I don't understand what would cause that message. That is, what bad > > thing am I doing that I should stop doing? I've briefly skimmed the > > code and commit message, but the answer isn't leaping out at me. > > Enabling bitmap generation for incremental packing that does not > cram everything into a single pack is triggering it, I would > presume. Perhaps we should ignore -b option in most of the cases > and enable it only for "repack -a -d -f" codepath? Or detect that > we are being run from "gc --auto" and automatically disable -b? I > have a feeling that an approach along that line is closer to the > real solution than tweaking report_last_gc_error() and trying to > deduce if we are making any progress. Ah, indeed. I was thinking in my other response that "git gc" would always kick off an all-into-one repack. But "gc --auto" will not in certain cases. And yes, in those cases you definitely would want --no-write-bitmap-index. I think it would be reasonable for "git repack" to disable bitmap-writing automatically when not doing an all-into-one repack. -Peff
Re: "disabling bitmap writing, as some objects are not being packed"?
On Fri, Dec 16, 2016 at 04:05:31PM -0500, David Turner wrote: > 1. Its presence in the gc.log file prevents future automatic garbage > collection. This seems bad. I understand the desire to avoid making > things worse if a past gc has run into issues. But this warning is > non-fatal; the only consequence is that many operations get slower. But > a lack of gc when there are too many packs causes that consequence too > (often a much worse slowdown than would be caused by the missing > bitmap). > > So I wonder if it would be better for auto gc to grep gc.log for fatal > errors (as opposed to warnings) and only skip running if any are found. > Alternately, we could simply put warnings into gc.log.warning and > reserve gc.log for fatal errors. I'm not sure which would be simpler. Without thinking too hard on it, that seems like the appropriate solution to me, too. > 2. I don't understand what would cause that message. That is, what bad > thing am I doing that I should stop doing? I've briefly skimmed the > code and commit message, but the answer isn't leaping out at me. Do you have alternates and are using --local? Do you have .keep packs and have set repack.packKeptObjects to false? There are other ways (e.g., an incremental repack), but I think those are the likely ones to get via "git gc". -Peff
Re: "disabling bitmap writing, as some objects are not being packed"?
David Turner writes: > I'm a bit confused by the message "disabling bitmap writing, as some > objects are not being packed". I see it in my gc.log file on my git > server. > 1. Its presence in the gc.log file prevents future automatic garbage > collection. This seems bad. I understand the desire to avoid making > things worse if a past gc has run into issues. But this warning is > non-fatal; the only consequence is that many operations get slower. But > a lack of gc when there are too many packs causes that consequence too > (often a much worse slowdown than would be caused by the missing > bitmap). > > So I wonder if it would be better for auto gc to grep gc.log for fatal > errors (as opposed to warnings) and only skip running if any are found. > Alternately, we could simply put warnings into gc.log.warning and > reserve gc.log for fatal errors. I'm not sure which would be simpler. I am not sure if string matching is really a good idea, as I'd assume that these messages are eligible for i18n. 329e6e8794 ("gc: save log from daemonized gc --auto and print it next time", 2015-09-19) wanted to notice that auto-gc is not making progress and used the presence of error messages as a cue. In your case, I think the auto-gc _is_ making progress, reducing the number of loose objects in the repository or consolidating many packfiles into one, and the message is only about the fact that packing is punting and not producing a bitmap as you asked, which is different from not making any progress. I do not think log vs warn is a good criterion to tell them apart, either. In any case, as the error message asks the user to do, the user eventually wants to correct the root cause before removing the gc.log; I am not sure report_last_gc_error() is the place to correct this in the first place. > 2. I don't understand what would cause that message. That is, what bad > thing am I doing that I should stop doing?
I've briefly skimmed the > code and commit message, but the answer isn't leaping out at me. Enabling bitmap generation for incremental packing that does not cram everything into a single pack is triggering it, I would presume. Perhaps we should ignore -b option in most of the cases and enable it only for "repack -a -d -f" codepath? Or detect that we are being run from "gc --auto" and automatically disable -b? I have a feeling that an approach along that line is closer to the real solution than tweaking report_last_gc_error() and trying to deduce if we are making any progress.
Hello
Hello, I'M Anessa female 25 years old single, I am contacting you because I will be relocating to your country. I will send you my photos soon as i hear from you and also tell you much about my self. Thanks. Sincerely yours Anessa.
"disabling bitmap writing, as some objects are not being packed"?
I'm a bit confused by the message "disabling bitmap writing, as some objects are not being packed". I see it in my gc.log file on my git server. 1. Its presence in the gc.log file prevents future automatic garbage collection. This seems bad. I understand the desire to avoid making things worse if a past gc has run into issues. But this warning is non-fatal; the only consequence is that many operations get slower. But a lack of gc when there are too many packs causes that consequence too (often a much worse slowdown than would be caused by the missing bitmap). So I wonder if it would be better for auto gc to grep gc.log for fatal errors (as opposed to warnings) and only skip running if any are found. Alternately, we could simply put warnings into gc.log.warning and reserve gc.log for fatal errors. I'm not sure which would be simpler. 2. I don't understand what would cause that message. That is, what bad thing am I doing that I should stop doing? I've briefly skimmed the code and commit message, but the answer isn't leaping out at me.
test failure
Hi Lars, For the last two days, I've noticed t0021.15 on the 'pu' branch has been failing intermittently (well it fails with: 'make test >ptest-out', but when run by hand, it fails only say 1-in-6, 1-in-18, etc.). [yes, it's a bit strange; this hasn't changed in a couple of weeks!] I don't have time to investigate further tonight and, since I had not heard anyone else complain, I thought I should let you know. See below for the output from a failing run. [Note: this is on Linux Mint 18, tonight's pu branch @7c7984401]. ATB, Ramsay Jones $ ./t0021-conversion.sh -i -v ... ok 14 - diff does not reuse worktree files that need cleaning expecting success: test_config_global filter.protocol.process "rot13-filter.pl clean smudge" && test_config_global filter.protocol.required true && rm -rf repo && mkdir repo && ( cd repo && git init && echo "*.r filter=protocol" >.gitattributes && git add . && git commit -m "test commit 1" && git branch empty-branch && cp "$TEST_ROOT/test.o" test.r && cp "$TEST_ROOT/test2.o" test2.r && mkdir testsubdir && cp "$TEST_ROOT/test3 'sq',\$x=.o" "testsubdir/test3 'sq',\$x=.r" && >test4-empty.r && S=$(file_size test.r) && S2=$(file_size test2.r) && S3=$(file_size "testsubdir/test3 'sq',\$x=.r") && filter_git add . && cat >expected.log <<-EOF && START init handshake complete IN: clean test.r $S [OK] -- OUT: $S . [OK] IN: clean test2.r $S2 [OK] -- OUT: $S2 . [OK] IN: clean test4-empty.r 0 [OK] -- OUT: 0 [OK] IN: clean testsubdir/test3 'sq',\$x=.r $S3 [OK] -- OUT: $S3 . [OK] STOP EOF test_cmp_count expected.log rot13-filter.log && filter_git commit -m "test commit 2" && cat >expected.log <<-EOF && START init handshake complete IN: clean test.r $S [OK] -- OUT: $S . [OK] IN: clean test2.r $S2 [OK] -- OUT: $S2 . [OK] IN: clean test4-empty.r 0 [OK] -- OUT: 0 [OK] IN: clean testsubdir/test3 'sq',\$x=.r $S3 [OK] -- OUT: $S3 . [OK] IN: clean test.r $S [OK] -- OUT: $S . [OK] IN: clean test2.r $S2 [OK] -- OUT: $S2 . 
[OK] IN: clean test4-empty.r 0 [OK] -- OUT: 0 [OK] IN: clean testsubdir/test3 'sq',\$x=.r $S3 [OK] -- OUT: $S3 . [OK] STOP EOF test_cmp_count expected.log rot13-filter.log && rm -f test2.r "testsubdir/test3 'sq',\$x=.r" && filter_git checkout --quiet --no-progress . && cat >expected.log <<-EOF && START init handshake complete IN: smudge test2.r $S2 [OK] -- OUT: $S2 . [OK] IN: smudge testsubdir/test3 'sq',\$x=.r $S3 [OK] -- OUT: $S3 . [OK] STOP EOF test_cmp_exclude_clean expected.log rot13-filter.log && filter_git checkout --quiet --no-progress empty-branch && cat >expected.log <<-EOF && START init handshake complete IN: clean test.r $S [OK] -- OUT: $S . [OK] STOP EOF test_cmp_exclude_clean expected.log rot13-filter.log && filter_git checkout --quiet --no-progress master && cat >expected.log <<-EOF && START init handshake complete IN: smudge test.r $S [OK] -- OUT: $S . [OK] IN: smudge test2.r $S2 [OK] -- OUT: $S2 . [OK] IN: smudge test4-empty.r 0 [OK] -- OUT: 0 [OK] IN: smudge testsubdir/test3 'sq',\$x=.r $S3 [OK] -- OUT: $S3 . [OK] STOP EOF test_cmp_exclude_clean expected.log rot13-filter.log && test_cmp_committed_rot13 "$TEST_ROOT/test.o" test.r && test_cmp_committed_rot13 "$TEST_ROOT/test2.o" test2.r && test_cmp_committed_rot13 "$TEST_ROOT/test3 'sq',\$x=.o" "testsubdir/test3 'sq',\$x=.r" ) Initialized empty Git repository in /home/ramsay/git/t/trash directory.t0021-conversion/repo/.git/ [master (root-commit) 56d459b] test commit 1 Author: A U Thor1 file changed, 1 insertion(+)
Re: [PATCH v2 20/34] sequencer (rebase -i): copy commit notes at end
Johannes Schindelin writes: > +static void flush_rewritten_pending(void) { > + struct strbuf buf = STRBUF_INIT; > + unsigned char newsha1[20]; > + FILE *out; > + > + if (strbuf_read_file(&buf, rebase_path_rewritten_pending(), 82) > 0 && > + !get_sha1("HEAD", newsha1) && > + (out = fopen(rebase_path_rewritten_list(), "a"))) { An error in fopen() here ... > + ... > + } > + strbuf_release(&buf); > +} > + > +static void record_in_rewritten(struct object_id *oid, > + enum todo_command next_command) { > + FILE *out = fopen(rebase_path_rewritten_pending(), "a"); > + > + if (!out) > + return; ... and here are ignored as an insignificant error in the scripted version, and this one does the same. > + > + fprintf(out, "%s\n", oid_to_hex(oid)); > + fclose(out); > + > + if (!is_fixup(next_command)) > + flush_rewritten_pending(); > +} > + > static int do_pick_commit(enum todo_command command, struct commit *commit, > struct replay_opts *opts, int final_fixup) > { > @@ -1750,6 +1797,17 @@ static int is_final_fixup(struct todo_list *todo_list) > return 1; > } > > +static enum todo_command peek_command(struct todo_list *todo_list, int > offset) > +{ > + int i; > + > + for (i = todo_list->current + offset; i < todo_list->nr; i++) > + if (todo_list->items[i].command != TODO_NOOP) > + return todo_list->items[i].command; Makes me wonder, after having commented on 07/34 regarding the fact that in the end you would end up having three variants of no-op (i.e. NOOP, DROP and COMMENT), what definition of a "command" this function uses to return its result, when asked to "peek". I suspect that this will be updated in a later patch to do "< TODO_NOOP" instead? If so, then that answers one question in my comment on 07/34, i.e. If a check for "is it one of the no-op commands?" appears only here, a single liner comment may be sufficient (but necessary) to help readers.
Otherwise a single-liner helper function (similar to is_fixup() you have) with a descriptive name would be better than a single liner comment. The answer is "no, it is not just there" hence the conclusion is "we want a helper with a descriptive name".
Re: [PATCH v15 08/27] bisect--helper: `is_expected_rev` & `check_expected_revs` shell function in C
Hey Stephan, On Thu, Nov 17, 2016 at 5:17 AM, Stephan Beyer wrote: > Hi, > > On 10/14/2016 04:14 PM, Pranit Bauva wrote: >> diff --git a/builtin/bisect--helper.c b/builtin/bisect--helper.c >> index d84ba86..c542e8b 100644 >> --- a/builtin/bisect--helper.c >> +++ b/builtin/bisect--helper.c >> @@ -123,13 +123,40 @@ static int bisect_reset(const char *commit) >> return bisect_clean_state(); >> } >> >> +static int is_expected_rev(const char *expected_hex) >> +{ >> + struct strbuf actual_hex = STRBUF_INIT; >> + int res = 0; >> + if (strbuf_read_file(&actual_hex, git_path_bisect_expected_rev(), 0) >> >= 40) { >> + strbuf_trim(&actual_hex); >> + res = !strcmp(actual_hex.buf, expected_hex); >> + } >> + strbuf_release(&actual_hex); >> + return res; >> +} > > I am not sure it does what it should. > > I would expect the following behavior from this function: > - file does not exist (or is "broken") => return 0 > - actual_hex != expected_hex => return 0 > - otherwise return 1 > > If I am not wrong, the code does the following instead: > - file does not exist (or is "broken") => return 0 > - actual_hex != expected_hex => return 1 > - otherwise => return 0 It seems that I didn't carefully see what the shell code is (or apparently did a mistake in understanding it ;)). I think the C version does exactly what the shell version does. Can you confirm it once again, please? Regards, Pranit Bauva
Re: [PATCH v2 18/34] sequencer (rebase -i): refactor setting the reflog message
Johannes Schindelin writes: > This makes the code DRYer, with the obvious benefit that we can enhance > the code further in a single place. > > We can also reuse the functionality elsewhere by calling this new > function. > > Signed-off-by: Johannes Schindelin > --- > sequencer.c | 33 ++++++++++++++++++++++++++------- > 1 file changed, 26 insertions(+), 7 deletions(-) > > diff --git a/sequencer.c b/sequencer.c > index 33fb36dcbe..d20efad562 100644 > --- a/sequencer.c > +++ b/sequencer.c > @@ -1750,6 +1750,26 @@ static int is_final_fixup(struct todo_list *todo_list) > return 1; > } > > +static const char *reflog_message(struct replay_opts *opts, > + const char *sub_action, const char *fmt, ...) > +{ > + va_list ap; > + static struct strbuf buf = STRBUF_INIT; > + > + va_start(ap, fmt); > + strbuf_reset(&buf); > + strbuf_addstr(&buf, action_name(opts)); > + if (sub_action) > + strbuf_addf(&buf, " (%s)", sub_action); > + if (fmt) { > + strbuf_addstr(&buf, ": "); > + strbuf_vaddf(&buf, fmt, ap); > + } > + va_end(ap); > + > + return buf.buf; > +} It is unlikely for a single caller to want to format another reflog entry after formatting one but before consuming it [*1*], so this "call this, and you can use the return value until you call it the next time without worrying about leakage" is quite a reasonable pattern to employ. [Footnote] *1* And it is unlikely that this will run in a multi-threaded environment, either ;-)
Re: [PATCH v2 11/34] sequencer (rebase -i): remove CHERRY_PICK_HEAD when no longer needed
Johannes Schindelin writes: > The scripted version of the interactive rebase already does that. Sensible. I was wondering why this wasn't there while reviewing 10/34, comparing the two (this is not a suggestion to squash this into the previous step). > > Signed-off-by: Johannes Schindelin > --- > sequencer.c | 7 ++++++- > 1 file changed, 6 insertions(+), 1 deletion(-) > > diff --git a/sequencer.c b/sequencer.c > index 855d3ba503..abffaf3b40 100644 > --- a/sequencer.c > +++ b/sequencer.c > @@ -1835,8 +1835,13 @@ static int commit_staged_changes(struct replay_opts > *opts) > > if (has_unstaged_changes(1)) > return error(_("cannot rebase: You have unstaged changes.")); > - if (!has_uncommitted_changes(0)) > + if (!has_uncommitted_changes(0)) { > + const char *cherry_pick_head = git_path("CHERRY_PICK_HEAD"); > + > + if (file_exists(cherry_pick_head) && unlink(cherry_pick_head)) > + return error(_("could not remove CHERRY_PICK_HEAD")); > return 0; > + } > > if (file_exists(rebase_path_amend())) { > struct strbuf rev = STRBUF_INIT;
Re: [PATCH v2 15/34] sequencer (rebase -i): leave a patch upon error
Johannes Schindelin writes: > When doing an interactive rebase, we want to leave a 'patch' file for > further inspection by the user (even if we never tried to actually apply > that patch, since we're cherry-picking instead). > > Signed-off-by: Johannes Schindelin > --- Yup. The other day, I was kind of surprised to see the "patch" file produced when I tried to do "git rebase -i HEAD^^ HEAD" with the one in current Git (not yours), marked the first one "edit", and then it gave me control back. Obviously there was no conflict and I could just do "git show" if I wanted to see what the original change was, but the "patch" file was there. I personally never have looked at the "patch" file in such a situation, and I kind of feel it is wasteful, but I can see people expect to find one there whenever "rebase -i" stops and gives control back. It is good that you are preserving the behaviour. > sequencer.c | 3 +++ > 1 file changed, 3 insertions(+) > > diff --git a/sequencer.c b/sequencer.c > index a4e9b326ba..4361fe0e94 100644 > --- a/sequencer.c > +++ b/sequencer.c > @@ -1777,6 +1777,9 @@ static int pick_commits(struct todo_list *todo_list, > struct replay_opts *opts) > return error_failed_squash(item->commit, opts, > item->arg_len, item->arg); > } > + else if (res && is_rebase_i(opts)) > + return res | error_with_patch(item->commit, > + item->arg, item->arg_len, opts, res, 0); > } > else if (item->command == TODO_EXEC) { > char *end_of_arg = (char *)(item->arg + item->arg_len);
Re: [PATCH v2 14/34] sequencer (rebase -i): update refs after a successful rebase
Johannes Schindelin writes: > An interactive rebase operates on a detached HEAD (to keep the reflog > of the original branch relatively clean), and updates the branch only > at the end. > > Now that the sequencer learns to perform interactive rebases, it also > needs to learn the trick to update the branch before removing the > directory containing the state of the interactive rebase. > > We introduce a new head_ref variable in a wider scope than necessary at > the moment, to allow for a later patch that prints out "Successfully > rebased and updated ". > > Signed-off-by: Johannes Schindelin > --- > sequencer.c | 32 +++++++++++++++++++++++++++++++- > 1 file changed, 31 insertions(+), 1 deletion(-) > > diff --git a/sequencer.c b/sequencer.c > index a6625e765d..a4e9b326ba 100644 > --- a/sequencer.c > +++ b/sequencer.c > @@ -100,6 +100,8 @@ static GIT_PATH_FUNC(rebase_path_stopped_sha, > "rebase-merge/stopped-sha") > static GIT_PATH_FUNC(rebase_path_gpg_sign_opt, "rebase-merge/gpg_sign_opt") > static GIT_PATH_FUNC(rebase_path_orig_head, "rebase-merge/orig-head") > static GIT_PATH_FUNC(rebase_path_verbose, "rebase-merge/verbose") > +static GIT_PATH_FUNC(rebase_path_head_name, "rebase-merge/head-name") > +static GIT_PATH_FUNC(rebase_path_onto, "rebase-merge/onto") > > static inline int is_rebase_i(const struct replay_opts *opts) > { > @@ -1793,12 +1795,39 @@ static int pick_commits(struct todo_list *todo_list, > struct replay_opts *opts) > } > > if (is_rebase_i(opts)) { > - struct strbuf buf = STRBUF_INIT; > + struct strbuf head_ref = STRBUF_INIT, buf = STRBUF_INIT; > > /* Stopped in the middle, as planned?
*/ > if (todo_list->current < todo_list->nr) > return 0; > > + if (read_oneliner(&head_ref, rebase_path_head_name(), 0) && > + starts_with(head_ref.buf, "refs/")) { > + unsigned char head[20], orig[20]; > + > + if (get_sha1("HEAD", head)) > + return error(_("cannot read HEAD")); > + if (!read_oneliner(&buf, rebase_path_orig_head(), 0) || > + get_sha1_hex(buf.buf, orig)) > + return error(_("could not read orig-head")); > + strbuf_addf(&buf, "rebase -i (finish): %s onto ", > + head_ref.buf); > + if (!read_oneliner(&buf, rebase_path_onto(), 0)) > + return error(_("could not read 'onto'")); > + if (update_ref(buf.buf, head_ref.buf, head, orig, > + REF_NODEREF, UPDATE_REFS_MSG_ON_ERR)) > + return error(_("could not update %s"), > + head_ref.buf); > + strbuf_reset(&buf); > + strbuf_addf(&buf, > + "rebase -i (finish): returning to %s", > + head_ref.buf); > + if (create_symref("HEAD", head_ref.buf, buf.buf)) > + return error(_("could not update HEAD to %s"), > + head_ref.buf); All of the above return error() calls leak head_ref.buf; in addition some leak buf.buf, too. > + strbuf_reset(&buf); > + } > + > if (opts->verbose) { > const char *argv[] = { > "diff-tree", "--stat", NULL, NULL > @@ -1813,6 +1842,7 @@ static int pick_commits(struct todo_list *todo_list, > struct replay_opts *opts) > strbuf_reset(&buf); > } > strbuf_release(&buf); > + strbuf_release(&head_ref); > } > > /*
Re: [PATCH 1/1] mingw: intercept isatty() to handle /dev/null as Git expects it
Johannes Sixt writes: > On 16.12.2016 at 19:08, Junio C Hamano wrote: >> Sorry for not having waited for you to chime in before agreeing to >> fast-track this one, and now this is in 'master'. > > No reason to be sorry, things happen... Dscho's request for > fast-tracking was very reasonable, and the patch looked "obviously > correct". Oh, I do agree it was reasonable and looked obviously correct. And I suspect that it did not make anything worse, either, so there is not much harm done, other than that I now have to remember not to merge it down without further fixes to 'maint' and declare the NUL thing fixed ;-) > Unfortunately, I'm away from my Windows box over the weekend. It will > have to wait until Monday before I can test this idea. Thanks.
[PATCH v7 4/7] grep: add submodules as a grep source type
Add `GREP_SOURCE_SUBMODULE` as a grep_source type and cases for this new type in the various switch statements in grep.c. When initializing a grep_source with type `GREP_SOURCE_SUBMODULE` the identifier can either be NULL (to indicate that the working tree will be used) or a SHA1 (the REV of the submodule to be grep'd). If the identifier is a SHA1 then we want to fall through to the `GREP_SOURCE_SHA1` case to handle the copying of the SHA1. Signed-off-by: Brandon Williams--- grep.c | 16 +++- grep.h | 1 + 2 files changed, 16 insertions(+), 1 deletion(-) diff --git a/grep.c b/grep.c index 1194d35..0dbdc1d 100644 --- a/grep.c +++ b/grep.c @@ -1735,12 +1735,23 @@ void grep_source_init(struct grep_source *gs, enum grep_source_type type, case GREP_SOURCE_FILE: gs->identifier = xstrdup(identifier); break; + case GREP_SOURCE_SUBMODULE: + if (!identifier) { + gs->identifier = NULL; + break; + } + /* +* FALL THROUGH +* If the identifier is non-NULL (in the submodule case) it +* will be a SHA1 that needs to be copied. +*/ case GREP_SOURCE_SHA1: gs->identifier = xmalloc(20); hashcpy(gs->identifier, identifier); break; case GREP_SOURCE_BUF: gs->identifier = NULL; + break; } } @@ -1760,6 +1771,7 @@ void grep_source_clear_data(struct grep_source *gs) switch (gs->type) { case GREP_SOURCE_FILE: case GREP_SOURCE_SHA1: + case GREP_SOURCE_SUBMODULE: free(gs->buf); gs->buf = NULL; gs->size = 0; @@ -1831,8 +1843,10 @@ static int grep_source_load(struct grep_source *gs) return grep_source_load_sha1(gs); case GREP_SOURCE_BUF: return gs->buf ? 0 : -1; + case GREP_SOURCE_SUBMODULE: + break; } - die("BUG: invalid grep_source type"); + die("BUG: invalid grep_source type to load"); } void grep_source_load_driver(struct grep_source *gs) diff --git a/grep.h b/grep.h index 5856a23..267534c 100644 --- a/grep.h +++ b/grep.h @@ -161,6 +161,7 @@ struct grep_source { GREP_SOURCE_SHA1, GREP_SOURCE_FILE, GREP_SOURCE_BUF, + GREP_SOURCE_SUBMODULE, } type; void *identifier; -- 2.8.0.rc3.226.g39d4020
[PATCH v7 6/7] grep: enable recurse-submodules to work on objects
Teach grep to recursively search in submodules when provided with a object. This allows grep to search a submodule based on the state of the submodule that is present in a commit of the super project. When grep is provided with a object, the name of the object is prefixed to all output. In order to provide uniformity of output between the parent and child processes the option `--parent-basename` has been added so that the child can preface all of it's output with the name of the parent's object instead of the name of the commit SHA1 of the submodule. This changes output from the command `git grep -e. -l --recurse-submodules HEAD` from: HEAD:file :sub/file to: HEAD:file HEAD:sub/file Signed-off-by: Brandon Williams--- Documentation/git-grep.txt | 13 - builtin/grep.c | 76 --- t/t7814-grep-recurse-submodules.sh | 103 - tree-walk.c| 28 ++ 4 files changed, 211 insertions(+), 9 deletions(-) diff --git a/Documentation/git-grep.txt b/Documentation/git-grep.txt index 17aa1ba..71f32f3 100644 --- a/Documentation/git-grep.txt +++ b/Documentation/git-grep.txt @@ -26,7 +26,7 @@ SYNOPSIS [--threads ] [-f ] [-e] [--and|--or|--not|(|)|-e ...] - [--recurse-submodules] + [--recurse-submodules] [--parent-basename ] [ [--[no-]exclude-standard] [--cached | --no-index | --untracked] | ...] [--] [...] @@ -91,7 +91,16 @@ OPTIONS --recurse-submodules:: Recursively search in each submodule that has been initialized and - checked out in the repository. + checked out in the repository. When used in combination with the +option the prefix of all submodule output will be the name of + the parent project's object. + +--parent-basename :: + For internal use only. In order to produce uniform output with the + --recurse-submodules option, this option can be used to provide the + basename of a parent's object to a submodule so the submodule + can prefix its output with the parent's name rather than the SHA1 of + the submodule. 
-a:: --text:: diff --git a/builtin/grep.c b/builtin/grep.c index dca0be6..5918a26 100644 --- a/builtin/grep.c +++ b/builtin/grep.c @@ -19,6 +19,7 @@ #include "dir.h" #include "pathspec.h" #include "submodule.h" +#include "submodule-config.h" static char const * const grep_usage[] = { N_("git grep [] [-e] [...] [[--] ...]"), @@ -28,6 +29,7 @@ static char const * const grep_usage[] = { static const char *super_prefix; static int recurse_submodules; static struct argv_array submodule_options = ARGV_ARRAY_INIT; +static const char *parent_basename; static int grep_submodule_launch(struct grep_opt *opt, const struct grep_source *gs); @@ -534,19 +536,53 @@ static int grep_submodule_launch(struct grep_opt *opt, { struct child_process cp = CHILD_PROCESS_INIT; int status, i; + const char *end_of_base; + const char *name; struct work_item *w = opt->output_priv; + end_of_base = strchr(gs->name, ':'); + if (gs->identifier && end_of_base) + name = end_of_base + 1; + else + name = gs->name; + prepare_submodule_repo_env(_array); /* Add super prefix */ argv_array_pushf(, "--super-prefix=%s%s/", super_prefix ? super_prefix : "", -gs->name); +name); argv_array_push(, "grep"); + /* +* Add basename of parent project +* When performing grep on a tree object the filename is prefixed +* with the object's name: 'tree-name:filename'. In order to +* provide uniformity of output we want to pass the name of the +* parent project's object name to the submodule so the submodule can +* prefix its output with the parent's name and not its own SHA1. +*/ + if (gs->identifier && end_of_base) + argv_array_pushf(, "--parent-basename=%.*s", +(int) (end_of_base - gs->name), +gs->name); + /* Add options */ - for (i = 0; i < submodule_options.argc; i++) + for (i = 0; i < submodule_options.argc; i++) { + /* +* If there is a tree identifier for the submodule, add the +* rev after adding the submodule options but before the +* pathspecs. 
To do this we listen for the '--' and insert the +* sha1 before pushing the '--' onto the child process argv +* array. +*/ + if (gs->identifier && + !strcmp("--", submodule_options.argv[i])) { + argv_array_push(, sha1_to_hex(gs->identifier)); +
[PATCH v7 7/7] grep: search history of moved submodules
If a submodule was renamed at any point since it's inception then if you were to try and grep on a commit prior to the submodule being moved, you wouldn't be able to find a working directory for the submodule since the path in the past is different from the current path. This patch teaches grep to find the .git directory for a submodule in the parents .git/modules/ directory in the event the path to the submodule in the commit that is being searched differs from the state of the currently checked out commit. If found, the child process that is spawned to grep the submodule will chdir into its gitdir instead of a working directory. In order to override the explicit setting of submodule child process's gitdir environment variable (which was introduced in '10f5c526') `GIT_DIR_ENVIORMENT` needs to be pushed onto child process's env_array. This allows the searching of history from a submodule's gitdir, rather than from a working directory. Signed-off-by: Brandon Williams--- builtin/grep.c | 20 +-- t/t7814-grep-recurse-submodules.sh | 41 ++ 2 files changed, 59 insertions(+), 2 deletions(-) diff --git a/builtin/grep.c b/builtin/grep.c index 5918a26..2c727ef 100644 --- a/builtin/grep.c +++ b/builtin/grep.c @@ -547,6 +547,7 @@ static int grep_submodule_launch(struct grep_opt *opt, name = gs->name; prepare_submodule_repo_env(_array); + argv_array_push(_array, GIT_DIR_ENVIRONMENT); /* Add super prefix */ argv_array_pushf(, "--super-prefix=%s%s/", @@ -615,8 +616,23 @@ static int grep_submodule(struct grep_opt *opt, const unsigned char *sha1, { if (!is_submodule_initialized(path)) return 0; - if (!is_submodule_populated(path)) - return 0; + if (!is_submodule_populated(path)) { + /* +* If searching history, check for the presense of the +* submodule's gitdir before skipping the submodule. 
+*/ + if (sha1) { + const struct submodule *sub = + submodule_from_path(null_sha1, path); + if (sub) + path = git_path("modules/%s", sub->name); + + if (!(is_directory(path) && is_git_directory(path))) + return 0; + } else { + return 0; + } + } #ifndef NO_PTHREADS if (num_threads) { diff --git a/t/t7814-grep-recurse-submodules.sh b/t/t7814-grep-recurse-submodules.sh index d5fc316..67247a0 100755 --- a/t/t7814-grep-recurse-submodules.sh +++ b/t/t7814-grep-recurse-submodules.sh @@ -186,6 +186,47 @@ test_expect_success !MINGW 'grep recurse submodule colon in name' ' test_cmp expect actual ' +test_expect_success 'grep history with moved submoules' ' + git init parent && + test_when_finished "rm -rf parent" && + echo "foobar" >parent/file && + git -C parent add file && + git -C parent commit -m "add file" && + + git init sub && + test_when_finished "rm -rf sub" && + echo "foobar" >sub/file && + git -C sub add file && + git -C sub commit -m "add file" && + + git -C parent submodule add ../sub dir/sub && + git -C parent commit -m "add submodule" && + + cat >expect <<-\EOF && + dir/sub/file:foobar + file:foobar + EOF + git -C parent grep -e "foobar" --recurse-submodules >actual && + test_cmp expect actual && + + git -C parent mv dir/sub sub-moved && + git -C parent commit -m "moved submodule" && + + cat >expect <<-\EOF && + file:foobar + sub-moved/file:foobar + EOF + git -C parent grep -e "foobar" --recurse-submodules >actual && + test_cmp expect actual && + + cat >expect <<-\EOF && + HEAD^:dir/sub/file:foobar + HEAD^:file:foobar + EOF + git -C parent grep -e "foobar" --recurse-submodules HEAD^ >actual && + test_cmp expect actual +' + test_incompatible_with_recurse_submodules () { test_expect_success "--recurse-submodules and $1 are incompatible" " -- 2.8.0.rc3.226.g39d4020
[PATCH v7 5/7] grep: optionally recurse into submodules
Allow grep to recognize submodules and recursively search for patterns in each submodule. This is done by forking off a process to recursively call grep on each submodule. The top level --super-prefix option is used to pass a path to the submodule which can in turn be used to prepend to output or in pathspec matching logic.

Recursion only occurs for submodules which have been initialized and checked out by the parent project. If a submodule hasn't been initialized and checked out, it is simply skipped.

In order to support the existing multi-threading infrastructure in grep, output from each child process is captured in a strbuf so that it can be later printed to the console in an ordered fashion. To limit the number of threads that are created, each child process has half the number of threads as its parent (minimum of 1), otherwise we potentially have a fork-bomb.

Signed-off-by: Brandon Williams --- Documentation/git-grep.txt | 5 + builtin/grep.c | 300 ++--- git.c | 2 +- t/t7814-grep-recurse-submodules.sh | 99 4 files changed, 386 insertions(+), 20 deletions(-) create mode 100755 t/t7814-grep-recurse-submodules.sh diff --git a/Documentation/git-grep.txt b/Documentation/git-grep.txt index 0ecea6e..17aa1ba 100644 --- a/Documentation/git-grep.txt +++ b/Documentation/git-grep.txt @@ -26,6 +26,7 @@ SYNOPSIS [--threads ] [-f ] [-e] [--and|--or|--not|(|)|-e ...] + [--recurse-submodules] [ [--[no-]exclude-standard] [--cached | --no-index | --untracked] | ...] [--] [...] @@ -88,6 +89,10 @@ OPTIONS mechanism. Only useful when searching files in the current directory with `--no-index`. +--recurse-submodules:: + Recursively search in each submodule that has been initialized and + checked out in the repository. + -a:: --text:: Process binary files as if they were text.
diff --git a/builtin/grep.c b/builtin/grep.c index 8887b6a..dca0be6 100644 --- a/builtin/grep.c +++ b/builtin/grep.c @@ -18,12 +18,20 @@ #include "quote.h" #include "dir.h" #include "pathspec.h" +#include "submodule.h" static char const * const grep_usage[] = { N_("git grep [] [-e] [...] [[--] ...]"), NULL }; +static const char *super_prefix; +static int recurse_submodules; +static struct argv_array submodule_options = ARGV_ARRAY_INIT; + +static int grep_submodule_launch(struct grep_opt *opt, +const struct grep_source *gs); + #define GREP_NUM_THREADS_DEFAULT 8 static int num_threads; @@ -174,7 +182,10 @@ static void *run(void *arg) break; opt->output_priv = w; - hit |= grep_source(opt, >source); + if (w->source.type == GREP_SOURCE_SUBMODULE) + hit |= grep_submodule_launch(opt, >source); + else + hit |= grep_source(opt, >source); grep_source_clear_data(>source); work_done(w); } @@ -300,6 +311,10 @@ static int grep_sha1(struct grep_opt *opt, const unsigned char *sha1, if (opt->relative && opt->prefix_length) { quote_path_relative(filename + tree_name_len, opt->prefix, ); strbuf_insert(, 0, filename, tree_name_len); + } else if (super_prefix) { + strbuf_add(, filename, tree_name_len); + strbuf_addstr(, super_prefix); + strbuf_addstr(, filename + tree_name_len); } else { strbuf_addstr(, filename); } @@ -328,10 +343,13 @@ static int grep_file(struct grep_opt *opt, const char *filename) { struct strbuf buf = STRBUF_INIT; - if (opt->relative && opt->prefix_length) + if (opt->relative && opt->prefix_length) { quote_path_relative(filename, opt->prefix, ); - else + } else { + if (super_prefix) + strbuf_addstr(, super_prefix); strbuf_addstr(, filename); + } #ifndef NO_PTHREADS if (num_threads) { @@ -378,31 +396,260 @@ static void run_pager(struct grep_opt *opt, const char *prefix) exit(status); } -static int grep_cache(struct grep_opt *opt, const struct pathspec *pathspec, int cached) +static void compile_submodule_options(const struct grep_opt *opt, + const struct pathspec 
*pathspec, + int cached, int untracked, + int opt_exclude, int use_index, + int pattern_type_arg) +{ + struct grep_pat *pattern; + int i; + + if (recurse_submodules) + argv_array_push(_options, "--recurse-submodules"); + + if (cached) + argv_array_push(_options,
[PATCH v7 1/7] submodules: add helper to determine if a submodule is populated
Add the `is_submodule_populated()` helper function to submodule.c. `is_submodule_populated()` performs a check to see if a submodule has been checked out (and has a valid .git directory/file) at the given path.

Signed-off-by: Brandon Williams --- submodule.c | 15 +++ submodule.h | 1 + 2 files changed, 16 insertions(+) diff --git a/submodule.c b/submodule.c index c85ba50..ee3198d 100644 --- a/submodule.c +++ b/submodule.c @@ -198,6 +198,21 @@ void gitmodules_config(void) } } +/* + * Determine if a submodule has been populated at a given 'path' + */ +int is_submodule_populated(const char *path) +{ + int ret = 0; + char *gitdir = xstrfmt("%s/.git", path); + + if (resolve_gitdir(gitdir)) + ret = 1; + + free(gitdir); + return ret; +} + int parse_submodule_update_strategy(const char *value, struct submodule_update_strategy *dst) { diff --git a/submodule.h b/submodule.h index d9e197a..c4af505 100644 --- a/submodule.h +++ b/submodule.h @@ -37,6 +37,7 @@ void set_diffopt_flags_from_submodule_config(struct diff_options *diffopt, const char *path); int submodule_config(const char *var, const char *value, void *cb); void gitmodules_config(void); +extern int is_submodule_populated(const char *path); int parse_submodule_update_strategy(const char *value, struct submodule_update_strategy *dst); const char *submodule_strategy_to_string(const struct submodule_update_strategy *s); -- 2.8.0.rc3.226.g39d4020
[PATCH v7 3/7] submodules: load gitmodules file from commit sha1
teach submodules to load a '.gitmodules' file from a commit sha1. This enables the population of the submodule_cache to be based on the state of the '.gitmodules' file from a particular commit. Signed-off-by: Brandon Williams--- cache.h| 2 ++ config.c | 8 submodule-config.c | 6 +++--- submodule-config.h | 3 +++ submodule.c| 12 submodule.h| 1 + 6 files changed, 25 insertions(+), 7 deletions(-) diff --git a/cache.h b/cache.h index e12a5d9..de237ca 100644 --- a/cache.h +++ b/cache.h @@ -1693,6 +1693,8 @@ extern int git_default_config(const char *, const char *, void *); extern int git_config_from_file(config_fn_t fn, const char *, void *); extern int git_config_from_mem(config_fn_t fn, const enum config_origin_type, const char *name, const char *buf, size_t len, void *data); +extern int git_config_from_blob_sha1(config_fn_t fn, const char *name, +const unsigned char *sha1, void *data); extern void git_config_push_parameter(const char *text); extern int git_config_from_parameters(config_fn_t fn, void *data); extern void git_config(config_fn_t fn, void *); diff --git a/config.c b/config.c index 83fdecb..4d78e72 100644 --- a/config.c +++ b/config.c @@ -1214,10 +1214,10 @@ int git_config_from_mem(config_fn_t fn, const enum config_origin_type origin_typ return do_config_from(, fn, data); } -static int git_config_from_blob_sha1(config_fn_t fn, -const char *name, -const unsigned char *sha1, -void *data) +int git_config_from_blob_sha1(config_fn_t fn, + const char *name, + const unsigned char *sha1, + void *data) { enum object_type type; char *buf; diff --git a/submodule-config.c b/submodule-config.c index 098085b..8b9a2ef 100644 --- a/submodule-config.c +++ b/submodule-config.c @@ -379,9 +379,9 @@ static int parse_config(const char *var, const char *value, void *data) return ret; } -static int gitmodule_sha1_from_commit(const unsigned char *commit_sha1, - unsigned char *gitmodules_sha1, - struct strbuf *rev) +int gitmodule_sha1_from_commit(const unsigned char *commit_sha1, + 
unsigned char *gitmodules_sha1, + struct strbuf *rev) { int ret = 0; diff --git a/submodule-config.h b/submodule-config.h index d05c542..78584ba 100644 --- a/submodule-config.h +++ b/submodule-config.h @@ -29,6 +29,9 @@ const struct submodule *submodule_from_name(const unsigned char *commit_sha1, const char *name); const struct submodule *submodule_from_path(const unsigned char *commit_sha1, const char *path); +extern int gitmodule_sha1_from_commit(const unsigned char *commit_sha1, + unsigned char *gitmodules_sha1, + struct strbuf *rev); void submodule_free(void); #endif /* SUBMODULE_CONFIG_H */ diff --git a/submodule.c b/submodule.c index edffaa1..2600908 100644 --- a/submodule.c +++ b/submodule.c @@ -198,6 +198,18 @@ void gitmodules_config(void) } } +void gitmodules_config_sha1(const unsigned char *commit_sha1) +{ + struct strbuf rev = STRBUF_INIT; + unsigned char sha1[20]; + + if (gitmodule_sha1_from_commit(commit_sha1, sha1, )) { + git_config_from_blob_sha1(submodule_config, rev.buf, + sha1, NULL); + } + strbuf_release(); +} + /* * Determine if a submodule has been initialized at a given 'path' */ diff --git a/submodule.h b/submodule.h index 6ec5f2f..9203d89 100644 --- a/submodule.h +++ b/submodule.h @@ -37,6 +37,7 @@ void set_diffopt_flags_from_submodule_config(struct diff_options *diffopt, const char *path); int submodule_config(const char *var, const char *value, void *cb); void gitmodules_config(void); +extern void gitmodules_config_sha1(const unsigned char *commit_sha1); extern int is_submodule_initialized(const char *path); extern int is_submodule_populated(const char *path); int parse_submodule_update_strategy(const char *value, -- 2.8.0.rc3.226.g39d4020
[PATCH v7 0/7] recursively grep across submodules
Changes in v7: * Rebased on 'origin/bw/realpath-wo-chdir' in order to fix the race condition that occurs when verifying a submodule's gitdir. * Reverted is_submodule_populated() to use resolve_gitdir() now that there is no race condition. * Added !MINGW to a test in t7814 so that it won't run on windows. This is due to testing if colons in filenames are still handled correctly, yet windows doesn't allow colons in filenames. Brandon Williams (7): submodules: add helper to determine if a submodule is populated submodules: add helper to determine if a submodule is initialized submodules: load gitmodules file from commit sha1 grep: add submodules as a grep source type grep: optionally recurse into submodules grep: enable recurse-submodules to work on objects grep: search history of moved submodules Documentation/git-grep.txt | 14 ++ builtin/grep.c | 386 ++--- cache.h| 2 + config.c | 8 +- git.c | 2 +- grep.c | 16 +- grep.h | 1 + submodule-config.c | 6 +- submodule-config.h | 3 + submodule.c| 50 + submodule.h| 3 + t/t7814-grep-recurse-submodules.sh | 241 +++ tree-walk.c| 28 +++ 13 files changed, 729 insertions(+), 31 deletions(-) create mode 100755 t/t7814-grep-recurse-submodules.sh -- 2.8.0.rc3.226.g39d4020
[PATCH v7 2/7] submodules: add helper to determine if a submodule is initialized
Add the `is_submodule_initialized()` helper function to submodules.c. `is_submodule_initialized()` performs a check to determine if the submodule at the given path has been initialized. Signed-off-by: Brandon Williams--- submodule.c | 23 +++ submodule.h | 1 + 2 files changed, 24 insertions(+) diff --git a/submodule.c b/submodule.c index ee3198d..edffaa1 100644 --- a/submodule.c +++ b/submodule.c @@ -199,6 +199,29 @@ void gitmodules_config(void) } /* + * Determine if a submodule has been initialized at a given 'path' + */ +int is_submodule_initialized(const char *path) +{ + int ret = 0; + const struct submodule *module = NULL; + + module = submodule_from_path(null_sha1, path); + + if (module) { + char *key = xstrfmt("submodule.%s.url", module->name); + char *value = NULL; + + ret = !git_config_get_string(key, ); + + free(value); + free(key); + } + + return ret; +} + +/* * Determine if a submodule has been populated at a given 'path' */ int is_submodule_populated(const char *path) diff --git a/submodule.h b/submodule.h index c4af505..6ec5f2f 100644 --- a/submodule.h +++ b/submodule.h @@ -37,6 +37,7 @@ void set_diffopt_flags_from_submodule_config(struct diff_options *diffopt, const char *path); int submodule_config(const char *var, const char *value, void *cb); void gitmodules_config(void); +extern int is_submodule_initialized(const char *path); extern int is_submodule_populated(const char *path); int parse_submodule_update_strategy(const char *value, struct submodule_update_strategy *dst); -- 2.8.0.rc3.226.g39d4020
Re: [PATCH v15 08/27] bisect--helper: `is_expected_rev` & `check_expected_revs` shell function in C
Hey Stephan,

On Wed, Dec 7, 2016 at 1:03 AM, Pranit Bauva wrote:
>> I don't understand why the return value is int and not void. To avoid a
>> "return 0;" line when calling this function?
>
> Initially I thought I would be using the return value but now I
> realize that it is meaningless to do so. Using void seems better. :)

While creating the next iteration of this series, I just recollected that I will need that int.

>>> + case CHECK_EXPECTED_REVS:
>>> + return check_expected_revs(argv, argc);

See this.

Regards,
Pranit Bauva
Re: index-pack outside of repository?
Junio C Hamanowrites: > Jeff King writes: > ... >> I'm actually wondering if the way it calls die() in 'next' is a pretty >> reasonable way for things to work in general. It happens when we lazily >> try to ask for the repository directory. So we don't have to replicate >> logic to say "are we going to need a repo"; at the moment we need it, we >> notice we don't have it and die. The only problem is that it says "BUG" >> and not "this operation must be run in a git repository". > > Isn't what we currently have is a good way to discover which > codepaths we missed to add a check to issue the latter error? I am tempted to suggest an intermediate step that comes before b1ef400eec ("setup_git_env: avoid blind fall-back to ".git"", 2016-10-20), which is the attached, and publish that as part of an official release. That way, we'll see what is broken without hurting people too much (unless they or their scripts care about extra message given to the standard error stream). I suspect that released Git has a slightly larger user base than what is cooked on 'next'. environment.c | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/environment.c b/environment.c index 0935ec696e..88f857331e 100644 --- a/environment.c +++ b/environment.c @@ -167,8 +167,11 @@ static void setup_git_env(void) const char *replace_ref_base; git_dir = getenv(GIT_DIR_ENVIRONMENT); - if (!git_dir) + if (!git_dir) { + if (!startup_info->have_repository) + warning("BUG: please report this at git@vger.kernel.org"); git_dir = DEFAULT_GIT_DIR_ENVIRONMENT; + } gitfile = read_gitfile(git_dir); git_dir = xstrdup(gitfile ? gitfile : git_dir); if (get_common_dir(, git_dir))
Re: [PATCH 1/1] mingw: intercept isatty() to handle /dev/null as Git expects it
On 16.12.2016 at 19:08, Junio C Hamano wrote:
> Sorry for not having waited for you to chime in before agreeing to
> fast-track this one, and now this is in 'master'.

No reason to be sorry, things happen... Dscho's request for fast-tracking was very reasonable, and the patch looked "obviously correct". I lacked time to test it promptly, and when I finally did, I ignored the symptoms because my workflow was a mess and I had to concentrate on other things. Only when 'git push' did not report the pack progress was my suspicion raised...

> Should we not use winansi_get_osfhandle() instead of _get_osfhandle()?

Unfortunately, I'm away from my Windows box over the weekend. It will have to wait until Monday before I can test this idea.

-- Hannes
Re: [RFC/PATCH] Makefile: suppress some cppcheck false-positives
Chris Packhamwrites: > -CPPCHECK_FLAGS = --force --quiet --inline-suppr $(if > $(CPPCHECK_ADD),--enable=$(CPPCHECK_ADD)) > +CPPCHECK_SUPP = --suppressions-list=nedmalloc.supp \ > + --suppressions-list=regcomp.supp > + > +CPPCHECK_FLAGS = --force --quiet --inline-suppr \ > + $(if $(CPPCHECK_ADD),--enable=$(CPPCHECK_ADD)) \ > + $(CPPCHECK_SUPP) Has it been agreed that it is a good idea to tie $(CPPCHECK_ADD) only to --enable? I somehow thought I saw an objection to this. > diff --git a/nedmalloc.supp b/nedmalloc.supp > new file mode 100644 > index 0..37bd54def > --- /dev/null > +++ b/nedmalloc.supp > @@ -0,0 +1,4 @@ > +nullPointer:compat/nedmalloc/malloc.c.h:4093 > +nullPointer:compat/nedmalloc/malloc.c.h:4106 > +memleak:compat/nedmalloc/malloc.c.h:4646 > + > diff --git a/regcomp.supp b/regcomp.supp > new file mode 100644 > index 0..3ae023c26 > --- /dev/null > +++ b/regcomp.supp > @@ -0,0 +1,8 @@ > +memleak:compat/regex/regcomp.c:3086 > +memleak:compat/regex/regcomp.c:3634 > +memleak:compat/regex/regcomp.c:3086 > +memleak:compat/regex/regcomp.c:3634 > +uninitvar:compat/regex/regcomp.c:2802 > +uninitvar:compat/regex/regcomp.c:2805 > +memleak:compat/regex/regcomp.c:532 > + Yuck for both files for multiple reasons. I do not think it is a good idea to allow these files to clutter the top-level of tree. How often do we expect that we may have to add more of these files? Every time we start borrowing code from third parties? What is the goal we want to achieve by running cppcheck? a. Our code must be clean but we do not bother "fixing" [*1*] the code we borrow from third parties and squelch output instead? b. Both our own code and third party code we borrow need to be free of errors and misdetections from cppcheck? c. Something else? If a. is what we aim for, perhaps a better option may be not to run cppcheck for the code we borrowed from third-party at all in the first place. If b. 
is our goal, we need to make sure that the false positive rate of cppcheck is acceptably low. [Footnote] *1* "Fixing" a real problem it uncovers is a good thing, but it may have to also involve working around false positives reported by cppcheck, either by rewriting a perfectly correct code to please it, or adding these line-number based suppression that is totally unworkable. Like Peff, I'm worried that we will end up with two static checkers fighting each other, and no good way to please both.
Segfault in git_config_set_multivar_in_file_gently with direct_io in FUSE filesystem
I am using git with a simple in-memory FUSE filesystem. When I enable direct_io, I get a segfault from git_config_set_multivar_in_file_gently during git clone. I have full reproduction instructions using Go and macOS at https://github.com/josharian/gitbug. It also includes a stack trace in case anyone wants to try blind debugging. Happy to provide more info if it will help. Thanks, Josh
Re: [PATCH 1/1] mingw: intercept isatty() to handle /dev/null as Git expects it
Johannes Sixt writes:

> On 11.12.2016 at 12:16, Johannes Schindelin wrote:
>> When Git's source code calls isatty(), it really asks whether the
>> respective file descriptor is connected to an interactive terminal.
>> ...
>> +if (!GetConsoleScreenBufferInfo(handle, ))
>> +res = 0;
>
> I am sorry to have to report that this check does not work as
> expected. I am operating Git from CMD, not mintty, BTW.

Ouch. Sorry for not having waited for you to chime in before agreeing to fast-track this one, and now this is in 'master'. I should have known better than that by now; having seen you uncover bugs that did not trigger in Dscho's environment, and vice versa, should have told me that Windows-specific changes both of you deem good have a much higher chance of being correct than a change without an Ack from the other.

> It fails with GetLastError() == 6 (invalid handle value). The reason
> for this is, I think, that in reality we are not writing to the
> console directly, but to a pipe, which is drained by console_thread(),
> which writes to the console.
>
> I have little clue what this winansi.c file is doing. Do you have any
> pointers where I should start digging?
>
> Wait...
>
> Should we not use winansi_get_osfhandle() instead of _get_osfhandle()?
>
>> +}
>> +}
>> +
>> +return res;
>> +}
>> +
>> void winansi_init(void)
>> {
>> int con1, con2;
>>
Re: index-pack outside of repository?
Junio C Hamanowrites: > Junio C Hamano writes: > >> Jeff King writes: >> >>> On Thu, Dec 15, 2016 at 08:37:28PM -0500, Jeff King wrote: >>> But if this case really is just "if (from_stdin)" that's quite easy, too. >>> >>> So here is that patch (with some associated refactoring and cleanups). >>> This is conceptually independent of >>> jk/no-looking-at-dotgit-outside-repo-final, >>> though it should be fine to merge with that topic. The BUG will actually >>> pass the new test, because it calls die, too. I wonder if we should die >>> with a unique error code on BUGs, and catch them in test_must_fail >>> similar to the way we catch signal death. >>> >>> [1/3]: t5000: extract nongit function to test-lib-functions.sh >>> [2/3]: index-pack: complain when --stdin is used outside of a repo >>> [3/3]: t: use nongit() function where applicable >> >> I think 2/3 is a good change to ensure we get a reasonable error for >> "index-pack --stdin", and 3/3 is a very good cleanup. Both of them >> of course are enabled by 1/3. >> >> We still fail "nongit git index-pack tmp.pack" with a BUG: though. > > Wait. > > This only happens with a stalled-and-to-be-discarded topic on 'pu'. > Please don't waste time digging it (yet). Don't wait ;-). My mistake. We can see t5300 broken with this change and b1ef400eec ("setup_git_env: avoid blind fall-back to ".git"", 2016-10-20) without anything else. We still need to address it.
Re: index-pack outside of repository?
Jeff Kingwrites: > On Thu, Dec 15, 2016 at 08:37:28PM -0500, Jeff King wrote: > >> But if this case really is just "if (from_stdin)" that's quite easy, >> too. > > So here is that patch (with some associated refactoring and cleanups). > This is conceptually independent of > jk/no-looking-at-dotgit-outside-repo-final, > though it should be fine to merge with that topic. The BUG will actually > pass the new test, because it calls die, too. I wonder if we should die > with a unique error code on BUGs, and catch them in test_must_fail > similar to the way we catch signal death. > > [1/3]: t5000: extract nongit function to test-lib-functions.sh > [2/3]: index-pack: complain when --stdin is used outside of a repo > [3/3]: t: use nongit() function where applicable I think 2/3 is a good change to ensure we get a reasonable error for "index-pack --stdin", and 3/3 is a very good cleanup. Both of them of course are enabled by 1/3. We still fail "nongit git index-pack tmp.pack" with a BUG: though.
Re: index-pack outside of repository?
Junio C Hamanowrites: > Jeff King writes: > >> On Thu, Dec 15, 2016 at 08:37:28PM -0500, Jeff King wrote: >> >>> But if this case really is just "if (from_stdin)" that's quite easy, >>> too. >> >> So here is that patch (with some associated refactoring and cleanups). >> This is conceptually independent of >> jk/no-looking-at-dotgit-outside-repo-final, >> though it should be fine to merge with that topic. The BUG will actually >> pass the new test, because it calls die, too. I wonder if we should die >> with a unique error code on BUGs, and catch them in test_must_fail >> similar to the way we catch signal death. >> >> [1/3]: t5000: extract nongit function to test-lib-functions.sh >> [2/3]: index-pack: complain when --stdin is used outside of a repo >> [3/3]: t: use nongit() function where applicable > > I think 2/3 is a good change to ensure we get a reasonable error for > "index-pack --stdin", and 3/3 is a very good cleanup. Both of them > of course are enabled by 1/3. > > We still fail "nongit git index-pack tmp.pack" with a BUG: though. Wait. This only happens with a stalled-and-to-be-discarded topic on 'pu'. Please don't waste time digging it (yet).
Re: [PATCH] README: replace gmane link with public-inbox
Eric Wong writes:

> Jeff King wrote:
> ...
>> Yes, the status of gmane was up in the air for a while, but I think we
>> can give it up as dead now (at least for our purposes).
>
> s/http/nntp/ still works for gmane.
> ...
>> -- >8 --
>> Subject: README: replace gmane link with public-inbox
> ...
> No objections, here.
>
> There's also https://mail-archive.com/git@vger.kernel.org
> Where https://mid.mail-archive.com/ also works

Yes, but do we want to be exhaustive here, or just cite one that is a useful starting point? I think it is the latter. Mentioning NNTP for those who prefer it (including me) may have value, though.

 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index c0cd5580ea..91704fe451 100644
--- a/README.md
+++ b/README.md
@@ -34,7 +34,8 @@ requests, comments and patches to git@vger.kernel.org (read
 To subscribe to the list, send an email with just "subscribe git" in
 the body to majord...@vger.kernel.org. The mailing list archives are
 available at https://public-inbox.org/git,
-http://marc.info/?l=git and other archival sites.
+http://marc.info/?l=git and other archival sites.
+Those who prefer NNTP can use nntp://news.public-inbox.org/inbox.comp.version-control.git

 The maintainer frequently sends the "What's cooking" reports that
 list the current status of various development topics to the mailing
Re: [PATCH 1/1] mingw: intercept isatty() to handle /dev/null as Git expects it
Am 11.12.2016 um 12:16 schrieb Johannes Schindelin: When Git's source code calls isatty(), it really asks whether the respective file descriptor is connected to an interactive terminal. Windows' _isatty() function, however, determines whether the file descriptor is associated with a character device. And NUL, Windows' equivalent of /dev/null, is a character device. Which means that for years, Git mistakenly detected an associated interactive terminal when being run through the test suite, which almost always redirects stdin, stdout and stderr to /dev/null. This bug only became obvious, and painfully so, when the new bisect--helper entered the `pu` branch and made the automatic build & test time out because t6030 was waiting for an answer. For details, see https://msdn.microsoft.com/en-us/library/f4s0ddew.aspx Signed-off-by: Johannes Schindelin--- compat/mingw.h | 3 +++ compat/winansi.c | 33 + 2 files changed, 36 insertions(+) diff --git a/compat/mingw.h b/compat/mingw.h index 034fff9479..3350169555 100644 --- a/compat/mingw.h +++ b/compat/mingw.h @@ -384,6 +384,9 @@ int mingw_raise(int sig); * ANSI emulation wrappers */ +int winansi_isatty(int fd); +#define isatty winansi_isatty + void winansi_init(void); HANDLE winansi_get_osfhandle(int fd); diff --git a/compat/winansi.c b/compat/winansi.c index db4a5b0a37..cb725fb02f 100644 --- a/compat/winansi.c +++ b/compat/winansi.c @@ -7,6 +7,9 @@ #include #include +/* In this file, we actually want to use Windows' own isatty(). 
*/ +#undef isatty + /* ANSI codes used by git: m, K @@ -570,6 +573,36 @@ static void detect_msys_tty(int fd) #endif +int winansi_isatty(int fd) +{ + int res = isatty(fd); + + if (res) { + /* +* Make sure that /dev/null is not fooling Git into believing +* that we are connected to a terminal, as "_isatty() returns a +* nonzero value if the descriptor is associated with a +* character device."; for more information, see +* +* https://msdn.microsoft.com/en-us/library/f4s0ddew.aspx +*/ + HANDLE handle = (HANDLE)_get_osfhandle(fd); + if (fd == STDIN_FILENO) { + DWORD dummy; + + if (!GetConsoleMode(handle, )) + res = 0; + } else if (fd == STDOUT_FILENO || fd == STDERR_FILENO) { + CONSOLE_SCREEN_BUFFER_INFO dummy; + + if (!GetConsoleScreenBufferInfo(handle, )) + res = 0; I am sorry to have to report that this check does not work as expected. I am operating Git from CMD, not mintty, BTW. It fails with GetLastError() == 6 (invalid handle value). The reason for this is, I think, that in reality we are not writing to the console directly, but to a pipe, which is drained by console_thread(), which writes to the console. I have little clue what this winansi.c file is doing. Do you have any pointers where I should start digging? Wait... Should we not use winansi_get_osfhandle() instead of _get_osfhandle()? + } + } + + return res; +} + void winansi_init(void) { int con1, con2;
Re: Allow "git shortlog" to group by committer information
Jeff Kingwrites: > It obviously would need updating if we switch away from "-c", but I > think I am OK with the short "-c" (even if we add a more exotic grouping > option later, this can remain as a short synonym). Yeah, I think it probably is OK. As it is very clear that "group by author" is the default, there is no need to add the corresponding "-a/--author" option, either. The fact that "--no-committer" can countermand an earlier "--committer" on the command line is just how options work, so it probably does not deserve a separate mention, either. Thanks. > -- >8 -- > Subject: [PATCH] shortlog: test and document --committer option > > This puts the final touches on the feature added by > fbfda15fb8 (shortlog: group by committer information, > 2016-10-11). > > Signed-off-by: Jeff King > --- > Documentation/git-shortlog.txt | 4 > t/t4201-shortlog.sh| 13 + > 2 files changed, 17 insertions(+) > > diff --git a/Documentation/git-shortlog.txt b/Documentation/git-shortlog.txt > index 31af7f2736..ee6c5476c1 100644 > --- a/Documentation/git-shortlog.txt > +++ b/Documentation/git-shortlog.txt > @@ -47,6 +47,10 @@ OPTIONS > > Each pretty-printed commit will be rewrapped before it is shown. > > +-c:: > +--committer:: > + Collect and show committer identities instead of authors. > + > -w[[,[,]]]:: > Linewrap the output by wrapping each line at `width`. 
The first > line of each entry is indented by `indent1` spaces, and the second > diff --git a/t/t4201-shortlog.sh b/t/t4201-shortlog.sh > index ae08b57712..6c7c637481 100755 > --- a/t/t4201-shortlog.sh > +++ b/t/t4201-shortlog.sh > @@ -190,4 +190,17 @@ test_expect_success 'shortlog with --output=' ' > test_line_count = 3 shortlog > ' > > +test_expect_success 'shortlog --committer (internal)' ' > + cat >expect <<-\EOF && > + 3 C O Mitter > + EOF > + git shortlog -nsc HEAD >actual && > + test_cmp expect actual > +' > + > +test_expect_success 'shortlog --committer (external)' ' > + git log --format=full | git shortlog -nsc >actual && > + test_cmp expect actual > +' > + > test_done
Re: Races on ref .lock files?
Andreas Kreywrites: > We're occasionally seeing a lot of > > error: cannot lock ref 'stash-refs/pull-requests/18978/to': Unable to > create > '/opt/apps//repositories/68/stash-refs/pull-requests/18978/to.lock': File > exists. > > from the server side with fetches as well as pushes. (Bitbucket server.) > > What I find strange is that neither the fetches nor the pushes even > touch these refs (but the bitbucket triggers underneath might). > > But my question is whether there are race conditions that can cause > such messages in regular operation - they continue with 'If no other git > process is currently running, this probably means a git process crashed > in this repository earlier.' which indicates some level of anticipation. I think (and I think you also think) these messages come from the Bitbucket side, not your "git push" (or "git fetch"). Not having seen Bitbucket's sources, I can only guess, but assuming that its pull-request is triggered from their Web frontend like GitHub's does, it is quite possible when you try to "push" into (or "fetch" from, for that matter) a repository, somebody is clicking a button to create that ref. We do not know what their "receive-pack" that responds to your "git push" (or "upload-pack" for "git fetch") does when there are locked refs. I'd naively think that unless you are pushing to that ref you showed an error message for, the receiving end shouldn't care if the ref is being written by somebody else, but who knows ;-) They may have their own reasons wanting to lock that ref that we think would be irrelevant for the operation, causing errors.
Races on ref .lock files?
Hi all,

We're occasionally seeing a lot of

  error: cannot lock ref 'stash-refs/pull-requests/18978/to': Unable to create
  '/opt/apps//repositories/68/stash-refs/pull-requests/18978/to.lock': File
  exists.

from the server side with fetches as well as pushes. (Bitbucket server.)

What I find strange is that neither the fetches nor the pushes even
touch these refs (but the bitbucket triggers underneath might).

But my question is whether there are race conditions that can cause
such messages in regular operation - they continue with 'If no other git
process is currently running, this probably means a git process crashed
in this repository earlier.' which indicates some level of anticipation.

- Andreas

-- 
"Totally trivial. Famous last words."
From: Linus Torvalds
Date: Fri, 22 Jan 2010 07:29:21 -0800
Re: Is there a way to have local changes in a branch 'bake' while working in different branches?
On Thu, Dec 15, 2016 at 08:14:58PM +, Larry Minton wrote:

> My question:
>
> Let's say I have a code change that I want to 'bake' for a while
> locally, just to make sure some edge case doesn't pop up while I am
> working on other things. Is there any practical way of doing that? I
> could constantly merge that 'bake me' branch into other branches as I
> work on them and then remove those changes from the branches before
> sending them out for code review, but sooner or later I'm pretty much
> guaranteed to screw that up.

I wrote a tool [1] a while ago to manage integration branches, so I use
a personal integration branch to pull together various in-progress
branches. It means you can keep each topic in its own branch but
work/test on top of a unified branch by running:

    git integration --rebuild my-integration-branch

whenever you change one of the topic branches.

I also use the instruction sheet to keep track of abandoned topics that
I might want to go back to but which are currently in a broken state;
you can see an example of that in my CGit integration branch [2].

[1] http://johnkeeping.github.io/git-integration/
[2] https://github.com/johnkeeping/cgit/blob/d01ce31ed3dfa9b05ef971464da2af5b9d6f2756/GIT-INTEGRATION-INSN
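[Editorial aside: the rebuild step such a tool automates can be sketched in plain git — recreate the throwaway unified branch from scratch and merge each topic back in. Branch names here (topic-a, topic-b, integration) are invented for the demo; git-integration itself drives this from an instruction sheet:]

```shell
# Build a throwaway repo with two in-progress topic branches, then
# "rebuild" an integration branch over them.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m base
base=$(git symbolic-ref --short HEAD)

# Each topic lives on its own branch, forked from the base branch.
git checkout -q -b topic-a "$base"
git commit -q --allow-empty -m feature-a
git checkout -q -b topic-b "$base"
git commit -q --allow-empty -m feature-b

# Rebuild: reset the integration branch to base and merge every topic.
# Re-running this after amending a topic regenerates the unified branch.
git checkout -q -B integration "$base"
for topic in topic-a topic-b; do
	git merge -q --no-edit "$topic"
done
git log --format=%s integration
```

Because the integration branch is recreated on every rebuild, nothing on it needs to be stripped out before review — the topic branches themselves stay clean.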
Re: Allow "git shortlog" to group by committer information
On Fri, Dec 16, 2016 at 08:39:40AM -0500, Jeff King wrote:

> I'm OK with the approach your patch takes, but I think there were some
> unresolved issues:
>
>   - are we OK taking the short "-c" for this, or do we want
>     "--group-by=committer" or something like it?
>
>   - no tests; you can steal the general form from my [1]
>
>   - no documentation (can also be stolen from [1], though the syntax is
>     quite different)

Being moved by the holiday spirit, I wrote a patch to address the latter
two. ;) It obviously would need updating if we switch away from "-c",
but I think I am OK with the short "-c" (even if we add a more exotic
grouping option later, this can remain as a short synonym).

-- >8 --
Subject: [PATCH] shortlog: test and document --committer option

This puts the final touches on the feature added by fbfda15fb8
(shortlog: group by committer information, 2016-10-11).

Signed-off-by: Jeff King
---
 Documentation/git-shortlog.txt |  4 ++++
 t/t4201-shortlog.sh            | 13 +++++++++++++
 2 files changed, 17 insertions(+)

diff --git a/Documentation/git-shortlog.txt b/Documentation/git-shortlog.txt
index 31af7f2736..ee6c5476c1 100644
--- a/Documentation/git-shortlog.txt
+++ b/Documentation/git-shortlog.txt
@@ -47,6 +47,10 @@ OPTIONS
 	Each pretty-printed commit will be rewrapped before it is shown.
 
+-c::
+--committer::
+	Collect and show committer identities instead of authors.
+
 -w[<width>[,<indent1>[,<indent2>]]]::
 	Linewrap the output by wrapping each line at `width`.  The first
 	line of each entry is indented by `indent1` spaces, and the second
 	and subsequent lines are indented by `indent2` spaces.
diff --git a/t/t4201-shortlog.sh b/t/t4201-shortlog.sh
index ae08b57712..6c7c637481 100755
--- a/t/t4201-shortlog.sh
+++ b/t/t4201-shortlog.sh
@@ -190,4 +190,17 @@ test_expect_success 'shortlog with --output=<file>' '
 	test_line_count = 3 shortlog
 '
 
+test_expect_success 'shortlog --committer (internal)' '
+	cat >expect <<-\EOF &&
+	     3	C O Mitter
+	EOF
+	git shortlog -nsc HEAD >actual &&
+	test_cmp expect actual
+'
+
+test_expect_success 'shortlog --committer (external)' '
+	git log --format=full | git shortlog -nsc >actual &&
+	test_cmp expect actual
+'
+
 test_done
-- 
2.11.0.348.g960a0b554
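[Editorial aside: with any git that has the `-c`/`--committer` option, the behavior the patch above tests can be demonstrated in a throwaway repo where one person authors and another commits — identities below are made up, mirroring the test's "C O Mitter":]

```shell
# Two commits authored by "A U Thor" but committed by "C O Mitter";
# shortlog groups by author unless -c asks for committer grouping.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email committer@example.com
git config user.name "C O Mitter"
GIT_AUTHOR_NAME="A U Thor" GIT_AUTHOR_EMAIL=author@example.com \
	git commit -q --allow-empty -m one
GIT_AUTHOR_NAME="A U Thor" GIT_AUTHOR_EMAIL=author@example.com \
	git commit -q --allow-empty -m two

echo "--- grouped by author (default):"
git shortlog -ns HEAD
echo "--- grouped by committer (-c):"
git shortlog -nsc HEAD
```

The author grouping attributes both commits to "A U Thor", while `-c` attributes them to "C O Mitter" — the distinction Linus wanted for comparing trees against linux-next.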
Re: Allow "git shortlog" to group by committer information
On Thu, Dec 15, 2016 at 01:29:47PM -0800, Linus Torvalds wrote:

> On Tue, Oct 11, 2016 at 11:45 AM, Linus Torvalds wrote:
> > In some situations you may want to group the commits not by author,
> > but by committer instead.
> >
> > For example, when I just wanted to look up what I'm still missing from
> > linux-next in the current merge window [..]
>
> It's another merge window later for the kernel, and I just re-applied
> this patch to my git tree because I still want to know the committer
> information rather than the authorship information, and it still seems
> to be the simplest way to do that.
>
> Jeff had apparently done something similar as part of a bigger
> patch series, but I don't see that either. I really don't care very
> much how this is done, but I do find this very useful, I do things
> like

Sorry if I de-railed the earlier conversation. The shortlog
group-by-trailer work didn't seem useful enough for me to make it a
priority.

I'm OK with the approach your patch takes, but I think there were some
unresolved issues:

  - are we OK taking the short "-c" for this, or do we want
    "--group-by=committer" or something like it?

  - no tests; you can steal the general form from my [1]

  - no documentation (can also be stolen from [1], though the syntax is
    quite different)

-Peff

[1] http://public-inbox.org/git/20151229073515.gk8...@sigill.intra.peff.net/