Re: [PATCH v2 00/94] libify apply and use lib in am
On Thu, Jun 9, 2016 at 11:10 PM, Johannes Sixt wrote: > Am 12.05.2016 um 20:02 schrieb Christian Couder: >> >>> I'll also use it in production for a while, although I am not a git-am >>> consumer nor do I use git-rebase without -i, hence, my tests will >>> probably >>> only show that there is no bad fall-out. > > > Meanwhile, I have retrained my muscle memory to stop before typing "-i" > after "rebase" for an opportunity to consider whether bare rebase can be > used. > > What should I say? I am impressed. It's like 100 times faster than rebase -i > (on Windows). I'm now using it whenever I can, and more often than not I > plan my rebase workflow so that I can go ahead without -i. :-D Thanks for the report again! > Can't wait to test a re-roll on top of cc/apply-introduce-state! It should happen really soon now... -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 2/5] t1404: document function test_update_rejected
On 06/09/2016 06:05 PM, Junio C Hamano wrote:
> Michael Haggerty writes:
>
>> On 06/07/2016 06:57 PM, Johannes Sixt wrote:
>>> Am 07.06.2016 um 13:50 schrieb Michael Haggerty:
test_update_rejected () {
+	local prefix before pack create error &&
>>>
>>> Do we want to add more of unportable 'local' declarations?
>>
>> Sorry, I forgot that `local` is not in the POSIX standard.
>
> Regarding portability we say three things.
>
> * It is supported practically everywhere, and it is even in POSIX,
>   so let's use it.
>
> * Even this is not in POSIX, it is supported practically
>   everywhere, and it is too cumbersome to do things without using
>   it, so let's use it.
>
> * It is not available in some platforms we (collectively) still
>   care; it is not even in POSIX. Don't use it.
>
> I think "local" falls into the third one.
>
> : bash; ksh
> $ v=1
> $ f () { local v; v=2; echo f; }
> $ f
> ksh: local: not found [No such file or directory]
> f
> $ echo $v
> 2
> $ exit
> : bash;

OK, I didn't realize that we still support ksh. I will make the change.

Michael
Re: [PATCH 05/38] refs: create a base class "ref_store" for files_ref_store
On 06/09/2016 06:14 PM, Junio C Hamano wrote:
> Michael Haggerty writes:
> +static struct ref_store *main_ref_store = NULL;
+
+static struct ref_store *submodule_ref_stores = NULL;
>>>
>>> Let's let BSS take care of these initialization.
> [...]
> Lack of "= ..." is a clear-enough clue that the code wants these
> pointers initialized to NULL.
> [...]

OK. While I'm at it I'll add docstrings for these variables.

Michael
[PATCHv2] submodule--helper: initial clone learns retry logic
Each submodule that is attempted to be cloned, will be retried once in case of failure after all other submodules were cloned. This helps to mitigate ephemeral server failures and increases chances of a reliable clone of a repo with hundreds of submodules immensely. Signed-off-by: Stefan Beller --- This patch doesn't do pointer forbidden magic, also we don't abuse the prio queue. The API for running parallel commands isn't quite designed for this though as we need to pass around an int instead of a pointer, so we wrap the int into a int*, which will create a xmalloc/free for each submodule. This replaces the patch [1/2] in the series "[PATCH 0/2] Dealing with a lot of submodules" Thanks, Stefan builtin/submodule--helper.c | 66 - 1 file changed, 59 insertions(+), 7 deletions(-) diff --git a/builtin/submodule--helper.c b/builtin/submodule--helper.c index c7deb55..6424b40 100644 --- a/builtin/submodule--helper.c +++ b/builtin/submodule--helper.c @@ -590,10 +590,14 @@ struct submodule_update_clone { /* If we want to stop as fast as possible and return an error */ unsigned quickstop : 1; + + /* failed clones to be retried again */ + const struct cache_entry **failed_clones; + int failed_clones_nr, failed_clones_alloc; }; #define SUBMODULE_UPDATE_CLONE_INIT {0, MODULE_LIST_INIT, 0, \ SUBMODULE_UPDATE_STRATEGY_INIT, 0, -1, NULL, NULL, NULL, NULL, \ - STRING_LIST_INIT_DUP, 0} + STRING_LIST_INIT_DUP, 0, NULL, 0, 0} static void next_submodule_warn_missing(struct submodule_update_clone *suc, @@ -718,23 +722,47 @@ cleanup: static int update_clone_get_next_task(struct child_process *child, struct strbuf *err, void *suc_cb, - void **void_task_cb) + void **idx_task_cb) { struct submodule_update_clone *suc = suc_cb; + const struct cache_entry *ce; + int index; for (; suc->current < suc->list.nr; suc->current++) { - const struct cache_entry *ce = suc->list.entries[suc->current]; + ce = suc->list.entries[suc->current]; if (prepare_to_clone_next_submodule(ce, child, suc, err)) { + 
int *p = xmalloc(sizeof(*p)); + *p = suc->current; + *idx_task_cb = p; suc->current++; return 1; } } + + /* +* The loop above tried cloning each submodule once, now try the +* stragglers again, which we can imagine as an extension of the +* entry list. +*/ + index = suc->current - suc->list.nr; + if (index < suc->failed_clones_nr) { + int *p; + ce = suc->failed_clones[index]; + if (!prepare_to_clone_next_submodule(ce, child, suc, err)) + die("BUG: ce was a submodule before?"); + p = xmalloc(sizeof(*p)); + *p = suc->current; + *idx_task_cb = p; + suc->current ++; + return 1; + } + return 0; } static int update_clone_start_failure(struct strbuf *err, void *suc_cb, - void *void_task_cb) + void *idx_task_cb) { struct submodule_update_clone *suc = suc_cb; suc->quickstop = 1; @@ -744,15 +772,39 @@ static int update_clone_start_failure(struct strbuf *err, static int update_clone_task_finished(int result, struct strbuf *err, void *suc_cb, - void *void_task_cb) + void *idx_task_cb) { + const struct cache_entry *ce; struct submodule_update_clone *suc = suc_cb; + int *idxP = *(int**)idx_task_cb; + int idx = *idxP; + free(idxP); + if (!result) return 0; - suc->quickstop = 1; - return 1; + if (idx < suc->list.nr) { + ce = suc->list.entries[idx]; + strbuf_addf(err, _("Failed to clone '%s'. Retry scheduled"), + ce->name); + strbuf_addch(err, '\n'); + ALLOC_GROW(suc->failed_clones, + suc->failed_clones_nr + 1, + suc->failed_clones_alloc); + suc->failed_clones[suc->failed_clones_nr++] = ce; + return 0; + } else { + idx = suc->current - suc->list.nr; + ce = suc->failed_clones[idx]; + strbuf_addf(err, _("Failed to clone '%s' a second time, aborting"), + ce->name); + strbuf_addch(err, '\n');
Re: [PATCH] write_or_die: remove the unused write_or_whine() function
On Thu, Jun 09, 2016 at 11:52:22PM +0100, Ramsay Jones wrote:
>
> Signed-off-by: Ramsay Jones
> ---
>
> Hi Junio,
>
> Commit f0bca72d ("send-pack: use buffered I/O to talk to pack-objects",
> 08-06-2016) removed the last use of write_or_whine(). I had intended to
> include this 'commit citation' in the commit message, but until it reaches
> the next branch, I don't know how stable that information will be.
>
> Anyway, on reflection, the subject line says everything that needs to be
> said. However, you need to know which commit this one depends on. ;-)

IMHO, it's a good idea for removal patches to reflect on why the thing is appropriate to remove. Obviously there are no callers, but do we expect any to come back? I think probably not, because this function is doing very little that a regular if() does not besides obfuscating the error message (and if we ever did want something like it, we would probably prefer the function to come back with more "normal" return value semantics).

-Peff
Re: [PATCH 1/2] submodule--helper: initial clone learns retry logic
On Thu, Jun 9, 2016 at 1:40 PM, Junio C Hamano wrote: > Junio C Hamano writes: > >> Stefan Beller writes: >> >>> instead. But that is still unspecified, so we rather go with >>> >>> static int compare_ce(const void *one, const void *two, void *cb_data) >>> { >>> const struct cache_entry *ce_one = one, *ce_two = two; >>> return strcmp(ce_one->name, ce_two->name); >>> } >> >> As I said below, I do not think it is worth addressing by making the >> code's behaviour on real systems worse. As long as what you have as >> the key into priority queue is a pointer to cache_entry, you cannot >> make it better from that point of view. > > ... because having to strcmp() their names would be much more > expensive than the pointer comparison. > > However, I think you could use a pointer into a single array as > an element of prio_queue. I notice here: > > for (; suc->current < suc->list.nr; suc->current++) { > - const struct cache_entry *ce = > suc->list.entries[suc->current]; > + ce = suc->list.entries[suc->current]; > if (prepare_to_clone_next_submodule(ce, child, suc, err)) { > + *ce_task_cb = (struct cache_entry *) ce; > suc->current++; > return 1; > } > } > > that list.entries[] can serve as such an array. If you pass around > the pointer to its element instead, i.e. > > - ce = suc->list.entries[suc->current]; > + ceP = &suc->list.entries[suc->current]; > - if (prepare_to_clone_next_submodule(ce, child, suc, err)) { > + if (prepare_to_clone_next_submodule(*ceP, child, suc, err)) { > + *ce_task_cb = (struct cache_entry *) ce; > - *ce_task_cb = ceP; > ... > } > /* > * The loop above tried cloning each submodule once, > * now try the stragglers again. 
> */ > - ce = (struct cache_entry *) prio_queue_get(&suc->failed_queue); > + ceP = (struct cache_entry **) prio_queue_get(&suc->failed_queue); > > then the elements you are pushing into prio-queue would not be ce > (pointer to a cache entry), but would be a pointer of an array that > holds many pointers to cache entries, so it becomes kosher to > compare them for ordering. > > That would probably not add too much overhead at runtime; it may > have to involve a bit of code restructuring, so I do not know if it > is worth it, though. > I realized the patch above was bogus for another reason: In update_clone_task_finished we never distinguish between first-failing submodules and repeated fails, such that we may end up in an infinite loop. So I think the easiest way forward would be if we would pass around the index and then we can properly determine if we tried that already. (Though casting pointer to int is not possible, so we'll pass around a pointer to an int) -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] Adds *~ to the .gitignore
On Thu, Jun 09, 2016 at 03:48:12PM -0700, Junio C Hamano wrote:
> As I said, however, I could support a move to add some selected
> small number of common file extensions, as long as we have some
> (social) mechanism to avoid churning this file every time somebody
> new comes and complains their favourite editor or other tools are
> not supported.

Yeah, I don't mind it either, myself, provided we avoid the churn.

OTOH, wouldn't somebody who cared about this want it for all of their projects? I guess I just don't see the point in making this a git-specific thing.

-Peff
[PATCH] write_or_die: remove the unused write_or_whine() function
Signed-off-by: Ramsay Jones --- Hi Junio, Commit f0bca72d ("send-pack: use buffered I/O to talk to pack-objects", 08-06-2016) removed the last use of write_or_whine(). I had intended to include this 'commit citation' in the commit message, but until it reaches the next branch, I don't know how stable that information will be. Anyway, on reflection, the subject line says everything that needs to be said. However, you need to know which commit this one depends on. ;-) ATB, Ramsay Jones cache.h| 1 - write_or_die.c | 11 --- 2 files changed, 12 deletions(-) diff --git a/cache.h b/cache.h index 84d8812..258f154 100644 --- a/cache.h +++ b/cache.h @@ -1767,7 +1767,6 @@ extern int copy_file(const char *dst, const char *src, int mode); extern int copy_file_with_time(const char *dst, const char *src, int mode); extern void write_or_die(int fd, const void *buf, size_t count); -extern int write_or_whine(int fd, const void *buf, size_t count, const char *msg); extern int write_or_whine_pipe(int fd, const void *buf, size_t count, const char *msg); extern void fsync_or_die(int fd, const char *); diff --git a/write_or_die.c b/write_or_die.c index 49e80aa..9816879 100644 --- a/write_or_die.c +++ b/write_or_die.c @@ -94,14 +94,3 @@ int write_or_whine_pipe(int fd, const void *buf, size_t count, const char *msg) return 1; } - -int write_or_whine(int fd, const void *buf, size_t count, const char *msg) -{ - if (write_in_full(fd, buf, count) < 0) { - fprintf(stderr, "%s: write error (%s)\n", - msg, strerror(errno)); - return 0; - } - - return 1; -} -- 2.8.0 -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] Adds *~ to the .gitignore
Jeff King writes:
> On Thu, Jun 09, 2016 at 02:21:55PM -0700, Junio C Hamano wrote:
>
>> Lars Vogel writes:
>>
>> > This helps contributors (like me) using editors which automatically create
>> > ~ copies of the changed data
>> >
>> > Signed-off-by: Lars Vogel
>> > ---
>>
>> We deliberately left this out and kept it out of .gitignore for the
>> past 10 years. The justification was that use of Emacs is merely a
>> personal choice that is better served by .git/info/excludes; we do
>> not add .swp for vim users, either, for the same reason.
>
> I think a better choice than .git/info/excludes these days is
> ~/.config/git/ignore. Then it is associated with the user, not the
> project (which seems a better fit, since it is the user who is picking
> the editor; this should apply across all projects).

Yes, that is a better alternative you can use these days, of course. Thanks.

As I said, however, I could support a move to add some selected small number of common file extensions, as long as we have some (social) mechanism to avoid churning this file every time somebody new comes and complains their favourite editor or other tools are not supported.
Re: [PATCH] send-pack: use buffered I/O to talk to pack-objects
On 09/06/16 18:12, Jeff King wrote:
> On Thu, Jun 09, 2016 at 03:34:59PM +0100, Ramsay Jones wrote:
>
>> Just FYI, this patch removes the last use of write_or_whine() - should it
>> be removed?
>
> That sounds reasonable. Want to do a patch on top?

OK, will do.

ATB,
Ramsay Jones
Re: [PATCH v2 63/94] builtin/apply: make apply_all_patches() return -1 on error
On Wed, Jun 8, 2016 at 7:44 PM, Eric Sunshine wrote: > On Wed, Jun 8, 2016 at 12:37 PM, Christian Couder > wrote: >> On Mon, May 16, 2016 at 5:44 AM, Eric Sunshine >> wrote: >>> On Wed, May 11, 2016 at 9:17 AM, Christian Couder >>> wrote: if (state->update_index) { - if (write_locked_index(&the_index, state->lock_file, COMMIT_LOCK)) - die(_("Unable to write new index file")); + res = write_locked_index(&the_index, state->lock_file, COMMIT_LOCK); state->newfd = -1; >>> >>> Does write_locked_index() unconditionally close the file descriptor >>> even when an error occurs? If not, then isn't this potentially leaking >>> 'newfd'? >>> >>> (My very cursory read of write_locked_index() seems to reveal that the >>> file descriptor may indeed remain open upon index write failure.) >> >> You are right, it is leaking newfd if write_locked_index() fails. >> The solution to that is to call `rollback_lock_file(state->lock_file)` >> and the following patch was supposed to do that: >> >> [PATCH v2 82/94] apply: roll back index lock file in case of error >> >> but it would do that only if `state->newfd >= 0` so we should set >> state->newfd to -1 only if write_locked_index() succeeds. >> >> I will fix this. >> >> I am also going to add a comment to this patch saying that this patch >> needs a following patch to call rollback_lock_file(state->lock_file) >> in case of errors. >> >> Or if you prefer, I can squash the patch that call >> rollback_lock_file(state->lock_file) in case of errors into this >> patch. > > Squashing may indeed be preferable over leaving it in a "broken" state > until the next patch, though I haven't thought too hard about it. > Alternately, can the two patches somehow be swapped? I just squashed them for now as the result looks reasonable. -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 2/2] bisect--helper: `check_expected_revs` shell function in C
On Wed, Jun 8, 2016 at 11:24 AM, Pranit Bauva wrote:
> Reimplement the `check_expected_revs` shell function in C and add a
> `--check-expected-revs` subcommand to `git bisect--helper` to call it
> from git-bisect.sh .
> [...]
> Signed-off-by: Pranit Bauva
> ---
> diff --git a/builtin/bisect--helper.c b/builtin/bisect--helper.c
> index 06bc9b8..500efd5 100644
> @@ -174,13 +174,28 @@ static int is_expected_rev(const char *expected_hex)
> +static int check_expected_revs(const char **revs, int no)

In this codebase, it's more common to name this 'nr' rather than 'no'. 'revs_nr' would also be a good name.

> +{
> +	int i;
> +
> +	for (i = 0; i < no; i++) {
> +		if (!is_expected_rev(revs[i])) {
> +			remove_path(git_path_bisect_ancestors_ok());
> +			remove_path(git_path_bisect_expected_rev());
> +			return 0;
> +		}
> +	}
> +	return 0;
> +}
Re: [PATCH 05/38] refs: create a base class "ref_store" for files_ref_store
Am 07.06.2016 um 18:31 schrieb Junio C Hamano: > This is a tangent, but your series that ends at 4aa2c475 (grep: -W: > don't extend context to trailing empty lines, 2016-05-28) does not > seem to have much effect when viewing the change to refs.c this > patch makes (it modifies a function in an early part, and then adds > bunch of new functions at the end) with "git show -W". > > Thanks. Good catch, thanks! -- >8 -- Subject: [PATCH] xdiff: fix merging of appended hunk with -W When -W is given we search the lines between the end of the current context and the next change for a function line. If there is none then we merge those two hunks as they must be part of the same function. If the next change is an appended chunk we abort the search early in get_func_line(), however, because its line number is out of range. Fix that by searching from the end of the pre-image in that case instead. Reported-by: Junio C Hamano Signed-off-by: Rene Scharfe --- t/t4051-diff-function-context.sh | 24 +++- xdiff/xemit.c| 3 ++- 2 files changed, 25 insertions(+), 2 deletions(-) diff --git a/t/t4051-diff-function-context.sh b/t/t4051-diff-function-context.sh index b6bb04a..b79b877 100755 --- a/t/t4051-diff-function-context.sh +++ b/t/t4051-diff-function-context.sh @@ -64,7 +64,13 @@ test_expect_success 'setup' ' grep -v "Begin of second part" file.c.new && mv file.c.new file.c && - commit_and_tag long_common_tail file.c + commit_and_tag long_common_tail file.c && + + git checkout initial && + grep -v "delete me from hello" file.c.new && + mv file.c.new file.c && + cat "$dir/appended1.c" >>file.c && + commit_and_tag changed_hello_appended file.c ' check_diff changed_hello 'changed function' @@ -157,4 +163,20 @@ test_expect_success ' context does not include preceding empty lines' ' test "$(first_context_line next) { - long l = xche->next->i1; + long l = XDL_MIN(xche->next->i1, +xe->xdf1.nrec - 1); if (l <= e1 || get_func_line(xe, xecfg, NULL, l, e1) < 0) { xche = xche->next; -- 2.8.3 -- To 
Re: [PATCH 1/2] bisect--helper: `is_expected_rev` shell function in C
On Thu, Jun 9, 2016 at 5:33 PM, Eric Sunshine wrote:
> On Wed, Jun 8, 2016 at 11:24 AM, Pranit Bauva wrote:
>> +	strbuf_trim(&actual_hex);
>> +	return !strcmp(actual_hex.buf, expected_hex);
>
> Thus, it only ever gets to this point if the file exists but is empty,
> which is very unlikely to match 'expected_hex'. I could understand it
> if you checked the result of strbuf_read_file() with <0 or even <=0,
> but the current code doesn't make sense to me.

By the way, this is also leaking strbuf 'actual_hex'.
Re: [PATCH] Adds *~ to the .gitignore
On Thu, Jun 09, 2016 at 02:21:55PM -0700, Junio C Hamano wrote:
> Lars Vogel writes:
>
> > This helps contributors (like me) using editors which automatically create
> > ~ copies of the changed data
> >
> > Signed-off-by: Lars Vogel
> > ---
>
> We deliberately left this out and kept it out of .gitignore for the
> past 10 years. The justification was that use of Emacs is merely a
> personal choice that is better served by .git/info/excludes; we do
> not add .swp for vim users, either, for the same reason.

I think a better choice than .git/info/excludes these days is ~/.config/git/ignore. Then it is associated with the user, not the project (which seems a better fit, since it is the user who is picking the editor; this should apply across all projects).

-Peff
Re: [PATCH 1/2] bisect--helper: `is_expected_rev` shell function in C
On Wed, Jun 8, 2016 at 11:24 AM, Pranit Bauva wrote: > Reimplement `is_expected_rev` shell function in C. This will further be > called from `check_expected_revs` function. This is a quite small > function thus subcommand facility is redundant. This patch should be squashed into patch 2/2, as it is otherwise pointless without that patch, and merely adds dead code. > Signed-off-by: Pranit Bauva > --- > diff --git a/builtin/bisect--helper.c b/builtin/bisect--helper.c > @@ -160,6 +160,20 @@ int bisect_reset(const char *commit) > +static int is_expected_rev(const char *expected_hex) > +{ > + struct strbuf actual_hex = STRBUF_INIT; > + > + if (!file_exists(git_path_bisect_expected_rev())) > + return 0; Invoking file_exists() seems unnecessarily redundant when you can discern effectively the same by checking the return value of strbuf_read_file() below. I'd drop the file_exists() check altogether. > + if (!strbuf_read_file(&actual_hex, git_path_bisect_expected_rev(), 0)) > + return 0; What exactly is this trying to do? Considering that strbuf_read_file() returns -1 upon error, otherwise the number of bytes read, if I'm reading this correctly, is_expected_rev() returns false if strbuf_read_file() encounters an error (which is fine) but also when it successfully reads the file and its content length is non-zero (which is very odd). > + strbuf_trim(&actual_hex); > + return !strcmp(actual_hex.buf, expected_hex); Thus, it only ever gets to this point if the file exists but is empty, which is very unlikely to match 'expected_hex'. I could understand it if you checked the result of strbuf_read_file() with <0 or even <=0, but the current code doesn't make sense to me. Am I misunderstanding? > +} -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] Gitk Inotify support
On Thu, Jun 9, 2016 at 2:12 PM, Florian Schüller wrote: > Hi > Is this correct to send possible gitk patches here? or should I send > them to Paul Mackerras somehow? I cc'd Paul for you :) > > Anyway I just wanted that gitk automatically updates while I'm working > in my terminal Thanks for coming up with a patch. Welcome to the Git community! > > Are you interrested? > Also see the section that talks about signing off the patch and how to send the patch inline. :) > From 785ed6bc1b4a3b9019d3503b066afb2a025a2bc1 Mon Sep 17 00:00:00 2001 > From: =?UTF-8?q?Florian=20Sch=C3=BCller?= > Date: Thu, 9 Jun 2016 22:54:43 +0200 > Subject: [PATCH] first support for inotify Here you should describe your change, i.e. what problem is solved in this patch, what are the alternatives, why is this way the best? Also the sign off goes here. > --- > gitk | 59 +++ > 1 file changed, 59 insertions(+) > diff --git a/gitk b/gitk > index 805a1c7..6e2ead2 100755 > --- a/gitk > +++ b/gitk > @@ -8,6 +8,12 @@ exec wish "$0" -- "$@" > # either version 2, or (at your option) any later version. 
> > package require Tk > +try { > +package require inotify > +set we_have_inotify true > +} on error {em} { > +set we_have_inotify false > +} There are quite a few "have_*" variables, so I would drop the leading "we_" > > proc hasworktree {} { > return [expr {[exec git rev-parse --is-bare-repository] == "false" && > @@ -12363,6 +12369,59 @@ if {$i >= [llength $argv] && $revtreeargs ne {}} { > } > } > > +proc inotify_handler { fd } { > +set events [inotify_watch read] > +set watch_info [inotify_watch info] > +set update_view false > + > +foreach {event} $events { > +set current_watchid [dict get $event watchid] > +set flags [dict get $event flags] > +set event_filename [dict get $event filename] > + > +foreach {path watchid watch_flags} $watch_info { > +if {$watchid eq $current_watchid} { > +set watch_path $path > +} > +} > + > +set full_filename [file join $watch_path $event_filename] > + > +#remove does not seem to work > +#if {$flags eq "s"} { > +#puts "Remove watch $full_filename" > +#set wd [inotify_watch remove $full_filename] > +#} Why do we want to carry commented code? I'd drop that. > + > +if {$flags eq "nD"} { > +set wd [inotify_watch add $full_filename "nwds"] > +} > +if {![string match *.lock $event_filename]} { > +set update_view true > +} > +} > + > +#reloadcommits or updatecommits - depending on file and operation? 
> +if {$update_view} { > +updatecommits > +} > +} > + > +proc watch_recursive { dir } { > +inotify_watch add $dir "nwaCmMds" > + > +foreach i [glob -nocomplain -dir $dir *] { > +if {[file type $i] eq {directory}} { > +watch_recursive $i > +} > +} > +} > + > +if { $we_have_inotify } { > +set fd [inotify create "inotify_watch" "::inotify_handler"] > +watch_recursive $gitdir > +} > + > set nullid "" > set nullid2 "0001" > set nullfile "/dev/null" > -- > 2.7.4 > Thanks, Stefan -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] Adds *~ to the .gitignore
Lars Vogel writes:
> This helps contributors (like me) using editors which automatically create ~
> copies of the changed data
>
> Signed-off-by: Lars Vogel
> ---

We deliberately left this out and kept it out of .gitignore for the past 10 years. The justification was that use of Emacs is merely a personal choice that is better served by .git/info/excludes; we do not add .swp for vim users, either, for the same reason.

I personally do not care too deeply either way; I could even support a move to add some selected small file extensions, as long as we have some (social) mechanism to avoid churning this file every time somebody new comes and complains their favourite editor or other tools are not supported.

> .gitignore | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/.gitignore b/.gitignore
> index 05cb58a..13c7403 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -219,3 +219,4 @@
>  *.pdb
>  /Debug/
>  /Release/
> +*~
[PATCH] Gitk Inotify support
Hi Is this correct to send possible gitk patches here? or should I send them to Paul Mackerras somehow? Anyway I just wanted that gitk automatically updates while I'm working in my terminal Are you interrested? as described in "SubmittingPatches" this patch is based on git://ozlabs.org/~paulus/gitk 22a713c72df8b6799c59287c50cee44c4a6db51e The code should be robust to just don't autoupdate if https://sourceforge.net/projects/tcl-inotify/ is not installed Open points for now: - release watches for deleted directories seems to cause problems in tcl-inotify (so I don't) I'm not sure how often that happens in ".git/" - I only call "updatecommits" and I don't know if there is a usecase where I should be calling "reloadcommits" Regards Florian Schüller From 785ed6bc1b4a3b9019d3503b066afb2a025a2bc1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Florian=20Sch=C3=BCller?= Date: Thu, 9 Jun 2016 22:54:43 +0200 Subject: [PATCH] first support for inotify --- gitk | 59 +++ 1 file changed, 59 insertions(+) diff --git a/gitk b/gitk index 805a1c7..6e2ead2 100755 --- a/gitk +++ b/gitk @@ -8,6 +8,12 @@ exec wish "$0" -- "$@" # either version 2, or (at your option) any later version. 
package require Tk +try { +package require inotify +set we_have_inotify true +} on error {em} { +set we_have_inotify false +} proc hasworktree {} { return [expr {[exec git rev-parse --is-bare-repository] == "false" && @@ -12363,6 +12369,59 @@ if {$i >= [llength $argv] && $revtreeargs ne {}} { } } +proc inotify_handler { fd } { +set events [inotify_watch read] +set watch_info [inotify_watch info] +set update_view false + +foreach {event} $events { +set current_watchid [dict get $event watchid] +set flags [dict get $event flags] +set event_filename [dict get $event filename] + +foreach {path watchid watch_flags} $watch_info { +if {$watchid eq $current_watchid} { +set watch_path $path +} +} + +set full_filename [file join $watch_path $event_filename] + +#remove does not seem to work +#if {$flags eq "s"} { +#puts "Remove watch $full_filename" +#set wd [inotify_watch remove $full_filename] +#} + +if {$flags eq "nD"} { +set wd [inotify_watch add $full_filename "nwds"] +} +if {![string match *.lock $event_filename]} { +set update_view true +} +} + +#reloadcommits or updatecommits - depending on file and operation? +if {$update_view} { +updatecommits +} +} + +proc watch_recursive { dir } { +inotify_watch add $dir "nwaCmMds" + +foreach i [glob -nocomplain -dir $dir *] { +if {[file type $i] eq {directory}} { +watch_recursive $i +} +} +} + +if { $we_have_inotify } { +set fd [inotify create "inotify_watch" "::inotify_handler"] +watch_recursive $gitdir +} + set nullid "" set nullid2 "0001" set nullfile "/dev/null" -- 2.7.4
[PATCH] Adds *~ to the .gitignore
This helps contributors (like me) using editors which automatically create ~ copies of the changed data

Signed-off-by: Lars Vogel
---
 .gitignore | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.gitignore b/.gitignore
index 05cb58a..13c7403 100644
--- a/.gitignore
+++ b/.gitignore
@@ -219,3 +219,4 @@
 *.pdb
 /Debug/
 /Release/
+*~
--
2.8.1
Re: [PATCH v2 00/94] libify apply and use lib in am
On 12 May 2016 at 20:02, Christian Couder wrote: On Thu, May 12, 2016 at 7:06 PM, Johannes Sixt wrote: On 11 May 2016 at 15:16, Christian Couder wrote: This is a patch series about libifying `git apply` functionality, and using this libified functionality in `git am`, so that no 'git apply' process is spawned anymore. This makes `git am` significantly faster, so `git rebase`, when it uses the am backend, is also significantly faster. I'm including this in my build on Windows. It passes the test suite. Great! Thanks for the report! I'll also use it in production for a while, although I am not a git-am consumer nor do I use git-rebase without -i, hence, my tests will probably only show that there is no bad fall-out. Meanwhile, I have retrained my muscle memory to stop before typing "-i" after "rebase" for an opportunity to consider whether bare rebase can be used. What should I say? I am impressed. It's like 100 times faster than rebase -i (on Windows). I'm now using it whenever I can, and more often than not I plan my rebase workflow so that I can go ahead without -i. Can't wait to test a re-roll on top of cc/apply-introduce-state! -- Hannes
Re: [PATCH 1/2] submodule--helper: initial clone learns retry logic
Junio C Hamano writes: > Stefan Beller writes: > >> instead. But that is still unspecified, so we rather go with >> >> static int compare_ce(const void *one, const void *two, void *cb_data) >> { >> const struct cache_entry *ce_one = one, *ce_two = two; >> return strcmp(ce_one->name, ce_two->name); >> } > > As I said below, I do not think it is worth addressing by making the > code's behaviour on real systems worse. As long as what you have as > the key into priority queue is a pointer to cache_entry, you cannot > make it better from that point of view. ... because having to strcmp() their names would be much more expensive than the pointer comparison. However, I think you could use a pointer into a single array as an element of prio_queue. I notice here: for (; suc->current < suc->list.nr; suc->current++) { - const struct cache_entry *ce = suc->list.entries[suc->current]; + ce = suc->list.entries[suc->current]; if (prepare_to_clone_next_submodule(ce, child, suc, err)) { + *ce_task_cb = (struct cache_entry *) ce; suc->current++; return 1; } } that list.entries[] can serve as such an array. If you pass around the pointer to its element instead, i.e. - ce = suc->list.entries[suc->current]; + ceP = &suc->list.entries[suc->current]; - if (prepare_to_clone_next_submodule(ce, child, suc, err)) { + if (prepare_to_clone_next_submodule(*ceP, child, suc, err)) { + *ce_task_cb = (struct cache_entry *) ce; - *ce_task_cb = ceP; ... } /* * The loop above tried cloning each submodule once, * now try the stragglers again. */ - ce = (struct cache_entry *) prio_queue_get(&suc->failed_queue); + ceP = (struct cache_entry **) prio_queue_get(&suc->failed_queue); then the elements you are pushing into prio-queue would not be ce (pointer to a cache entry), but would be a pointer of an array that holds many pointers to cache entries, so it becomes kosher to compare them for ordering. 
That would probably not add too much overhead at runtime; it may have to involve a bit of code restructuring, so I do not know if it is worth it, though.
Re: [PATCH 1/2] submodule--helper: initial clone learns retry logic
Stefan Beller writes: > instead. But that is still unspecified, so we rather go with > > static int compare_ce(const void *one, const void *two, void *cb_data) > { > const struct cache_entry *ce_one = one, *ce_two = two; > return strcmp(ce_one->name, ce_two->name); > } As I said below, I do not think it is worth addressing by making the code's behaviour on real systems worse. As long as what you have as the key into priority queue is a pointer to cache_entry, you cannot make it better from that point of view. >> I think we have one or two other instances of such fishy pointer >> comparison already in the codebase, so it is not a show-stopper, but >> it would be better if this can be avoided and be replaced with >> something that I do not have to raise eyebrows at ;-) If you are using priority queue, it probably would make more sense to use the "priority" part properly---the paths whose ce happens to be lower in the address space are not inherently more important and deserve to be processed sooner. From "let's retry failed ones after finishing", I expected that the queue was either fifo (simplest), or a prio queue whose priority function would give lower priority for entries with least number of previous attempts (more involved but probably more fair). So a proper "fix" probably is either (1) not to use prio queue as you are not doing anything with priority anyway, or (2) use something other than ce itself as entries in prio queue, so that the priority function can be computation that is more meaningful than "happens to sit in the lower part of the address space". -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] Use "working tree" instead of "working directory" for git status
Matthieu Moy writes: > Lars Vogel writes: > >> --- a/wt-status.c >> +++ b/wt-status.c >> @@ -1554,7 +1554,7 @@ void wt_status_print(struct wt_status *s) >> else >> printf(_("nothing to commit\n")); >> } else >> -printf(_("nothing to commit, working directory >> clean\n")); >> +printf(_("nothing to commit, working tree clean\n")); > > Looks good to me, thanks. Thanks, all. Queued. -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 1/2] submodule--helper: initial clone learns retry logic
On Thu, Jun 9, 2016 at 12:19 PM, Junio C Hamano wrote: > Stefan Beller writes: > >> +static int compare_ce(const void *one, const void *two, void *cb_data) >> +{ >> + const struct cache_entry *ce_one = one, *ce_two = two; >> + return ce_two - ce_one; >> +} > > This would work in practice, but I suspect that this is not ANSI-C > kosher; as address comparison for ordering (not equality) is > undefined if two pointers are not pointing into the same array or > into the same struct's fields. Uh :( I thought about that for a while as I was about to send a 'simplification' patch for commit.c:compare_commits_by_{author,commit}_date There it made sense to keep it the way it is because sizeof(date) > sizeof(return value) At the time of writing it made sense for this to be a subtractions, but I think we want to make it if (ce_one < ce_two) return 1; else if (ce_one > ce_two) return -1; else return 0; instead. But that is still unspecified, so we rather go with static int compare_ce(const void *one, const void *two, void *cb_data) { const struct cache_entry *ce_one = one, *ce_two = two; return strcmp(ce_one->name, ce_two->name); } When reading the prio queue code correctly, we can be sure there is no NULL passed to either of one or two. > > I think we have one or two other instances of such fishy pointer > comparison already in the codebase, so it is not a show-stopper, but > it would be better if this can be avoided and be replaced with > something that I do not have to raise eyebrows at ;-) -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] Use "working tree" instead of "working directory" for git status
Lars Vogel writes: > --- a/wt-status.c > +++ b/wt-status.c > @@ -1554,7 +1554,7 @@ void wt_status_print(struct wt_status *s) > else > printf(_("nothing to commit\n")); > } else > - printf(_("nothing to commit, working directory > clean\n")); > + printf(_("nothing to commit, working tree clean\n")); Looks good to me, thanks. -- Matthieu Moy http://www-verimag.imag.fr/~moy/ -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCHv4] Documentation: triangular workflow
From: "Jordan DE GEA" Currently, triangular workflow can be configured, but there is no documentation about it. Documentation is useful to keep configuration possibilities up-to-date. A new subsection is created in gitworkflows. Signed-off-by: Michael Haggerty Signed-off-by: Matthieu Moy Signed-off-by: Jordan DE GEA --- Changes since v3: * Text reorganized to follow: - Introduction - Preparation - Staying up-to-date - Alternatively * Texts added to explain why we use commands in: - Preparation - Alternatively Documentation/gitworkflows.txt | 177 - 1 file changed, 176 insertions(+), 1 deletion(-) diff --git a/Documentation/gitworkflows.txt b/Documentation/gitworkflows.txt index f16c414..1ec1f63 100644 --- a/Documentation/gitworkflows.txt +++ b/Documentation/gitworkflows.txt @@ -463,6 +463,178 @@ if you get conflicts: `git am -3` will use index information contained in patches to figure out the merge base. See linkgit:git-am[1] for other options. +TRIANGULAR WORKFLOW +--- + +In some projects, you cannot push directly to the project but have to +suggest your commits to the maintainer (e.g. pull requests). +For these projects, it's common to use what's called a *triangular +workflow*: + +- Taking the last version of the project by fetching from **UPSTREAM** +- Writing modifications and pushing them to a fork (e.g. **PUBLISH**) s/a fork/an accessible fork/ Isn't one of the key points of the triangle that the user can't push to the upstream? (and the maintainer can't see the user's local repo) +- Opening a pull request s/request/request (formally or informally)/ +- If the maintainer accepts the changes, he merges them into the a Catch 22 here. The maintainer needs first to fetch, or otherwise read, the mods before she could accept the changes. Perhaps - The maintainer fetches, and reviews, the proposed changes; And if acceptable, merges them into the upstream. the exact timing of the fetch will depend on the review system (see below). + **UPSTREAM** repository. 
+ + + +-- - +| UPSTREAM | maintainer | PUBLISH | +| git/git |- - - - - - - -| me/remote| +-- ← - + \ / + \ / + fetch↓\ /↑push + \ / + \ / + - + | LOCAL | + - + + +Motivations +~~~ + +* Allows contributors to work with Git even though they do not have +write access to **UPSTREAM**. + +* Allows maintainers to receive code from contributors they may not +trust. s/trust/trust with push access/ It is the push/pull access issues that create the triangular workflow! + +* Code review is more efficient more efficient than ??? in environment ??? (e.g. a review process of web viewing/commenting on GitHub, driven by a limited patch/mail server capability) + +* Encourages clean history by using `rebase -i` and `push --force` to +your public fork before the code is merged + +Preparation +~~~ + +Cloning from **PUBLISH**, which is a fork of **UPSTREAM** or an empty +repository. I agree here. To clone the upstream, to which you have no push access (by definition), would leave the config badly mis-set for the basic user. It's better for the user to clone their publish fork (to which they have both read and write access). One issue may be the different expectations of how the fork is created (it's only one click on the GitHub..) It may be worth covering the remote rename option to set that origin to the short and sweet 'me', or 'my', as per the ascii diagram. + +== +`git clone ` +== + +Setting the behavior of push for the triangular workflow: + +=== +`git config push.default current` +=== + +Adding **UPSTREAM** remote: + +=== +`git remote add upstream ` +=== + +With the `remote add` above, using `git pull upstream` pulls there, +instead of saying its URL. In addition, `git pull` can pull from +**UPSTREAM** without argument. + +For each branch requiring a triangular workflow, set +`branch..remote` and `branch..pushRemote`. 
+ +Example with master as : +=== +* `git config branch.master.remote upstream` +* `git config branch.master.pushRemote origin` +=== + +Stay up-to-date +~~~ + +Retrieve updates from **UPSTREAM** with `git pull` and send them to +**PUBLISH** with `git push`. + +Checks +~~ + +.Display the push remote's name: +[caption="Recipe: "] + += +`git r
Re: [PATCH 1/2] submodule--helper: initial clone learns retry logic
Stefan Beller writes: > +static int compare_ce(const void *one, const void *two, void *cb_data) > +{ > + const struct cache_entry *ce_one = one, *ce_two = two; > + return ce_two - ce_one; > +} This would work in practice, but I suspect that this is not ANSI-C kosher; as address comparison for ordering (not equality) is undefined if two pointers are not pointing into the same array or into the same struct's fields. I think we have one or two other instances of such fishy pointer comparison already in the codebase, so it is not a show-stopper, but it would be better if this can be avoided and be replaced with something that I do not have to raise eyebrows at ;-) -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Trouble installing Git on El Capitan for Mac
Hello, I have had considerable trouble getting past the second step: the terminal command returns nothing but "version not found". It says it is in /usr/local/bin but I can't seem to get any further. I have looked for online help but nothing seems to work. Please advise. I am a total newbie, so I use Brackets and just downloaded Xcode. Thanks, Lori
[PATCH 0/2] Dealing with a lot of submodules
We have a test repo with about 500 submodules, and I noticed some problems when cloning this repo. This is a series that helps dealing with that repo in two ways: * When having 500 submodules, you have to perform 500 clones. This makes an ephemeral error 500 times more likely. To cope with such errors, just try again after all other clones have finished. * If a recursive clone fails for another reason (in our case a missing .gitmodules file), we want to keep going to finish the clone, instead of failing. Thanks, Stefan Stefan Beller (2): submodule--helper: initial clone learns retry logic submodule update: continue when a clone fails builtin/submodule--helper.c | 44 git-submodule.sh| 2 +- 2 files changed, 37 insertions(+), 9 deletions(-) -- 2.9.0.rc2.368.gdadd65c -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH 2/2] submodule update: continue when a clone fails
In 15ffb7cde48 (2011-06-13, submodule update: continue when a checkout fails), we reasoned it is ok to continue, when there is not much of a mental burden by the failure. If a recursive submodule fails to clone because a .gitmodules file is broken (e.g. : fatal: No url found for submodule path 'foo/bar' in .gitmodules Failed to recurse into submodule path 'foo', signaled by exit code 128), this is one of the cases where the user is not expected to have much of a burden afterwards, so we can also continue in that case. This means we only want to stop for updating submodules in case of rebase, merge or custom update command failures, which are all signaled with exit code 2. Signed-off-by: Stefan Beller --- git-submodule.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/git-submodule.sh b/git-submodule.sh index b39ac10..7c62b53 100755 --- a/git-submodule.sh +++ b/git-submodule.sh @@ -705,7 +705,7 @@ cmd_update() if test $res -gt 0 then die_msg="$(eval_gettext "Failed to recurse into submodule path '\$displaypath'")" - if test $res -eq 1 + if test $res -ne 2 then err="${err};$die_msg" continue -- 2.9.0.rc2.368.gdadd65c -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH 1/2] submodule--helper: initial clone learns retry logic
Each submodule whose clone fails will be retried once, after all other submodules have been cloned. This helps to mitigate ephemeral server failures and greatly increases the chances of reliably cloning a repo with hundreds of submodules. The choice of the priority queue is a bit misleading as we use it as a standard queue, but the priority queue offers the best API to use. Signed-off-by: Stefan Beller --- builtin/submodule--helper.c | 40 +--- 1 file changed, 33 insertions(+), 7 deletions(-) diff --git a/builtin/submodule--helper.c b/builtin/submodule--helper.c index c7deb55..efb6759 100644 --- a/builtin/submodule--helper.c +++ b/builtin/submodule--helper.c @@ -12,6 +12,7 @@ #include "remote.h" #include "refs.h" #include "connect.h" +#include "prio-queue.h" static char *get_default_remote(void) { @@ -568,6 +569,12 @@ static int module_clone(int argc, const char **argv, const char *prefix) return 0; } +static int compare_ce(const void *one, const void *two, void *cb_data) +{ + const struct cache_entry *ce_one = one, *ce_two = two; + return ce_two - ce_one; +} + struct submodule_update_clone { /* index into 'list', the list of submodules to look into for cloning */ int current; @@ -590,10 +597,13 @@ struct submodule_update_clone { /* If we want to stop as fast as possible and return an error */ unsigned quickstop : 1; + + /* Queue of failed clones, containing the cache_entry */ + struct prio_queue failed_queue; }; #define SUBMODULE_UPDATE_CLONE_INIT {0, MODULE_LIST_INIT, 0, \ SUBMODULE_UPDATE_STRATEGY_INIT, 0, -1, NULL, NULL, NULL, NULL, \ - STRING_LIST_INIT_DUP, 0, { compare_ce }} static void next_submodule_warn_missing(struct submodule_update_clone *suc, @@ -718,23 +728,36 @@ cleanup: static int update_clone_get_next_task(struct child_process *child, struct strbuf *err, void *suc_cb, - void **void_task_cb) + void **ce_task_cb) { struct submodule_update_clone *suc = suc_cb; + const struct cache_entry *ce; for 
(; suc->current < suc->list.nr; suc->current++) { - const struct cache_entry *ce = suc->list.entries[suc->current]; + ce = suc->list.entries[suc->current]; if (prepare_to_clone_next_submodule(ce, child, suc, err)) { + *ce_task_cb = (struct cache_entry *) ce; suc->current++; return 1; } } + /* +* The loop above tried cloning each submodule once, +* now try the stragglers again. +*/ + ce = (struct cache_entry *) prio_queue_get(&suc->failed_queue); + if (ce) { + if (!prepare_to_clone_next_submodule(ce, child, suc, err)) + die("BUG: ce was a submodule before?"); + return 1; + } + return 0; } static int update_clone_start_failure(struct strbuf *err, void *suc_cb, - void *void_task_cb) + void *ce_task_cb) { struct submodule_update_clone *suc = suc_cb; suc->quickstop = 1; @@ -744,15 +767,18 @@ static int update_clone_start_failure(struct strbuf *err, static int update_clone_task_finished(int result, struct strbuf *err, void *suc_cb, - void *void_task_cb) + void *ce_task_cb) { struct submodule_update_clone *suc = suc_cb; + struct cache_entry *ce = ce_task_cb; if (!result) return 0; - suc->quickstop = 1; - return 1; + strbuf_addf(err, _("Failed to clone '%s'. Scheduled for retry"), + ce->name); + prio_queue_put(&suc->failed_queue, ce); + return 0; } static int update_clone(int argc, const char **argv, const char *prefix) -- 2.9.0.rc2.368.gdadd65c -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] Use "working tree" instead of "working directory" for git status
Thanks updated patch on its way. On Thu, Jun 9, 2016 at 7:55 PM, Eric Sunshine wrote: > On Thu, Jun 9, 2016 at 1:46 PM, Lars Vogel wrote: >> Working directory can be easily confused with the current directory. >> In one of my patches I already updated the usage of working directory >> with working tree for the man page but I noticed that git status also >> uses this incorrect term. > > Missing sign-off. > >> --- >> po/bg.po| 2 +- >> po/ca.po| 2 +- >> po/de.po| 2 +- >> po/fr.po| 2 +- >> po/git.pot | 2 +- >> po/ko.po| 2 +- >> po/ru.po| 2 +- >> po/sv.po| 2 +- >> po/vi.po| 2 +- >> po/zh_CN.po | 2 +- >> wt-status.c | 2 +- > > Don't bother updating the .po files; that's the job of the > translators. Your patch should touch only wt-status.c. > >> 11 files changed, 11 insertions(+), 11 deletions(-) -- Eclipse Platform UI and e4 project co-lead CEO vogella GmbH Haindaalwisch 17a, 22395 Hamburg Amtsgericht Hamburg: HRB 127058 Geschäftsführer: Lars Vogel, Jennifer Nerlich de Vogel USt-IdNr.: DE284122352 Fax (040) 5247 6322, Email: lars.vo...@vogella.com, Web: http://www.vogella.com -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH] Use "working tree" instead of "working directory" for git status
Working directory can be easily confused with the current directory. In one of my patches I already updated the usage of working directory with working tree for the man page but I noticed that git status also uses this incorrect term. Signed-off-by: Lars Vogel --- wt-status.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/wt-status.c b/wt-status.c index 4f27bd6..4ce4e35 100644 --- a/wt-status.c +++ b/wt-status.c @@ -1554,7 +1554,7 @@ void wt_status_print(struct wt_status *s) else printf(_("nothing to commit\n")); } else - printf(_("nothing to commit, working directory clean\n")); + printf(_("nothing to commit, working tree clean\n")); } } -- 2.8.1 -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 5/5] attr.c: always pass check[] to collect_some_attrs()
Stefan Beller writes: >> static void collect_some_attrs(const char *path, int pathlen, >> - struct git_attr_check *check) >> + struct git_attr_check *check, int collect_all) > > I think it may be better to have a collect_all_attrs instead of a flag here, > as more than half the executed code differs (the parts conditioned on > collect_all are rather large in the function) It is understandable if it appears to anybody who does not know the future that way ;-) The plan is to remove that "if (!collect_all)" optimization block. Once the attr_stack and check_all_attr becomes per git_attr_check instance, there is no reason to keep record of all attribute definitions read from the .gitattribute files. The entries that are known not to matter (i.e. not listed in git_attr_check) can be discarded when they are read before it hits attr_stack. The "optimization" in that block is to optimize one special case: an attribute may be defined in the git_attr_hash[] dictionary, and the ones that appeared in various .gitattribute files are marked with a bit (an attribute may be in that dictionary only because the caller who asks about the attribute filled it in check->check[] array, and may not appear in the .gitattribute files prepare_attr_stack() read). If the check->check[] wants to know about an attribute that does not appear in any of the .gitattribute files, we can answer "that attribute is not set to anything" without actually executing any matches with path patterns. The "an attr_stack that is per git_attr_check only has to contain what matters and can discard irrelevant records" is a natural extension of the current optimization. Instead of the current "when 0 rules refer to attribute, do not do any of the 47 path matches defined in the attribute system, but even if one rule refers to it, do all 47 path matches to check", we would do "among 47 path patterns, only attempt to match patterns that affect the attributes that we are checking (which could be 0 rules). 
Re: [PATCH] Use "working tree" instead of "working directory" for git status
On Thu, Jun 9, 2016 at 1:46 PM, Lars Vogel wrote: > Working directory can be easily confused with the current directory. > In one of my patches I already updated the usage of working directory > with working tree for the man page but I noticed that git status also > uses this incorrect term. Missing sign-off. > --- > po/bg.po| 2 +- > po/ca.po| 2 +- > po/de.po| 2 +- > po/fr.po| 2 +- > po/git.pot | 2 +- > po/ko.po| 2 +- > po/ru.po| 2 +- > po/sv.po| 2 +- > po/vi.po| 2 +- > po/zh_CN.po | 2 +- > wt-status.c | 2 +- Don't bother updating the .po files; that's the job of the translators. Your patch should touch only wt-status.c. > 11 files changed, 11 insertions(+), 11 deletions(-) -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH] Use "working tree" instead of "working directory" for git status
Working directory can be easily confused with the current directory. In one of my patches I already updated the usage of working directory with working tree for the man page but I noticed that git status also uses this incorrect term. --- po/bg.po| 2 +- po/ca.po| 2 +- po/de.po| 2 +- po/fr.po| 2 +- po/git.pot | 2 +- po/ko.po| 2 +- po/ru.po| 2 +- po/sv.po| 2 +- po/vi.po| 2 +- po/zh_CN.po | 2 +- wt-status.c | 2 +- 11 files changed, 11 insertions(+), 11 deletions(-) diff --git a/po/bg.po b/po/bg.po index ac6f103..9390c81 100644 --- a/po/bg.po +++ b/po/bg.po @@ -2295,7 +2295,7 @@ msgstr "" #: wt-status.c:1535 #, c-format -msgid "nothing to commit, working directory clean\n" +msgid "nothing to commit, working tree clean\n" msgstr "липсват каквито и да е промени, работното дърво е чисто\n" #: wt-status.c:1642 diff --git a/po/ca.po b/po/ca.po index 46000d7..f54f137 100644 --- a/po/ca.po +++ b/po/ca.po @@ -2318,7 +2318,7 @@ msgstr "" #: wt-status.c:1535 #, c-format -msgid "nothing to commit, working directory clean\n" +msgid "nothing to commit, working tree clean\n" msgstr "no hi ha res a cometre, directori de treball net\n" #: wt-status.c:1642 diff --git a/po/de.po b/po/de.po index 0eadf34..a14fe92 100644 --- a/po/de.po +++ b/po/de.po @@ -2356,7 +2356,7 @@ msgstr "" #: wt-status.c:1535 #, c-format -msgid "nothing to commit, working directory clean\n" +msgid "nothing to commit, working tree clean\n" msgstr "nichts zu committen, Arbeitsverzeichnis unverändert\n" #: wt-status.c:1642 diff --git a/po/fr.po b/po/fr.po index 55ca387..627315c 100644 --- a/po/fr.po +++ b/po/fr.po @@ -2394,7 +2394,7 @@ msgstr "rien à valider (utilisez -u pour afficher les fichiers non suivis)\n" #: wt-status.c:1535 #, c-format -msgid "nothing to commit, working directory clean\n" +msgid "nothing to commit, working tree clean\n" msgstr "rien à valider, la copie de travail est propre\n" #: wt-status.c:1642 diff --git a/po/git.pot b/po/git.pot index 72ef798..2957091 100644 --- a/po/git.pot +++ 
b/po/git.pot @@ -2192,7 +2192,7 @@ msgstr "" #: wt-status.c:1535 #, c-format -msgid "nothing to commit, working directory clean\n" +msgid "nothing to commit, working tree clean\n" msgstr "" #: wt-status.c:1642 diff --git a/po/ko.po b/po/ko.po index 3ff3b9b..9520e04 100644 --- a/po/ko.po +++ b/po/ko.po @@ -2313,7 +2313,7 @@ msgstr "" #: wt-status.c:1535 #, c-format -msgid "nothing to commit, working directory clean\n" +msgid "nothing to commit, working tree clean\n" msgstr "커밋할 사항 없음, 작업 폴더 깨끗함\n" #: wt-status.c:1642 diff --git a/po/ru.po b/po/ru.po index c0a838b..6fe8c09 100644 --- a/po/ru.po +++ b/po/ru.po @@ -2233,7 +2233,7 @@ msgstr "нечего коммитить (используйте опцию «-u #: wt-status.c:1535 #, c-format -msgid "nothing to commit, working directory clean\n" +msgid "nothing to commit, working tree clean\n" msgstr "нечего коммитить, нет изменений в рабочем каталоге\n" #: wt-status.c:1642 diff --git a/po/sv.po b/po/sv.po index 32bcaba..801a816 100644 --- a/po/sv.po +++ b/po/sv.po @@ -2289,7 +2289,7 @@ msgstr "inget att checka in (använd -u för att visa ospårade filer)\n" #: wt-status.c:1535 #, c-format -msgid "nothing to commit, working directory clean\n" +msgid "nothing to commit, working tree clean\n" msgstr "inget att checka in, arbetskatalogen ren\n" #: wt-status.c:1642 diff --git a/po/vi.po b/po/vi.po index edd8e29..18fbdef 100644 --- a/po/vi.po +++ b/po/vi.po @@ -2306,7 +2306,7 @@ msgstr "" #: wt-status.c:1535 #, c-format -msgid "nothing to commit, working directory clean\n" +msgid "nothing to commit, working tree clean\n" msgstr "không có gì để chuyển giao, thư mục làm việc sạch sẽ\n" #: wt-status.c:1642 diff --git a/po/zh_CN.po b/po/zh_CN.po index a6b06f9..b76f2c0 100644 --- a/po/zh_CN.po +++ b/po/zh_CN.po @@ -2421,7 +2421,7 @@ msgstr "无文件要提交(使用 -u 显示未跟踪的文件)\n" # 译者:中文字符串拼接,可删除前导空格 #: wt-status.c:1535 #, c-format -msgid "nothing to commit, working directory clean\n" +msgid "nothing to commit, working tree clean\n" msgstr "无文件要提交,干净的工作区\n" # 译者:注意保持句尾空格 diff 
--git a/wt-status.c b/wt-status.c index 4f27bd6..4ce4e35 100644 --- a/wt-status.c +++ b/wt-status.c @@ -1554,7 +1554,7 @@ void wt_status_print(struct wt_status *s) else printf(_("nothing to commit\n")); } else - printf(_("nothing to commit, working directory clean\n")); + printf(_("nothing to commit, working tree clean\n")); } } -- 2.8.1 -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH v2 4/4] bundle v3: the beginning
On Thu, Jun 09, 2016 at 03:53:26PM +0700, Duy Nguyen wrote: > > Yes. To me, this was always about punting large blobs from the clones. > > Basically the way git-lfs and other tools work, but without munging your > > history permanently. > > Makes sense. If we keep all trees and commits locally, pack v4 still > has a chance to rise! Yeah, I don't think anything here precludes pack v4. > > I don't know if Christian had other cases in mind (like the many-files > > case, which I think is better served by something like narrow clones). > > Although for git-gc or git-fsck, I guess we need special support > anyway not to download large blobs unnecessarily. Not sure if git-gc > can already do that now. All I remember is git-repack can still be > used to make a repo independent from odb alternates. We probably want > to avoid that. git-fsck definitely should verify that large remote > blobs are good without downloading them (a new "fsck" command to > external odb, maybe). I think git-gc should work out of the box; you'd want to use "repack -l", which git-gc passes already. Fsck would be OK as long as you didn't actually load blobs. We have --connectivity-only for that, but of course it isn't the default. You'd probably want the default mode to fsck local blobs, but _not_ to fault in external blobs (but have an option to fault them all in if you really wanted to be sure you have everything). -Peff -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] send-pack: use buffered I/O to talk to pack-objects
Junio C Hamano writes: > Matthieu Moy writes: > >> Jeff King writes: >> >>> --- a/send-pack.c >>> +++ b/send-pack.c >>> @@ -36,18 +36,15 @@ int option_parse_push_signed(const struct option *opt, >>> die("bad %s argument: %s", opt->long_name, arg); >>> } >>> >>> -static int feed_object(const unsigned char *sha1, int fd, int negative) >>> +static void feed_object(const unsigned char *sha1, FILE *fh, int negative) >>> { >>> - char buf[42]; >>> - >>> if (negative && !has_sha1_file(sha1)) >>> - return 1; >>> + return; >> [...] >>> @@ -97,21 +95,22 @@ static int pack_objects(int fd, struct ref *refs, >>> struct sha1_array *extra, stru >> [...] >>> for (i = 0; i < extra->nr; i++) >>> - if (!feed_object(extra->sha1[i], po.in, 1)) >>> - break; >>> + feed_object(extra->sha1[i], po_in, 1); >> >> I may have missed the obvious, but doesn't this change the behavior when >> "negative && !has_sha1_file(sha1)" happens? I understand that you don't >> need write_or_whine anymore, but don't understand how you get rid of the >> "return 1" here. > > The original feed_object() has somewhat strange interface in that a > non-zero return from it is "Everything went alright!", and zero > means "Oops, something went wrong". Indeed, this is the "obvious" I've missed. So, indeed, the new "return" does the same thing as the old "return 1". -- Matthieu Moy http://www-verimag.imag.fr/~moy/ -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
What's cooking in git.git (Jun 2016, #03; Thu, 9)
Here are the topics that have been cooking. Commits prefixed with '-' are only in 'pu' (proposed updates) while commits prefixed with '+' are in 'next'. The ones marked with '.' do not appear in any of the integration branches, but I am still holding onto them. The tip of 'pu' is not expected to pass its own test suite, due to multiple broken topics queued near its tip. Earlier there was a talk about automatically bisecting breakage on 'pu', but tonight's pushout is a good illustration why that would not be very useful. What would be useful is (1) enumeration of commits on the first parent chain on master..pu, (2) test of the second parent of each of the commits found in (1), and (3) bisection between the broken commits in (2) and 'master', which would highlight broken topics and the commit(s) that might be responsible for each broken topic. You can find the changes described here in the integration branches of the repositories listed at http://git-blame.blogspot.com/p/git-public-repositories.html -- [New Topics] * jk/avoid-unbounded-alloca (2016-06-07) 1 commit - tree-diff: avoid alloca for large allocations Will merge to 'next'. * jk/send-pack-stdio (2016-06-08) 1 commit - send-pack: use buffered I/O to talk to pack-objects Code clean-up. Will merge to 'next'. * pb/commit-editmsg-path (2016-06-09) 1 commit - builtin/commit.c: memoize git-path for COMMIT_EDITMSG Code clean-up. Will merge to 'next'. -- [Stalled] * sb/bisect (2016-04-15) 22 commits - SQUASH??? 
- bisect: get back halfway shortcut - bisect: compute best bisection in compute_relevant_weights() - bisect: use a bottom-up traversal to find relevant weights - bisect: prepare for different algorithms based on find_all - bisect: rename count_distance() to compute_weight() - bisect: make total number of commits global - bisect: introduce distance_direction() - bisect: extract get_distance() function from code duplication - bisect: use commit instead of commit list as arguments when appropriate - bisect: replace clear_distance() by unique markers - bisect: use struct node_data array instead of int array - bisect: get rid of recursion in count_distance() - bisect: make algorithm behavior independent of DEBUG_BISECT - bisect: make bisect compile if DEBUG_BISECT is set - bisect: plug the biggest memory leak - bisect: add test for the bisect algorithm - t6030: generalize test to not rely on current implementation - t: use test_cmp_rev() where appropriate - t/test-lib-functions.sh: generalize test_cmp_rev - bisect: allow 'bisect run' if no good commit is known - bisect: write about `bisect next` in documentation The internal algorithm used in "git bisect" to find the next commit to check has been optimized greatly. Expecting a reroll. 
($gmane/291163) * nd/shallow-deepen (2016-04-13) 26 commits - fetch, upload-pack: --deepen=N extends shallow boundary by N commits - upload-pack: add get_reachable_list() - upload-pack: split check_unreachable() in two, prep for get_reachable_list() - t5500, t5539: tests for shallow depth excluding a ref - clone: define shallow clone boundary with --shallow-exclude - fetch: define shallow boundary with --shallow-exclude - upload-pack: support define shallow boundary by excluding revisions - refs: add expand_ref() - t5500, t5539: tests for shallow depth since a specific date - clone: define shallow clone boundary based on time with --shallow-since - fetch: define shallow boundary with --shallow-since - upload-pack: add deepen-since to cut shallow repos based on time - shallow.c: implement a generic shallow boundary finder based on rev-list - fetch-pack: use a separate flag for fetch in deepening mode - fetch-pack.c: mark strings for translating - fetch-pack: use a common function for verbose printing - fetch-pack: use skip_prefix() instead of starts_with() - upload-pack: move rev-list code out of check_non_tip() - upload-pack: tighten number parsing at "deepen" lines - upload-pack: use skip_prefix() instead of starts_with() - upload-pack: move "unshallow" sending code out of deepen() - upload-pack: remove unused variable "backup" - upload-pack: move "shallow" sending code out of deepen() - upload-pack: move shallow deepen code out of receive_needs() - transport-helper.c: refactor set_helper_option() - remote-curl.c: convert fetch_git() to use argv_array The existing "git fetch --depth=" option was hard to use correctly when making the history of an existing shallow clone deeper. A new option, "--deepen=", has been added to make this easier to use. "git clone" also learned "--shallow-since=" and "--shallow-exclude=" options to make it easier to specify "I am interested only in the recent N months worth of history" and "Give me only the history since that version". 
Needs review. * sg/completion-updates (2016-02-28) 21 commits . complet
Re: [PATCH] send-pack: use buffered I/O to talk to pack-objects
On Thu, Jun 09, 2016 at 09:40:42AM -0700, Junio C Hamano wrote: > >>for (i = 0; i < extra->nr; i++) > >> - if (!feed_object(extra->sha1[i], po.in, 1)) > >> - break; > >> + feed_object(extra->sha1[i], po_in, 1); > > > > I may have missed the obvious, but doesn't this change the behavior when > > "negative && !has_sha1_file(sha1)" happens? I understand that you don't > > need write_or_whine anymore, but don't understand how you get rid of the > > "return 1" here. > [...] > The original caller checks for errors to break out the feeding of > the process early, with things like: > > if (!feed_object(...)) > break; > > IOW, the caller would have continued when hitting that "return 1" > codepath. > > And the code with the patch, the caller continues unconditionally, > so there is no behaviour change, if I am reading the code correctly. Right, that's my reading as well (and IMHO another good motivation for the patch, if it makes this all less confusing). -Peff -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] send-pack: use buffered I/O to talk to pack-objects
On Thu, Jun 09, 2016 at 03:34:59PM +0100, Ramsay Jones wrote: > Just FYI, this patch removes the last use of write_or_whine() - should it > be removed? That sounds reasonable. Want to do a patch on top? -Peff
Re: [PATCH v2] builtin/commit.c: memoize git-path for COMMIT_EDITMSG
Pranit Bauva writes: > On Wed, May 25, 2016 at 12:49 AM, Pranit Bauva wrote: >> This is a follow-up commit for f932729c (memoize common git-path >> "constant" files, 10-Aug-2015). >> >> The many function calls to git_path() are replaced by >> git_path_commit_editmsg(), which eliminates the need to repeatedly >> compute the location of "COMMIT_EDITMSG". >> >> Mentored-by: Lars Schneider >> Mentored-by: Christian Couder >> Signed-off-by: Pranit Bauva >> --- >> Link for v1[1]. >> ... > > Anyone any comments? It seems that nobody saw anything that needs further polishing? Thanks for pinging. Will queue.
Re: [PATCHv4] Documentation: triangular workflow
Jordan DE GEA writes: > +Motivations > +~~~ > +* Allows contributors to work with Git even though they do not have > +write access to **UPSTREAM**. > > +* Allows maintainers to receive code from contributors they may not > +trust. I somehow don't think "even though" sits well here. You can work with Git all you want locally even if you do not have write access anywhere, but that is stating the obvious. Also, unless the only alternative you are contrasting "triangular" with is "a single central repository workflow", "may not trust" is not a unique advantage of "triangular". You can exchange patches and reviews with contributors just like you do. I think the more important thing to say instead of the above two is that this arrangement allows distributed workflow; you publish to your place at your own pace, they pull from you if and when they choose to, and they publish their result to their place at their own pace. > +* Code review is more efficient I have no idea what data you have to back this claim up. More efficient compared to what? If you are contrasting "triangular" with "a single central repository everybody pushes into", then one big advantage "triangular" can have around "code review" is that it allows reviews before the code hits the authoritative history of the project. If your project works with a single central repository everybody pushes into, things tend to run in reverse--after you push questionable stuff into the project history, somebody else has to spot problems, go in and fix it after the fact. > +* Encourages clean history by using `rebase -i` and `push --force` to > +your public fork before the code is merged This is a side-effect of "review before merge" mentioned earlier. > +Preparation > +~~~ > + > +Cloning from **PUBLISH**, which is a fork of **UPSTREAM** or an empty > +repository. 
> + > +== > +`git clone ` > +== > + > +Setting the behavior of push for the triangular workflow: > + > +=== > +`git config push.default current` > +=== > + > +Adding **UPSTREAM** remote: > + > +=== > +`git remote add upstream ` > +=== > + > +With the `remote add` above, using `git pull upstream` pulls there, > +instead of saying its URL. In addition, `git pull` can pull from > +**UPSTREAM** without argument. > + > +For each branch requiring a triangular workflow, set > +`branch..remote` and `branch..pushRemote`. > + > +Example with master as : > +=== > +* `git config branch.master.remote upstream` > +* `git config branch.master.pushRemote origin` > +=== Wouldn't it be much simpler to manage if you instead start from a clone of **UPSTREAM** and then fork **PUBLISH** and push your work out to the latter? You do not have to do per-branch configuration that way, no? Instead you would set default.pushRemote to publish just once, and no matter how many branches you create later, you do not have to do anything special. It smells like you are deliberately presenting a more cumbersome way, as a prelude to add even more configuration that is not necessary if you started in the right direction in the first place. -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] send-pack: use buffered I/O to talk to pack-objects
Matthieu Moy writes: > Jeff King writes: > >> --- a/send-pack.c >> +++ b/send-pack.c >> @@ -36,18 +36,15 @@ int option_parse_push_signed(const struct option *opt, >> die("bad %s argument: %s", opt->long_name, arg); >> } >> >> -static int feed_object(const unsigned char *sha1, int fd, int negative) >> +static void feed_object(const unsigned char *sha1, FILE *fh, int negative) >> { >> -char buf[42]; >> - >> if (negative && !has_sha1_file(sha1)) >> -return 1; >> +return; > [...] >> @@ -97,21 +95,22 @@ static int pack_objects(int fd, struct ref *refs, struct >> sha1_array *extra, stru > [...] >> for (i = 0; i < extra->nr; i++) >> -if (!feed_object(extra->sha1[i], po.in, 1)) >> -break; >> +feed_object(extra->sha1[i], po_in, 1); > > I may have missed the obvious, but doesn't this change the behavior when > "negative && !has_sha1_file(sha1)" happens? I understand that you don't > need write_or_whine anymore, but don't understand how you get rid of the > "return 1" here. The original feed_object() has somewhat strange interface in that a non-zero return from it is "Everything went alright!", and zero means "Oops, something went wrong". When the function actually writes things out, it calls write_or_whine(), whose return value also uses that (unusual) convention, which is the reason why it behaves that way. The "return 1" you noticed happens when the function is told not to send history behind one object, but that object does not exist in our repository. This is a perfectly normal condition and the function just ignores it and returns without feeding it to pack-objects. It reports to the caller "Everything went alright!". The original caller checks for errors to break out the feeding of the process early, with things like: if (!feed_object(...)) break; IOW, the caller would have continued when hitting that "return 1" codepath. And the code with the patch, the caller continues unconditionally, so there is no behaviour change, if I am reading the code correctly. 
Thanks for carefully reading the change and pointing out hard-to-read parts, as always. -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 2/4] Resurrect "diff-lib.c: adjust position of i-t-a entries in diff"
Hi Duy, On Mon, 6 Jun 2016, Nguyễn Thái Ngọc Duy wrote: > diff --git a/diff.h b/diff.h > index b497078..9e42556 100644 > --- a/diff.h > +++ b/diff.h > @@ -92,6 +92,7 @@ typedef struct strbuf *(*diff_prefix_fn_t)(struct > diff_options *opt, void *data) > #define DIFF_OPT_FUNCCONTEXT (1 << 29) > #define DIFF_OPT_PICKAXE_IGNORE_CASE (1 << 30) > #define DIFF_OPT_DEFAULT_FOLLOW_RENAMES (1U << 31) > +#define DIFF_OPT_SHIFT_INTENT_TO_ADD (1UL << 32) I am afraid that this is not enough. On Windows, sizeof(unsigned long) is 4 (and it is perfectly legal). That means that you would have to use at least (1ULL << 32), but then you also have to change the type of xdl_opts to uint64_t, which in turn means that you will have to change the definition of xpparam_t's "flags" field from unsigned long to uint64_t. Maybe this can be avoided? Ciao, Johannes
Re: [PATCH v2 12/13] dir_iterator: new API for iterating over a directory tree
Michael Haggerty writes: > Given that ref-iterators is in next but also that you will soon be > rewinding next, would you like these tweaks as a re-roll of the branch, > as fixup patches to squash on top, or as a new patch series to be > administered separately? A re-roll after the pre-release freeze was what I had in mind. Thanks.
Re: [PATCH 05/38] refs: create a base class "ref_store" for files_ref_store
Michael Haggerty writes: >>> + >>> +static struct ref_store *main_ref_store = NULL; >>> + >>> +static struct ref_store *submodule_ref_stores = NULL; >> >> Let's let BSS take care of these initialization. > I like the `= NULL` because it expresses "yes, I care about the initial > values of these variables", which to me is more valuable than saving a > few bytes in the size of the executable. But in fact, GCC does the > obvious optimization: it detects that these variables are being > initialized to zero, and puts them in BSS anyway. I'd be surprised if > other compilers don't do the same. So I'd prefer to leave this as-is, if > it's OK with you. Sorry; I shouldn't have phrased it with BSS, because the main point of the convention is not about "saving bytes by placing it in BSS". Lack of "= ..." is a clear-enough clue that the code wants these pointers initialized to NULL. And in this snippet: static struct foo the_foo; static struct foo *default_foo = &the_foo; static struct foo *other_foo; we want the presence of "=" as a clue that something special is going on only for default_foo (it is not initialized to the nul value for the type like usual static variables). If you wrote it this way, static struct foo *default_foo = &the_foo; static struct foo *other_foo = NULL; the reader has to say "something funny going on about other_foo?" when scanning up to "other_foo =" and then "... ah, no that is just wasted typing and reading, it is left to NULL after all". So, no, it is not OK with me. Otherwise I would have said "I wonder if ..." ;-)
Re: [PATCH 2/5] t1404: document function test_update_rejected
Michael Haggerty writes: > On 06/07/2016 06:57 PM, Johannes Sixt wrote: >> Am 07.06.2016 um 13:50 schrieb Michael Haggerty: >>> test_update_rejected () { >>> +local prefix before pack create error && >> >> Do we want to add more of unportable 'local' declarations? > > Sorry, I forgot that `local` is not in the POSIX standard. Regarding portability we say three things. * It is supported practically everywhere, and it is even in POSIX, so let's use it. * Even though this is not in POSIX, it is supported practically everywhere, and it is too cumbersome to do things without using it, so let's use it. * It is not available on some platforms we (collectively) still care about; it is not even in POSIX. Don't use it. I think "local" falls into the third one. : bash; ksh $ v=1 $ f () { local v; v=2; echo f; } $ f ksh: local: not found [No such file or directory] f $ echo $v 2 $ exit : bash;
Re: [PATCH 37/38] refs: make lock generic
On 06/07/2016 07:50 PM, Junio C Hamano wrote: > [...] Thanks for all your comments, Junio. I've pushed a revised version of the patch series to my GitHub fork [1] as branch "ref-store", but I'll wait for a little longer to see if there are more comments on the list before sending a re-roll. Michael [1] https://github.com/mhagger/git
Re: [PATCH 2/5] t1404: document function test_update_rejected
On 06/07/2016 06:57 PM, Johannes Sixt wrote: > Am 07.06.2016 um 13:50 schrieb Michael Haggerty: >> test_update_rejected () { >> +local prefix before pack create error && > > Do we want to add more of unportable 'local' declarations? Sorry, I forgot that `local` is not in the POSIX standard. According to [1] it's nevertheless very portable. Do you object to `local` based only on its absence from POSIX, or because it's known to cause problems in the real world? I ask because it is a convenient feature (and its lack is a common cause of mysterious errors), so it would be nice if we could allow its use. Michael [1] http://apenwarr.ca/log/?m=201102#28
Re: [PATCH 37/38] refs: make lock generic
On 06/07/2016 07:50 PM, Junio C Hamano wrote: > Michael Haggerty writes: > >> From: David Turner >> >> Instead of including a files-backend-specific struct ref_lock, change >> the generic ref_update struct to include a void pointer that backends >> can use for their own arbitrary data. > > Hmph. I don't know what your comment means. This step is a consequence of the design decision to stick with a single ref_transaction class that is used by all ref_stores, which was nice because it avoided the need to virtualize the functions ref_transaction_{begin,update,create,delete,verify}. >> @@ -3591,7 +3590,8 @@ static int lock_ref_for_update(struct files_ref_store >> *refs, >> for (parent_update = update->parent_update; >> parent_update; >> parent_update = parent_update->parent_update) { >> -oidcpy(&parent_update->lock->old_oid, &lock->old_oid); >> +struct ref_lock *parent_lock = >> parent_update->backend_data; >> +oidcpy(&parent_lock->old_oid, &lock->old_oid); >> } >> ... >> @@ -3745,7 +3745,7 @@ static int files_transaction_commit(struct ref_store >> *ref_store, >> /* Perform updates first so live commits remain referenced */ >> for (i = 0; i < transaction->nr; i++) { >> struct ref_update *update = updates[i]; >> -struct ref_lock *lock = update->lock; >> +struct ref_lock *lock = update->backend_data; > > OK, and files_* backend method downcasts it to what it wants, which > is good. Michael -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 34/38] refs: add method for delete_refs
On 06/07/2016 07:43 PM, Junio C Hamano wrote: > Michael Haggerty writes: > >> From: David Turner >> >> In the file-based backend, delete_refs has some special optimization >> to deal with packed refs. In other backends, we might be able to make >> ref deletion faster by putting all deletions into a single >> transaction. So we need a special backend function for this. >> >> Signed-off-by: David Turner >> Signed-off-by: Junio C Hamano >> Signed-off-by: Michael Haggerty >> --- >> I think that we could get away without this method if we make >> ref_transactions a bit smarter (for example, by supporting best-effort >> updates that can fail without causing the entire transaction to be >> aborted). But that would be a significant detour, so let's leave it >> here for now. > > Hmm, I actually was wondering why 'pack without' was there while > reading 24/38; IIUC, that is one of the "special optimization" that > is very much tied to the files backend, and it may make sense to > hide it behind delete_refs() as its implementation detail. > > Which is exactly what this step is about, so I am happy ;-) In the future I think this optimization will be even better hidden, namely within the transaction_commit method of a packed_refs_store class :-) > Unlike other changes like the ones that did read_raw_ref(), > verify_refname_available(), etc., the title does not follow the > pattern "refs: make X() virtual", even though as far as I can see > the intent is the same as others. Perhaps a minor retitle is in > order? OK. Michael -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 02/38] rename_ref_available(): add docstring
Michael Haggerty writes: > I propose to change the parameter names to `old_refname` and > `new_refname` and to change the docstring to > ... > Does that sound good? Yes.
Re: [PATCH 23/38] refs: make peel_ref() virtual
On 06/07/2016 07:36 PM, Junio C Hamano wrote: > Michael Haggerty writes: > >> For now it only supports the main reference store. > > Isn't this comment applicable to a handful of recent changes that > made other things virtual, too? Just wondering if I am missing > something very special with the peel_ref() thing to single it out. It is true that many (most?) virtual functions can currently only be used with the main reference store. That is likely to stay the case for reference-writing functions, because currently there isn't any code that wants to write references in another submodule. (Or maybe there is, but it's currently done by invoking a subprocess? I haven't actually looked.) But peel_ref() is a read-only function, and it seems more plausible to implement it for other submodules. It just seems like work that can be put off. And there's the kind of ugly current_ref_iter hack that might go away if callers are rewritten to use ref_iterators, in which case implementing this method for other submodules would become even easier. In other words, the main point of the comment is not "it only supports the main reference store", but rather "for now". Michael
Re: [PATCH 21/38] refs: make pack_refs() virtual
On 06/07/2016 07:35 PM, Junio C Hamano wrote: > Michael Haggerty writes: > >> Signed-off-by: Michael Haggerty >> --- >> refs.c | 7 +++ >> refs/files-backend.c | 6 -- >> refs/refs-internal.h | 4 >> 3 files changed, 15 insertions(+), 2 deletions(-) >> >> diff --git a/refs.c b/refs.c >> index 79ef443..f4f5f32 100644 >> --- a/refs.c >> +++ b/refs.c >> @@ -1418,6 +1418,13 @@ void assert_main_repository(struct ref_store *refs, >> const char *caller) >> } >> >> /* backend functions */ >> +int pack_refs(unsigned int flags) >> +{ >> +struct ref_store *refs = get_ref_store(NULL); >> + >> +return refs->be->pack_refs(refs, flags); >> +} >> + > > Makes me wonder what it even means to "pack_refs" in the context of > other possible backends (e.g. lmdb), but higher level API users > (e.g. "gc") needs something to call to give the backend "here is a > chance for you to optimize yourself" cue, so perhaps it is OK. My thinking exactly. This might end up being renamed to optimize_refs() and be made an optional part of the ref_store interface. But it seemed premature to worry about that. Michael -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 17/38] resolve_gitlink_ref(): avoid memory allocation in many cases
On 06/07/2016 07:29 PM, Junio C Hamano wrote: > Michael Haggerty writes: > >> If we don't have to strip trailing '/' from the submodule path, then >> don't allocate and copy the submodule name. > > Makes sense. > >> int resolve_gitlink_ref(const char *path, const char *refname, unsigned >> char *sha1) >> { >> -int len = strlen(path); >> -struct strbuf submodule = STRBUF_INIT; >> +size_t len, orig_len = strlen(path); >> struct ref_store *refs; >> int flags; >> >> -while (len && path[len-1] == '/') >> -len--; >> +for (len = orig_len; len && path[len - 1] == '/'; len--) >> +; >> + >> if (!len) >> return -1; >> >> -strbuf_add(&submodule, path, len); >> -refs = get_ref_store(submodule.buf); >> -strbuf_release(&submodule); >> +if (len == orig_len) { > > You can keep the original while (), without introducing orig_len, > and check if path[len] is NUL, which would probably be an end result > that is easier to read. OK, I'll change it. >> +refs = get_ref_store(path); >> +} else { >> +char *stripped = xmemdupz(path, len); >> + >> +refs = get_ref_store(stripped); >> +free(stripped); > > An alternative might be to add get_ref_store_counted() that takes > (path, len) instead of NUL-terminated path, which does not look too > bad looking at the state after applying all 38 patches. This slash-stripping code was introduced in 2007 (0ebde32c87) and it's not my priority to improve it as part of this patch series. Michael -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] send-pack: use buffered I/O to talk to pack-objects
On 09/06/16 13:10, Matthieu Moy wrote: > Jeff King writes: > >> --- a/send-pack.c >> +++ b/send-pack.c >> @@ -36,18 +36,15 @@ int option_parse_push_signed(const struct option *opt, >> die("bad %s argument: %s", opt->long_name, arg); >> } >> >> -static int feed_object(const unsigned char *sha1, int fd, int negative) >> +static void feed_object(const unsigned char *sha1, FILE *fh, int negative) >> { >> -char buf[42]; >> - >> if (negative && !has_sha1_file(sha1)) >> -return 1; >> +return; > [...] >> @@ -97,21 +95,22 @@ static int pack_objects(int fd, struct ref *refs, struct >> sha1_array *extra, stru > [...] >> for (i = 0; i < extra->nr; i++) >> -if (!feed_object(extra->sha1[i], po.in, 1)) >> -break; >> +feed_object(extra->sha1[i], po_in, 1); > > I may have missed the obvious, but doesn't this change the behavior when > "negative && !has_sha1_file(sha1)" happens? I understand that you don't > need write_or_whine anymore, but don't understand how you get rid of the > "return 1" here. Just FYI, this patch removes the last use of write_or_whine() - should it be removed? ATB, Ramsay Jones -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 05/38] refs: create a base class "ref_store" for files_ref_store
On 06/07/2016 07:03 PM, Junio C Hamano wrote: > Michael Haggerty writes: > >> We want ref_stores to be polymorphic, so invent a base class of which >> files_ref_store is a derived class. For now there is a one-to-one >> relationship between ref_stores and submodules. > > The mention of "submodules" made me go "Huh?" but thinking about it > for a second it is clear and obvious. We often peek into refs in a > different repository that is a submodule, and we do not mix them with > our own refs. Logically that is what a "ref store" is, and one-to-one > relationship is expected. > >> @@ -1284,3 +1288,90 @@ const char *resolve_ref_unsafe(const char *refname, >> int resolve_flags, >> errno = ELOOP; >> return NULL; >> } >> + >> +static struct ref_store *main_ref_store = NULL; >> + >> +static struct ref_store *submodule_ref_stores = NULL; > > Let's let BSS take care of these initialization. I like the `= NULL` because it expresses "yes, I care about the initial values of these variables", which to me is more valuable than saving a few bytes in the size of the executable. But in fact, GCC does the obvious optimization: it detects that these variables are being initialized to zero, and puts them in BSS anyway. I'd be surprised if other compilers don't do the same. So I'd prefer to leave this as-is, if it's OK with you. >> [...] >> /* >> + * Return a pointer to the reference store for the specified >> + * submodule. For the main repository, use submodule==NULL; such a >> + * call cannot fail. For a submodule, the submodule must exist and be >> + * a nonbare repository, otherwise return NULL. Verify that the >> + * reference store is a files_ref_store, and cast it to that type >> + * before returning it. >> */ >> +static struct files_ref_store *get_files_ref_store(const char *submodule, >> + const char *caller) >> { >> +struct ref_store *refs = get_ref_store(submodule); >> >> +return refs ? 
files_downcast(refs, 1, caller) : NULL; >> } > > This comment may be barking up a wrong tree, but the support for > class inheritance makes me wonder if I can do something along this > line: > > * implement a filesystem based ref_store, that is very similar to >what you have as files_ref_store, but > >- when storing a ref as a loose ref, or when checking if a ref > exists as a loose ref, quote them somehow (e.g. a branch > "Branch" is not stored as a file "$GIT_DIR/refs/heads/branch" > but by using "^" as a single shift marker, i.e. as > "$GIT_DIR/refs/heads/^branch"); > >- when enumerating what refs we have as loose refs, unquote what > readdir(3) gave us, e.g. seeing "$GIT_DIR/refs/heads/^branch", > I say "there is a branch whose name is 'Branch'". > > * as locking and other 'methods' are likely to be very similar to >your files_ref_store, make this new backend as a subclass of it, >i.e. create a new class but function pointers to many methods are >copied from files ref_store vtable. That is definitely something we could implement in the future. If I were going to design an extension like this, I think I'd go straight to something more expressive; maybe something like URL-encoding. For example, we might like a system that can record refnames with a Unicode encoding that is determined by us rather than by the filesystem. Depending on the details, it might be preferable to implement the new ref-store as an encoding layer that delegates to an old-fashioned files backend rather than using inheritance. In fact, you'd only want to do the translation for the loose storage part, so in the end the implementation might look something like overlay_ref_store( encoding_ref_store(loose_ref_store()), packed_ref_store())
But I'd rather cross that bridge when we come to it. Implementing an RTTI system in C is a bit ambitious and, I think, comes with runtime costs. Michael
Re: [PATCH 02/38] rename_ref_available(): add docstring
On 06/07/2016 08:10 PM, Junio C Hamano wrote: > Michael Haggerty writes: > >> From: David Turner >> >> Signed-off-by: David Turner >> Signed-off-by: Junio C Hamano >> Signed-off-by: Michael Haggerty >> --- >> refs/refs-internal.h | 5 + >> 1 file changed, 5 insertions(+) >> >> diff --git a/refs/refs-internal.h b/refs/refs-internal.h >> index efe5847..d8a2606 100644 >> --- a/refs/refs-internal.h >> +++ b/refs/refs-internal.h >> @@ -240,6 +240,11 @@ const char *find_descendant_ref(const char *dirname, >> const struct string_list *extras, >> const struct string_list *skip); >> >> +/* >> + * Check if the new name does not conflict with any existing refs >> + * (other than possibly the old ref). Return 0 if the ref can be >> + * renamed to the new name. >> + */ >> int rename_ref_available(const char *oldname, const char *newname); > > I do not quite understand the comment. Partly because it is unclear > what "conflict" means here, but I guess it means a D/F conflict that > is explained near verify_refname_available()? Yes. > A new name can conflict with an existing, possibly old ref? Are you > referring to this condition? > > You are trying to rename "refs/a/b" to "refs/a", which would > conflict, but as long as there is no other ref that share the > prefix "refs/a/", e.g. "refs/a/c", the new name "refs/a" is > available. That is correct. > I wonder if it is necessary to document that this function is not > meant to protect us from others racing with us. That is, when you > are renaming something to "refs/a", you call this function and it > finds, by calling verify_refname_available(), that the repository > has nothing that conflicts with the name and says "OK", but before > you actually do the rename, somebody may push from sideways to > create "refs/a/b", making the result of an earlier check with this > function invalid. > > Or is this to be called only under a lock that protects us from such > a race? It would be really awkward (maybe impossible?) 
to guard against all such races even using locks. One problem is that the lockfiles for the old and new refnames would themselves, in some cases, not be able to coexist due to D/F conflicts. Also, there is no way to prevent the creation of "any reference in `refs/a/*`" except by creating the reference `refs/a` (the presence of `refs/a.lock` is not enough), but by that time it is too late. In the end, this function mostly exists as a pre-check that `rename_ref()` is *likely* to succeed, so that the latter function is less likely to detect a problem after it has started moving things around. But `rename_ref()` is the final arbiter and is a bit more robust than this check. I also noticed that the docstring in this patch got the polarity of the return value backwards. I propose to change the parameter names to `old_refname` and `new_refname` and to change the docstring to /* * Check whether an attempt to rename old_refname to new_refname would * cause a D/F conflict with any existing reference (other than * possibly old_refname). If there would be a conflict, emit an error * message and return false; otherwise, return true. * * Note that this function is not safe against all races with other * processes (though rename_ref() catches some races that might get by * this check). */ Does that sound good? Michael
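The directory/file conflict being discussed is easy to reproduce from the command line (a throwaway demonstration, not part of the patch; the repo name and identity are arbitrary):

```shell
# A branch "a/b" stores its ref as the file refs/heads/a/b, so "a" must
# be a directory under refs/heads -- which then blocks a branch named "a".
git init -q df-demo && cd df-demo
git config user.email you@example.com && git config user.name You
git commit -q --allow-empty -m init
git branch a/b                 # creates refs/heads/a/b
git branch a || echo "refs/heads/a D/F-conflicts with refs/heads/a/b"
```

The same collision applies to `.lock` files, which is why, as noted above, locking alone cannot make the rename pre-check race-free.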
Re: [PATCH] refs: mark the file-local vtable symbols as static
On 06/08/2016 08:55 PM, Junio C Hamano wrote: > Ramsay Jones writes: > >> Signed-off-by: Ramsay Jones >> --- >> >> Hi Michael, Junio, >> >> I would normally ask you to squash this into the relevant patch when >> you next re-roll your 'mh/ref-iterators' branch, but this has already >> been merged into next. (I normally have a bit more time ... sorry!). >> >> Perhaps, after the release, when the next branch is re-wound/re-built, >> this could be squashed into your branch then. > > Yup, sounds like a plan. > >> >> Anyway, after applying this patch, the following symbols are still >> 'public but unused': >> >> > refs/files-backend.o - files_reflog_iterator_begin >> > refs/iterator.o - is_empty_ref_iterator >> > refs/iterator.o - merge_ref_iterator_begin >> >> These all look (potentially) useful for the implementation of >> additional 'ref-iter' types and look to be part of the _internal_ >> iterator API - so they should not be marked static. Can you just >> confirm my interpretation. > > I am not Michael, but FWIW I think that is sensible. I *am* Michael, and I think your changes look good. Thanks for your review. I've incorporated your changes with some other changes in a re-roll [1] in case Junio wants to use it in that form. Please note that two of the hunks that you are suggesting apply to "refs: introduce an iterator interface" and the third to "for_each_reflog(): reimplement using iterators". Michael [1] http://article.gmane.org/gmane.comp.version-control.git/296883
Re: [PATCH v2 12/13] dir_iterator: new API for iterating over a directory tree
On 06/09/2016 01:46 PM, Michael Haggerty wrote: > On 06/07/2016 07:13 AM, Eric Sunshine wrote: >> [...] > Thanks for all your great comments! Junio, Given that ref-iterators is in next but also that you will soon be rewinding next, would you like these tweaks as a re-roll of the branch, as fixup patches to squash on top, or as a new patch series to be administered separately? I don't think any of these changes are serious enough to warrant holding up ref-iterators, so it's mostly a question of what would be most convenient for you. For the case that you prefer a re-roll, I have pushed it to my fork on GitHub [1] as branch ref-iterators. This branch squashes in Ramsay's patch [2], addresses Eric's comments from this thread, and incorporates the whitespace fix that you made when incorporating v2. List of changes relative to v2: * In "refs: introduce an iterator interface": * Make two vtables private. * Add a whitespace fix from Junio's mh/ref-iterators. * In "dir_iterator: new API for iterating over a directory tree": * Add and improve some comments and docstrings. * Fix some formatting problems. * Use a for rather than a while loop in `dir_iterator_abort()` to improve the clarity. * Warn on failures of `opendir()`, `readdir()`, and `closedir()` that can't be explained as a simple race. * In "for_each_reflog(): reimplement using iterators", make the vtable private. If you'd prefer a separate series, let me know and I'll prepare that. Michael [1] https://github.com/mhagger/git [2] http://thread.gmane.org/gmane.comp.version-control.git/296801
[PATCHv4] Documentation: triangular workflow
Currently, triangular workflow can be configured, but there is no documentation about it. Documentation is useful to keep the configuration possibilities up-to-date. A new subsection is created in gitworkflows.

Signed-off-by: Michael Haggerty
Signed-off-by: Matthieu Moy
Signed-off-by: Jordan DE GEA
---
Changes since v3:
* Text reorganized to follow:
  - Introduction
  - Preparation
  - Staying up-to-date
  - Alternatively
* Texts added to explain why we use commands in:
  - Preparation
  - Alternatively

 Documentation/gitworkflows.txt | 177 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 176 insertions(+), 1 deletion(-)

diff --git a/Documentation/gitworkflows.txt b/Documentation/gitworkflows.txt
index f16c414..1ec1f63 100644
--- a/Documentation/gitworkflows.txt
+++ b/Documentation/gitworkflows.txt
@@ -463,6 +463,178 @@
 if you get conflicts: `git am -3` will use index information contained in
 patches to figure out the merge base. See linkgit:git-am[1] for other
 options.
+
+TRIANGULAR WORKFLOW
+-------------------
+
+In some projects, you cannot push directly to the project but have to
+suggest your commits to the maintainer (e.g. pull requests).
+For these projects, it's common to use what's called a *triangular
+workflow*:
+
+- Taking the last version of the project by fetching from **UPSTREAM**
+- Writing modifications and push them to a fork (e.g. **PUBLISH**)
+- Opening a pull request
+- If the maintainer accepts the changes, he merges them into the
+  **UPSTREAM** repository.
+
++------------------+   maintainer    +------------------+
+|     UPSTREAM     |- - - - - - - - -|     PUBLISH      |
+|     git/git      |        ←        |    me/remote     |
++------------------+                 +------------------+
+         \                                  /
+          \                                /
+    fetch↓ \                              / ↑push
+            \                            /
+             \                          /
+              +------------------------+
+              |         LOCAL          |
+              +------------------------+
+
+Motivations
+~~~~~~~~~~~
+
+* Allows contributors to work with Git even though they do not have
+write access to **UPSTREAM**.
+
+* Allows maintainers to receive code from contributors they may not
+trust.
+
+* Code review is more efficient
+
+* Encourages clean history by using `rebase -i` and `push --force` to
+your public fork before the code is merged
+
+Preparation
+~~~~~~~~~~~
+
+Cloning from **PUBLISH**, which is a fork of **UPSTREAM** or an empty
+repository.
+
+====
+`git clone <url>`
+====
+
+Setting the behavior of push for the triangular workflow:
+
+====
+`git config push.default current`
+====
+
+Adding **UPSTREAM** remote:
+
+====
+`git remote add upstream <url>`
+====
+
+With the `remote add` above, using `git pull upstream` pulls there,
+instead of saying its URL. In addition, `git pull` can pull from
+**UPSTREAM** without argument.
+
+For each branch requiring a triangular workflow, set
+`branch.<branch>.remote` and `branch.<branch>.pushRemote`.
+
+Example with master as <branch>:
+====
+* `git config branch.master.remote upstream`
+* `git config branch.master.pushRemote origin`
+====
+
+Stay up-to-date
+~~~~~~~~~~~~~~~
+
+Retrieve updates from **UPSTREAM** with `git pull` and send them to
+**PUBLISH** with `git push`.
+
+Checks
+~~~~~~
+
+.Display the push remote's name:
+[caption="Recipe: "]
+
+====
+`git rev-parse --abbrev-ref <branch>@{push}`
+====
+
+The shorthand `<branch>@{push}` denotes the remote-tracking branch
+where the <branch> would be pushed to. If no <branch> is specified
+(`@{push}`), <branch> takes the value of the current branch.
+
+.Display the fetch remote's name:
+[caption="Recipe: "]
+
+====
+`git rev-parse --abbrev-ref <branch>@{upstream}`
+====
+
+The shorthand `<branch>@{upstream}` substitutes the upstream name of
+the branch. If no <branch> is specified (`@{upstream}`), <branch>
+takes the value of the current branch.
+
+.Display commits added to the current branch since last push:
+[caption="Recipe: "]
+
+====
+`git log @{push}..`
+====
+
+.Display commits added to a specific branch since last push:
+[caption="Recipe: "]
+
+====
+`git log <branch>@{push}..`
+====
+
+Alternative configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.Cloning from **UPSTREAM**
+[caption="Recipe: "]
+
+In the preparation above, a clone from **PUBLISH** was used. Starting
+with a clone of **UPSTREAM** is possible too.
+
+Cloning from **UPSTREAM**
+
+====
+`git clone <url>`
+====
+
+Setting the behavior of push for the triangular workflow:
+
+====
+`git config push.default curren
Re: [PATCH] send-pack: use buffered I/O to talk to pack-objects
Jeff King writes: > --- a/send-pack.c > +++ b/send-pack.c > @@ -36,18 +36,15 @@ int option_parse_push_signed(const struct option *opt, > die("bad %s argument: %s", opt->long_name, arg); > } > > -static int feed_object(const unsigned char *sha1, int fd, int negative) > +static void feed_object(const unsigned char *sha1, FILE *fh, int negative) > { > - char buf[42]; > - > if (negative && !has_sha1_file(sha1)) > - return 1; > + return; [...] > @@ -97,21 +95,22 @@ static int pack_objects(int fd, struct ref *refs, struct > sha1_array *extra, stru [...] > for (i = 0; i < extra->nr; i++) > - if (!feed_object(extra->sha1[i], po.in, 1)) > - break; > + feed_object(extra->sha1[i], po_in, 1); I may have missed the obvious, but doesn't this change the behavior when "negative && !has_sha1_file(sha1)" happens? I understand that you don't need write_or_whine anymore, but don't understand how you get rid of the "return 1" here. -- Matthieu Moy http://www-verimag.imag.fr/~moy/
Re: [PATCH v4 6/6] send-email: add option --cite to quote the message body
Samuel GROOT writes: > If used with `in-reply-to=`, cite the message body of the given > email file. Otherwise, do nothing. It should at least warn when --in-reply-to= is not given (either no --in-reply-to or --in-reply-to=). I don't see any use-case where a user would want --cite on the command-line and not want --in-reply-to=. OTOH, it seems a plausible user-error, and the user would appreciate a message saying what's going on. > @@ -56,6 +57,8 @@ git send-email --dump-aliases > --subject * Email "Subject:" > --in-reply-to * Email "In-Reply-To:" > --in-reply-to* Populate header fields appropriately. > +--cite * Quote the message body in the cover if > + --compose is set, else in the first > patch. > --[no-]xmailer * Add "X-Mailer:" header (default). > --[no-]annotate* Review each patch that will be sent in > an editor. > --compose * Open an editor for introduction. Just wondering: would it make sense to activate --cite by default when --in-reply-to=file is used, and to allow --no-cite to disable this? This is something we can easily do now without breaking backward compatibility (--in-reply-to=file doesn't exist yet), but would be more painful to do later. > @@ -640,6 +644,7 @@ if (@files) { > usage(); > } > > +my $message_cited; Nit: I read "$message_cited" as "Boolean saying whether the message was cited". $cited_message would be clearer to me (but this is to be taken with a grain of salt as I'm not a native speaker), since the variable holds the content of the cited message. > +sub do_insert_cited_message { > + my $tmp_file = shift; > + my $original_file = shift; > + > + open my $c, "<", $original_file > + or die "Failed to open $original_file: " . $!; > + > + open my $c2, ">", $tmp_file > + or die "Failed to open $tmp_file: " . $!; > + > + # Insertion after the triple-dash > + while (<$c>) { > + print $c2 $_; > + last if (/^---$/); > + } > + print $c2 $message_cited; I would add a newline here to get a blank line between the message cited and the diffstat. 
I think non-ascii characters would deserve particular attention here too. For example, if the patch contains only ascii and the cited part contains UTF-8, does the generated patch have a proper Content-type: header? I can imagine worse, like a patch containing latin1 characters and a cited message with another 8-bit encoding. > +test_expect_success $PREREQ 'correct cited message with --in-reply-to and > --compose' ' > + grep "> On Sat, 12 Jun 2010 15:53:58 +0200, aut...@example.com wrote:" > msgtxt3 && I would prefer to have the full address including the real name here (A ) in this example. Actually, after a quick look at the code, I don't understand where the name has gone (what's shown here is extracted from the From: header). -- Matthieu Moy http://www-verimag.imag.fr/~moy/
Re: [RFC/PATCH] verify-tag: add --check-name flag
Junio C Hamano venit, vidit, dixit 08.06.2016 20:43: > Santiago Torres writes: > >> Sorry I'm trying to follow this. Would it be best to then have >> >> verify-tag [--check-name=tagname] (tag-ref|tag-name|sha1)? >> >> and >> >> tag -v [--check-name] (tag-name) >> >> Or would --format still work better? > > No matter what you do, don't call that "--check-name". It does not > tell the users what aspect of that thing is "checked". Avoid being > asked "Does this check tagname to make sure it does not have > non-ASCII letters?", in other words. > > As a longer-term direction, I think the best one is to make what > peff@ originally suggested, i.e. > > If we do go with the "print it out and let the caller do their own > checks" strategy, I think I'd prefer rather than "--show-tagname" to > just respect the "--format" we use for tag-listing. That would let you > do: > > git tag -v --format='%(tag)%n%(tagger)' > > or similar. In fact you can already do that with a separate step (modulo > %n, which we do not seem to understand here)... > > work. > > Thanks. > The extent of this thread shows (again) that assigning trust is an individual decision, and the base for that decision will be different in different projects. (While the gpg project keeps emphasizing that, it doesn't keep gpg users from thinking differently.) All that git can realistically do is: A) provide the answer that gpg gives (which depends on the configured trust model, available keys and the trustdb entries) - this is about trust in the validity of the signature B) provide easy access to all data that a project may potentially want to use for their manual or automatic decision - this is about trust in the meaning of the signature We can do better for B), and we will with a for-each-ref'ed "git tag" that knows format strings. Nota bene: A) really requires a tightened keyring and trustdb etc., something that is usually not found on the user side. 
If we want to do all this "in git" we would need a "plug-in" infrastructure/trust helper that receives the tag object and decides about the trust (taking project specifics into account) - that is not that much different from scripting it around git, and could probably be "monkey patched in" today by specifying a different "gpg". Michael
Re: [PATCH v2 12/13] dir_iterator: new API for iterating over a directory tree
On 06/07/2016 07:13 AM, Eric Sunshine wrote: > On Fri, Jun 3, 2016 at 8:33 AM, Michael Haggerty wrote: >> The iterator interface is modeled on that for references, though no >> vtable is necessary because there is (so far?) only one type of >> dir_iterator. >> [...] > > Some minor comments below, though probably nothing demanding a re-roll... > >> Signed-off-by: Michael Haggerty >> --- >> diff --git a/dir-iterator.c b/dir-iterator.c >> @@ -0,0 +1,185 @@ >> +int dir_iterator_advance(struct dir_iterator *dir_iterator) >> +{ >> + struct dir_iterator_int *iter = >> + (struct dir_iterator_int *)dir_iterator; >> + >> + while (1) { >> + struct dir_iterator_level *level = >> + &iter->levels[iter->levels_nr - 1]; >> + struct dirent *de; >> + >> + if (!level->initialized) { >> + if >> (!is_dir_sep(iter->base.path.buf[iter->base.path.len - 1])) > > I realize that dir_iterator_begin() ensures that we never get this far > if path has zero length, but that check is so (textually and perhaps > mentally) distant from this chunk of code that it's still a little bit > scary to see this *potential* access of base.path.buf[-1]. Perhaps an > assert(iter->base.path.len) just before this line would document that > this condition was taken into consideration? An assert() seems a little bit heavy for checking this invariant, which is easy to check by inspection. I find assert() more useful for checking state that results from more complicated algorithms, where inspection doesn't so quickly make it clear that the invariant is held. If you have no objections, I'll add a comment instead. >> + strbuf_addch(&iter->base.path, '/'); >> + level->prefix_len = iter->base.path.len; >> + >> + /* opendir() errors are handled below */ >> + level->dir = opendir(iter->base.path.buf); >> +[...] >> + if (!level->dir) { >> + /* >> +* This level is exhausted (or wasn't opened >> +* successfully); pop up a level. 
>> +*/ >> + if (--iter->levels_nr == 0) { >> + return dir_iterator_abort(dir_iterator); >> + } > > Style: unnecessary braces Thanks. >> + continue; >> + } >> + >> + /* >> +* Loop until we find an entry that we can give back >> +* to the caller: >> +*/ >> + while (1) { >> +[...] >> + strbuf_addstr(&iter->base.path, de->d_name); >> + if (lstat(iter->base.path.buf, &iter->base.st) < 0) { >> + if (errno != ENOENT) >> + warning("error reading path '%s': >> %s", >> + iter->base.path.buf, >> + strerror(errno)); > > Duy's warning_errno() series is already in 'master'... This patch series is based on a version of master from before Duy's series was merged. I think it's better to leave this the way it is, and fix it up in a separate patch sometime after it is merged, than to make this patch series depend on both mh/split-under-lock *and* a post-warning_errno() version of master. >> + continue; >> + } >> + >> + /* >> +* We have to set these each time because >> +* the path strbuf might have been realloc()ed. >> +*/ >> + > > Maybe drop the blank line between the comment and the code to which it > applies. Thanks. >> + iter->base.relative_path = >> + iter->base.path.buf + >> iter->levels[0].prefix_len; >> + iter->base.basename = >> + iter->base.path.buf + level->prefix_len; >> + level->dir_state = DIR_STATE_ITER; >> + >> + return ITER_OK; >> + } >> + } >> +} >> + >> +int dir_iterator_abort(struct dir_iterator *dir_iterator) >> +{ >> + struct dir_iterator_int *iter = (struct dir_iterator_int >> *)dir_iterator; >> + >> + while (iter->levels_nr) { >> + struct dir_iterator_level *level = >> + &iter->levels[--iter->levels_nr]; > > It's a bit difficult to locate the loop control embedded within this > (textually) noisy expression. I wonder if it would be easier to read > as a plain for-loop instead: > > for (; iter->levels_nr; iter->levels_nr--) { > >> + if (level->dir) >> + closedir(level->dir); Yes, that's a bit clearer. I'll change it. > The doc
Re: [PATCH v2] builtin/commit.c: memoize git-path for COMMIT_EDITMSG
Hey Jeff, On Thu, Jun 9, 2016 at 12:28 PM, Jeff King wrote: > On Tue, Jun 07, 2016 at 08:25:17PM +0530, Pranit Bauva wrote: >> On Wed, May 25, 2016 at 12:49 AM, Pranit Bauva >> wrote: >> > This is a follow up commit for f932729c (memoize common git-path >> > "constant" files, 10-Aug-2015). >> > >> > The many function calls to git_path() are replaced by >> > git_path_commit_editmsg() and which thus eliminates the need to repeatedly >> > compute the location of "COMMIT_EDITMSG". >> > >> > Mentored-by: Lars Schneider >> > Mentored-by: Christian Couder >> > Signed-off-by: Pranit Bauva >> > --- >> [...] >> Anyone any comments? > > Looks good to me. You may want to re-post without the quoting to make it > easier for the maintainer to pick up, and feel free to add my: > > Reviewed-by: Jeff King Sure I could re-post it. Thanks for your tag! :) Regards, Pranit Bauva
Re: [PATCH v4 5/6] send-email: --in-reply-to= populate header fields
Samuel GROOT writes: > diff --git a/Documentation/git-send-email.txt > b/Documentation/git-send-email.txt > index edbba3a..21776f0 100644 > --- a/Documentation/git-send-email.txt > +++ b/Documentation/git-send-email.txt > @@ -84,13 +84,16 @@ See the CONFIGURATION section for 'sendemail.multiEdit'. > the value of GIT_AUTHOR_IDENT, or GIT_COMMITTER_IDENT if that is not > set, as returned by "git var -l". > > ---in-reply-to=:: > +--in-reply-to=:: > Make the first mail (or all the mails with `--no-thread`) appear as a > - reply to the given Message-Id, which avoids breaking threads to > - provide a new patch series. > + reply to the given Message-Id (given directly by argument or via the > email > + file), which avoids breaking threads to provide a new patch series. > The second and subsequent emails will be sent as replies according to > the `--[no]-chain-reply-to` setting. > + > +Furthermore, if the argument is an email file, parse it and populate header > +fields appropriately for the reply. "populate header fields appropriately" would seem obscure to someone not having followed this converation. At least s/fields/To: and Cc: fields/. > --- a/git-send-email.perl > +++ b/git-send-email.perl > @@ -55,6 +55,7 @@ git send-email --dump-aliases > --[no-]bcc* Email Bcc: > --subject * Email "Subject:" > --in-reply-to * Email "In-Reply-To:" > +--in-reply-to* Populate header fields appropriately. Likewise. To avoid an overly long line, I'd write just "Populate To/Cc/In-reply-to". Probably should be . > +if ($initial_reply_to && -f $initial_reply_to) { > + my $error = validate_patch($initial_reply_to); > + die "fatal: $initial_reply_to: $error\nwarning: no patches were sent\n" > + if $error; > + > + open my $fh, "<", $initial_reply_to or die "can't open file > $initial_reply_to"; > + my $mail = Git::parse_email($fh); > + close $fh; > + > + my $initial_sender = $sender || $repoauthor || $repocommitter || ''; This is duplicated from the "if ($compose) { ... my $tpl_sender = ..." 
a bit later in the existing file. It would be better to get this "my $initial_sender = ..." out of your "if" and use $initial_sender directly later IMHO. Actually, $initial_sender does not seem to be a good variable name. It's not really "initial", right? > + my $prefix_re = ""; > + my $subject_re = $mail->{"subject"}[0]; > + if ($subject_re =~ /^[^Re:]/) { > + $prefix_re = "Re: "; > + } > + $initial_subject = $prefix_re . $subject_re; Why introduce $prefix_re? You can just my $subject = $mail->{"subject"}[0]; if (...) { $subject = "Re: " . $subject; } (preferably using sensible as '...' as noted by Junio ;-) ). In previous iterations of this series, you had issues with non-ascii characters in at least To: and Cc: fields (perhaps in the Subject field too?). Are they solved? I don't see any tests about it ... -- Matthieu Moy http://www-verimag.imag.fr/~moy/
Re: [PATCH v2 4/4] bundle v3: the beginning
On Wed, Jun 8, 2016 at 11:19 PM, Jeff King wrote: > On Wed, Jun 08, 2016 at 05:44:06PM +0700, Duy Nguyen wrote: > >> On Wed, Jun 8, 2016 at 3:23 AM, Jeff King wrote: >> > Because this "external odb" essentially acts as a git alternate, we >> > would hit it only when we couldn't find an object through regular means. >> > Git would then make the object available in the usual on-disk format >> > (probably as a loose object). >> >> This means git-gc (and all things that do rev-list --objects --all) >> would download at least all trees and commits? Or will we have special >> treatment for those commands? > > Yes. To me, this was always about punting large blobs from the clones. > Basically the way git-lfs and other tools work, but without munging your > history permanently. Makes sense. If we keep all trees and commits locally, pack v4 still has a chance to rise! > I don't know if Christian had other cases in mind (like the many-files > case, which I think is better served by something like narrow clones). Although for git-gc or git-fsck, I guess we need special support anyway not to download large blobs unnecessarily. Not sure if git-gc can already do that now. All I remember is git-repack can still be used to make a repo independent from odb alternates. We probably want to avoid that. git-fsck definitely should verify that large remote blobs are good without downloading them (a new "fsck" command to external odb, maybe). -- Duy
Re: [PATCH v3 1/6] t9001: non order-sensitive file comparison
On 06/09/16 07:51, Matthieu Moy wrote: > Junio C Hamano writes: > >> Tom Russello writes: >> >>> +# Check if two files have the same content, non-order sensitive >>> +test_cmp_noorder () { >>> + sort $1 >$1; >> >> Here is what I think happens: >> >> 0) the shell parses this command line; >> 1) the shell notices that the output has to go to $1; >> 2) the shell does open(2) of $1, >> 3) the shell spawns "sort" with a single argument, with its >>output connected to the file descriptor obtained in 2). >> >> Because "$1" becomes an empty file at 2), "sort" reads nothing and >> writes nothing. > > Tom: in case you're not convinced, try this: > > $ (echo b; echo a) >f > $ sort f > a > b > $ sort f >f > $ cat f > $ > > Also, useless ';' and missing double-quotes around "$1" to avoid bad > surprises if ever test_cmp_noorder is called on file names with special > characters. I was totally convinced by Junio's explanation, it is partially fixed in v4 and will be entirely fixed in v5. Thanks.
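For reference, the usual way around the self-redirection problem (a sketch of one possible fix, not necessarily what v5 does) is to sort into a temporary file and then rename it over the original:

```shell
printf 'b\na\n' >f
# "sort f >f" truncates f before sort can read it, so stage via a temp file:
sort "f" >"f.tmp" && mv "f.tmp" "f"
cat f
```

This also quotes the file name and drops the trailing semicolon, addressing Matthieu's other two remarks. Some systems offer `sort -o f f`, which handles the in-place case itself, but the temp-file form is portable shell.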