Re: Issue: "Could not access submodule" error when pulling recursively with Git 2.22.0

2019-10-23 Thread SZEDER Gábor
On Wed, Oct 23, 2019 at 07:22:12AM +, Aleksey Mikhaylov wrote:
> "Could not access submodule" error when pulling recursively with Git 2.22.0.
> This issue occurs when there is a submodule inside another submodule.

> Please use my simple test repository to reproduce the problem:
> https://github.com/agmikhailov/example2361.git
> 
> It is just an empty repository with one submodule 
> (https://github.com/agmikhailov/example2361M1.git) 
> and one submodule of submodule 
> (https://github.com/agmikhailov/example2361M2.git):
> 
> git clone https://github.com/agmikhailov/example2361.git
> cd example2361/
> git submodule init

According to the docs of 'git submodule init', it "initializes the
submodules recorded in the index".  Therefore, it can only initialize
the submodule in 'example2361M1', because at this point we have no
idea that there is a nested submodule in there.

  $ git submodule init
  Submodule 'example2361M1' (https://github.com/agmikhailov/example2361M1.git) 
registered for path 'example2361M1'

> git submodule update

This command clones 'example2361M1':

  $ git submodule update --recursive
  Cloning into '/tmp/example2361/example2361M1'...
  Submodule path 'example2361M1': checked out 
'6a9be24a1c0ebd44d91ae4dcf1fd62580b936540'

Only at this point can we finally see that there is a nested
submodule, and can initialize and clone it with:

  $ cd example2361M1
  $ git submodule init
  Submodule 'example2361M2' (https://github.com/agmikhailov/example2361M2.git) 
registered for path 'example2361M2'
  $ git submodule update
  Cloning into '/tmp/example2361/example2361M1/example2361M2'...
  Submodule path 'example2361M2': checked out 
'9ed39cf1fe0a8cf34e72d2e7ebff1ea9d4a63ac1'
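
As an aside, the two-step dance above should not be necessary: as far as I
can tell, a single recursive update run from the superproject initializes
and clones nested submodules in one go:

  $ cd /tmp/example2361
  $ git submodule update --init --recursive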

> git pull --recurse-submodules=true

And after that:

  $ cd ../..
  $ git pull --recurse-submodules=true
  Fetching submodule example2361M1
  Fetching submodule example2361M1/example2361M2
  Already up to date.


> ACTUAL RESULT
> 
> "git --recurse-submodules=true" command fails with message "Could not access 
> submodule":
> 
> $ git pull --recurse-submodules=true
> Fetching submodule example2361M1
> Could not access submodule 'example2361M2'
> 
> EXPECTED RESULT
> 
> All submodules are successfully updated by the "git pull --recurse-submodules=true" 
> command.
> 
> ADDITIONAL INFORMATION
> 
> Git version 2.20.1 does not have this problem. 
> So we had to downgrade Git to work with submodules.

The behavior was indeed different with v2.20.1, but that version
didn't show the behavior you expected.  When running your commands
with v2.20.1 I get:

  $ ~/src/git/bin-wrappers/git pull --recurse-submodules=true
  Fetching submodule example2361M1
  Already up to date.
  $ find example2361M1/example2361M2/
  example2361M1/example2361M2/

So while that 'git pull' didn't error out, it didn't even look at the
nested submodule, which is still uninitialized and empty.

FWIW, bisecting shows that the behavior changed with commit
a62387b3fc.  Unfortunately, the commit message doesn't seem very
helpful to me, but perhaps others with more experience with
submodules can make something out of it.
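
For the record, a sketch of how such a bisection can be driven (not
necessarily the exact invocation used here; the build tree and the test
repository paths are placeholders):

  $ cd ~/src/git
  $ git bisect start v2.22.0 v2.20.1
  $ git bisect run sh -c '
        make -j$(nproc) >/dev/null || exit 125
        cd /tmp/example2361 &&
        ~/src/git/bin-wrappers/git pull --recurse-submodules=true || exit 1
    '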

commit a62387b3fc9f5aeeb04a2db278121d33a9caafa7
Author: Stefan Beller 
Date:   Wed Nov 28 16:27:55 2018 -0800

submodule.c: fetch in submodules git directory instead of in worktree

Keep the properties introduced in 10f5c52656 (submodule: avoid
auto-discovery in prepare_submodule_repo_env(), 2016-09-01), by fixating
the git directory of the submodule.

Signed-off-by: Stefan Beller 
Signed-off-by: Junio C Hamano 

 submodule.c | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)




Issue: "Could not access submodule" error when pulling recursively with Git 2.22.0

2019-10-23 Thread Aleksey Mikhaylov
PROBLEM DESCRIPTION

"Could not access submodule" error when pulling recursively with Git 2.22.0.
This issue occurs when there is a submodule inside another submodule.

At first, we reported this problem for Git for Windows:
https://github.com/git-for-windows/git/issues/2361
But we received the answer that it was reproduced on Linux too.

Also, the same problem is described here: 
https://gitlab.com/gitlab-org/gitlab/issues/27287

STEPS TO REPRODUCE

Please use my simple test repository to reproduce the problem:
https://github.com/agmikhailov/example2361.git

It is just an empty repository with one submodule 
(https://github.com/agmikhailov/example2361M1.git) 
and one submodule of submodule 
(https://github.com/agmikhailov/example2361M2.git):

git clone https://github.com/agmikhailov/example2361.git
cd example2361/
git submodule init
git submodule update
git pull --recurse-submodules=true

ACTUAL RESULT

"git --recurse-submodules=true" command fails with message "Could not access 
submodule":

$ git pull --recurse-submodules=true
Fetching submodule example2361M1
Could not access submodule 'example2361M2'

EXPECTED RESULT

All submodules are successfully updated by the "git pull --recurse-submodules=true" 
command.

ADDITIONAL INFORMATION

Git version 2.20.1 does not have this problem. 
So we had to downgrade Git to work with submodules.

Best regards,
--
Aleksey



[PATCH 03/23] parse_tag_buffer(): treat NULL tag pointer as parse error

2019-10-17 Thread Jeff King
When parsing a tag, we may end up with a NULL "tagged" field when
there's a type mismatch (e.g., the tag claims to point to object X as a
commit, but we previously saw X as a blob in the same process), but we
do not otherwise indicate a parse failure to the caller.

This is similar to the case discussed in the previous commit, where a
commit could end up with a NULL tree field: while slightly convenient
for callers who want to overlook a corrupt object, it means that normal
callers have to explicitly deal with this case (rather than just relying
on the return code from parsing). And most don't, leading to segfault
fixes like the one in c77722b3ea (use get_tagged_oid(), 2019-09-05).

Let's address this more centrally, by returning an error code from the
parse itself, which most callers would already notice (adventurous
callers are free to ignore the error and continue looking at the
struct).

This also covers the case where the tag contains a nonsensical "type"
field (there we produced a user-visible error but still returned success
to the caller; now we'll produce a slightly better message and return an
error).

Signed-off-by: Jeff King 
---
It's possible that c77722b3ea fixed all of the segfaults here, though of
course new callers must remember to use it instead of accessing
tag->tagged directly. We could _possibly_ get rid of that check now, but
that assumes that all of the callers notice parse_tag() or parse_object()
failing. Having both gives us belt-and-suspenders protection against a
segfault.

 tag.c | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tag.c b/tag.c
index bfa0e31435..6a51efda8d 100644
--- a/tag.c
+++ b/tag.c
@@ -167,10 +167,15 @@ int parse_tag_buffer(struct repository *r, struct tag *item, const void *data, u
} else if (!strcmp(type, tag_type)) {
item->tagged = (struct object *)lookup_tag(r, &oid);
} else {
-   error("Unknown type %s", type);
-   item->tagged = NULL;
+   return error("unknown tag type '%s' in %s",
+type, oid_to_hex(&item->object.oid));
}
 
+   if (!item->tagged)
+   return error("bad tag pointer to %s in %s",
+oid_to_hex(&oid),
+oid_to_hex(&item->object.oid));
+
if (bufptr + 4 < tail && starts_with(bufptr, "tag "))
;   /* good */
else
-- 
2.23.0.1228.gee29b05929



[PATCH 02/23] parse_commit_buffer(): treat lookup_tree() failure as parse error

2019-10-17 Thread Jeff King
If parsing a commit yields a valid tree oid, but we've seen that same
oid as a non-tree in the same process, the resulting commit struct will
end up with a NULL tree pointer, but not otherwise report a parsing
failure.

That's perhaps convenient for callers which want to operate on even
partially corrupt commits (e.g., by still looking at the parents). But
it leaves a potential trap for most callers, who now have to manually
check for a NULL tree. Most do not, and it's likely that there are
possible segfaults lurking. I say "possible" because there are many
candidates, and I don't think it's worth following through on
reproducing them when we can just fix them all in one spot. And
certainly we _have_ seen real-world cases, such as the one fixed by
806278dead (commit-graph.c: handle corrupt/missing trees, 2019-09-05).

Note that we can't quite drop the check in the caller added by that
commit yet, as there's some subtlety with repeated parsings (which will
be addressed in a future commit).

Signed-off-by: Jeff King 
---
 commit.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/commit.c b/commit.c
index 6467c9e175..810419a168 100644
--- a/commit.c
+++ b/commit.c
@@ -401,6 +401,7 @@ int parse_commit_buffer(struct repository *r, struct commit *item, const void *b
struct commit_graft *graft;
const int tree_entry_len = the_hash_algo->hexsz + 5;
const int parent_entry_len = the_hash_algo->hexsz + 7;
+   struct tree *tree;
 
if (item->object.parsed)
return 0;
@@ -412,7 +413,12 @@ int parse_commit_buffer(struct repository *r, struct commit *item, const void *b
if (get_oid_hex(bufptr + 5, &parent) < 0)
return error("bad tree pointer in commit %s",
 oid_to_hex(&item->object.oid));
-   set_commit_tree(item, lookup_tree(r, &parent));
+   tree = lookup_tree(r, &parent);
+   if (!tree)
+   return error("bad tree pointer %s in commit %s",
+oid_to_hex(&parent),
+oid_to_hex(&item->object.oid));
+   set_commit_tree(item, tree);
bufptr += tree_entry_len + 1; /* "tree " + "hex sha1" + "\n" */
pptr = &item->parents;
 
-- 
2.23.0.1228.gee29b05929



[PATCH 01/23] parse_commit_buffer(): treat lookup_commit() failure as parse error

2019-10-17 Thread Jeff King
While parsing the parents of a commit, if we are able to parse an actual
oid but lookup_commit() fails on it (because we previously saw it in
this process as a different object type), we silently omit the parent
and do not report any error to the caller.

The caller has no way of knowing this happened, because even an empty
parent list is a valid parse result. As a result, it's possible to fool
our "rev-list" connectivity check into accepting a corrupted set of
objects.

There's a test for this case already in t6102, but unfortunately it has
a slight error. It creates a broken commit with a parent line pointing
to a blob, and then checks that rev-list notices the problem in two
cases:

  1. the "lone" case: we traverse the broken commit by itself (here we
 try to actually load the blob from disk and find out that it's not
 a commit)

  2. the "seen" case: we parse the blob earlier in the process, and then
 when calling lookup_commit() we realize immediately that it's not a
 commit

The "seen" variant for this test mistakenly parsed another commit
instead of the blob, meaning that we were actually just testing the
"lone" case again. Changing that reveals the breakage (and shows that
this fixes it).
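
For context, here is a rough sketch of how such a commit with a blob in its
parent field can be manufactured for testing (hedged: t6102 builds its
objects with its own helpers, so this only approximates what the test does):

  blob=$(echo not-a-commit | git hash-object -w --stdin) &&
  git cat-file commit HEAD |
  sed "s/^parent .*/parent $blob/" |
  git hash-object -t commit -w --stdin --literally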

Signed-off-by: Jeff King 
---
 commit.c   | 11 ---
 t/t6102-rev-list-unexpected-objects.sh |  2 +-
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/commit.c b/commit.c
index 40890ae7ce..6467c9e175 100644
--- a/commit.c
+++ b/commit.c
@@ -432,8 +432,11 @@ int parse_commit_buffer(struct repository *r, struct commit *item, const void *b
if (graft && (graft->nr_parent < 0 || grafts_replace_parents))
continue;
new_parent = lookup_commit(r, &parent);
-   if (new_parent)
-   pptr = &commit_list_insert(new_parent, pptr)->next;
+   if (!new_parent)
+   return error("bad parent %s in commit %s",
+oid_to_hex(&parent),
+oid_to_hex(&item->object.oid));
+   pptr = &commit_list_insert(new_parent, pptr)->next;
}
if (graft) {
int i;
@@ -442,7 +445,9 @@ int parse_commit_buffer(struct repository *r, struct commit *item, const void *b
new_parent = lookup_commit(r,
   &graft->parent[i]);
if (!new_parent)
-   continue;
+   return error("bad graft parent %s in commit %s",
+oid_to_hex(&graft->parent[i]),
+oid_to_hex(&item->object.oid));
pptr = &commit_list_insert(new_parent, pptr)->next;
}
}
diff --git a/t/t6102-rev-list-unexpected-objects.sh b/t/t6102-rev-list-unexpected-objects.sh
index 28611c978e..52cde097dd 100755
--- a/t/t6102-rev-list-unexpected-objects.sh
+++ b/t/t6102-rev-list-unexpected-objects.sh
@@ -52,7 +52,7 @@ test_expect_success 'traverse unexpected non-commit parent (lone)' '
 '
 
 test_expect_success 'traverse unexpected non-commit parent (seen)' '
-   test_must_fail git rev-list --objects $commit $broken_commit \
+   test_must_fail git rev-list --objects $blob $broken_commit \
>output 2>&1 &&
test_i18ngrep "not a commit" output
 '
-- 
2.23.0.1228.gee29b05929



Re: Why is "Sparse checkout leaves no entry on working directory" a fatal error?

2019-10-09 Thread Josef Wolf
Thanks for your comprehensive answer, Elijah!

On Di, Okt 08, 2019 at 09:14:27 -0700, Elijah Newren wrote:
> On Mon, Oct 7, 2019 at 11:52 PM Josef Wolf  wrote:
> >
> > I am trying to add a file to an arbitrary branch without touching the 
> > current
> > worktree with as little overhead as possible.
>
> I can see the logical progression that a sparse worktree would be less
> overhead than a full worktree, and that a bare worktree would be even
> better.  But you're still dealing with unnecessary overhead; you don't
> need a worktree at all to achieve what you want.

Well, the "as little overhead as possible" must be seen from the context. This
is a repository with roundabout 10GB and more than 6200 files. Shared-clones
with sparse-worktree is a BIG BIG BIG improvement here, which reduces
operations from "minutes" to "withhin a second".

> Traditionally, if you wanted to modify another branch without touching
> the worktree at all, you would use a combination of hash-object,
> mktree, commit-tree, and update-ref.  That would be a better solution
> to your problem than trying to approximate it with a sparse checkout.
> However, that's at least four invocations of git, and you said as
> little overhead as possible, so I'd recommend you use fast-import.

I have taken a look into the commands you are recommending, and indeed, they
seem to be better suited. Especially fast-import looks very
promising. Unfortunately, those commands require intimate knowledge about git
internals. I'll take a closer look into this!
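
For reference, a rough, untested sketch of that plumbing route, using a
throwaway index file so that neither the real index nor the worktree is
touched (it substitutes read-tree/update-index/write-tree for mktree, and
the branch and path names are placeholders):

    export GIT_INDEX_FILE="$(mktemp -u)"   # just a path; the file must not exist yet
    git read-tree refs/heads/some-branch
    blob=$(git hash-object -w /path/to/new/content)
    git update-index --add --cacheinfo 100644,$blob,path/of/file/which/I/want/to/create
    tree=$(git write-tree)
    new=$(git commit-tree -p refs/heads/some-branch -m "add file" $tree)
    git update-ref refs/heads/some-branch $new
    unset GIT_INDEX_FILE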

> It is very easy to mess up the sparse specifications.  We can't check
> for all errors, but a pretty obvious one is when people specify
> restrictions that match no path.

But why error out only when the working tree ends up completely empty? Why
not require that _every_ line in .git/info/sparse-checkout match at least
one file? That would make no sense, right?

> We can at least give an error in that case.

Why must this be a fatal error? Wouldn't a warning suffice?

> 2) When they've learned about sparse checkouts and just want to test
> what things are like in extreme situations.
[ ... ]
> For case 2, people learn that an empty working tree is too extreme a
> situation, one that we'll throw an error at, so they adjust and make
> sure to match at least one path.

When I am trying to learn how a new feature works, I tend to double-check the
results. If I expect contents but end up with an empty WT, I'd go and double
check the specifications I've given anyway.

I can easily understand that a warning might be desirable. But erroring out
and failing to honor the "-b" flag is a bit too drastic, IMHO.

> > Strange enough, I have some repositories at this machine where the
> > .git/info/sparse-checkout file contains only non-existing files and git
> > happily executes this "git checkout -b XXX remotes/origin/XXX" command 
> > leaving
> > the working tree totally empty all the time.
> 
> I can't reproduce:
> 
> $ git config core.sparseCheckout true
> $ echo 'non-existent' > .git/info/sparse-checkout
> $ git checkout -b next origin/next
> error: Sparse checkout leaves no entry on working directory
> 
> Can you provide any more details about how you get into this state?

Unfortunately not.

Honestly, I have tried to reproduce this for several days, since I was trying
to find a way to work around that fatal error. Unfortunately, I could not
find out how to reproduce it. The only thing I can say is: there are several
clones on my disk which happily switch branches with an empty WT and without
any complaints.

> > Someone understands this inconsistent behaviour?
> 
> No, but I wouldn't be surprised if there are bugs and edge cases.  I
> think I ran into one or two when testing things out, didn't take good
> enough notes, and had trouble reproducing later.  The sparse checkout
> stuff has been under-tested and not well documented, something Stolee
> is trying to fix right now.

Yes, I've seen the work on the ML. But I am only a user of git and have a very
hard time understanding what is going on there.

-- 
Josef Wolf
j...@raven.inka.de


Shallow-to-deep conversion error query

2019-10-08 Thread Francis M
Hello,

Quick question, hopefully. Have tried searching through my local archive
of the list but didn't find an answer, so fingers crossed this hasn't come
up frequently in the past...

So, a fresh shallow clone[1] of Linus' mainline repo followed by
"git fetch --unshallow" results in:

  fatal: error in object: unshallow 

Re-packing with "git repack -d" sorts it all out, but my question is: is this
a bug, or is it instead down to something like the server-side version of
git that created the pack being outdated or similar?

Cheers,
Francis

[1]

# git clone --shallow-since='Tue  1 Jan 00:00:00 BST 2019'
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
torvalds/linux-mainline
[...]
Checking out files: 100% (65698/65698), done.
# git fetch --unshallow
fatal: error in object: unshallow 00f8ccd0c95f4e604297057a5bccec86c0903d14


Re: Why is "Sparse checkout leaves no entry on working directory" a fatal error?

2019-10-08 Thread Elijah Newren
On Mon, Oct 7, 2019 at 11:52 PM Josef Wolf  wrote:
>
> Hello,
>
> This is a repost, since the original message seems to have been lost somehow.
>
>
> I am trying to add a file to an arbitrary branch without touching the current
> worktree with as little overhead as possible. This should work no matter in
> which state the current worktree is in. And it should not touch the current WT
> in any way.
>
> For this, the sparse-checkout feature in conjunction with the "shared
> repository" feature seems to be perfect.

I can see the logical progression that a sparse worktree would be less
overhead than a full worktree, and that a bare worktree would be even
better.  But you're still dealing with unnecessary overhead; you don't
need a worktree at all to achieve what you want.

Traditionally, if you wanted to modify another branch without touching
the worktree at all, you would use a combination of hash-object,
mktree, commit-tree, and update-ref.  That would be a better solution
to your problem than trying to approximate it with a sparse checkout.
However, that's at least four invocations of git, and you said as
little overhead as possible, so I'd recommend you use fast-import.
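
(Purely as an illustration, not a tested recipe: a single fast-import stream
along the following lines should graft one new file onto an existing branch
without touching any worktree or index.  The `^0` suffix on `from` and the
--date-format=now option are my reading of the git-fast-import
documentation; branch and path names are placeholders.)

git fast-import --date-format=now <<\EOF
commit refs/heads/some-branch
committer A U Thor <author@example.com> now
data <<MSG
add a file without touching any worktree
MSG
from refs/heads/some-branch^0
M 100644 inline path/of/file/which/I/want/to/create
data <<DATA
huhuh
DATA
EOF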

But, since you asked some other questions about sparse checkouts...

> The basic idea goes like this:
>
>
>TMP=`mktemp -d /var/tmp/test-X`
>GD=$TMP/git
>WD=$TMP/wd
>
>git --work-tree $WD --git-dir $GD clone -qns -n . $GD
>git --work-tree $WD --git-dir $GD config core.sparsecheckout true
>echo path/of/file/which/I/want/to/create >>$GD/info/sparse-checkout
>
>git --work-tree $WD --git-dir $GD checkout -b some-branch 
> remotes/origin/some-branch  # !!!
>
>( cd $WD
>  mkdir -p path/of/file/which/I/want/to
>  echo huhuh >path/of/file/which/I/want/to/create
>  git --work-tree $WD --git-dir $GD add path/of/file/which/I/want/to/create
>  git --work-tree $WD --git-dir $GD commit
>  git --work-tree $WD --git-dir $GD push
>)
>
>rm -rf $TMP
>
>
> Unfortunately, the marked command errors out with
>
>"error: Sparse checkout leaves no entry on working directory"
>
> and won't create/switch to the branch that is to be modified.
>
> Why is this an error? Since there are no matching files, an empty worktree
> is EXACTLY what I wanted. Why will the "git checkout -b" command error out?

It is very easy to mess up the sparse specifications.  We can't check
for all errors, but a pretty obvious one is when people specify
restrictions that match no path.  We can at least give an error in
that case.  There are times when folks might intentionally specify
paths that don't match anything, but they are quite rare.  The ones I
can think of:

1) When they are doing something exotic where they are just trying to
approximate something else rather than actually use sparse checkouts as
intended.
2) When they've learned about sparse checkouts and just want to test
what things are like in extreme situations.

Case 1 consists of stuff like what you are doing here, for which there
are better solutions, or what I did when attempting to simulate the
performance issues Microsoft folks were having with a really large
repo, knowing they used sparse checkouts as part of VFS-for-git (I
created a very large index with no entries checked out at first,
ran into these errors, and then added one file to the index and
had a sparse specification match it).

For case 2, people learn that an empty working tree is too extreme a
situation, one that we'll throw an error at, so they adjust and make
sure to match at least one path.

> Strange enough, I have some repositories at this machine where the
> .git/info/sparse-checkout file contains only non-existing files and git
> happily executes this "git checkout -b XXX remotes/origin/XXX" command leaving
> the working tree totally empty all the time.

I can't reproduce:

$ git config core.sparseCheckout true
$ echo 'non-existent' > .git/info/sparse-checkout
$ git checkout -b next origin/next
error: Sparse checkout leaves no entry on working directory

Can you provide any more details about how you get into this state?

> Someone understands this inconsistent behaviour?

No, but I wouldn't be surprised if there are bugs and edge cases.  I
think I ran into one or two when testing things out, didn't take good
enough notes, and had trouble reproducing later.  The sparse checkout
stuff has been under-tested and not well documented, something Stolee
is trying to fix right now.


Why is "Sparse checkout leaves no entry on working directory" a fatal error?

2019-10-07 Thread Josef Wolf
Hello,

This is a repost, since the original message seems to have been lost somehow.


I am trying to add a file to an arbitrary branch without touching the current
worktree with as little overhead as possible. This should work no matter
which state the current worktree is in. And it should not touch the current WT
in any way.

For this, the sparse-checkout feature in conjunction with the "shared
repository" feature seems to be perfect.

The basic idea goes like this:


   TMP=`mktemp -d /var/tmp/test-X`
   GD=$TMP/git
   WD=$TMP/wd
   
   git --work-tree $WD --git-dir $GD clone -qns -n . $GD
   git --work-tree $WD --git-dir $GD config core.sparsecheckout true
   echo path/of/file/which/I/want/to/create >>$GD/info/sparse-checkout
   
   git --work-tree $WD --git-dir $GD checkout -b some-branch 
remotes/origin/some-branch  # !!!
   
   ( cd $WD
 mkdir -p path/of/file/which/I/want/to
 echo huhuh >path/of/file/which/I/want/to/create
 git --work-tree $WD --git-dir $GD add path/of/file/which/I/want/to/create
 git --work-tree $WD --git-dir $GD commit
 git --work-tree $WD --git-dir $GD push
   )
   
   rm -rf $TMP


Unfortunately, the marked command errors out with

   "error: Sparse checkout leaves no entry on working directory"

and won't create/switch to the branch that is to be modified.

Why is this an error? Since there are no matching files, an empty worktree
is EXACTLY what I wanted. Why will the "git checkout -b" command error out?


Strangely enough, I have some repositories on this machine where the
.git/info/sparse-checkout file contains only non-existing files and git
happily executes this "git checkout -b XXX remotes/origin/XXX" command leaving
the working tree totally empty all the time.

Does someone understand this inconsistent behaviour?

Thanks,

-- 
Josef Wolf
j...@raven.inka.de


Re: bad error message - Not a git repository (or any of the parent directories): .git

2019-10-04 Thread Johannes Schindelin
Hi Alexander,

On Thu, 3 Oct 2019, Alexander Mills wrote:

> when running git commands outside of a git repo, we often see:
>
> fatal: Not a git repository (or any of the parent directories): .git
>
> such a lame message lol.

An equally ornery response might point out that reporting this as a bug
instead of providing a patch is probably just as "lame"... :-)

> can we get an absolute path on this message in future git versions, eg:
>
> Not a git repository (or any of the parent directories): /home/ubuntu/xyz/.git

Just clone https://github.com/git/git, then look for that error message:

-- snip --
$ git grep -A1 "or any of the parent" \*.[ch]
setup.c:die(_("not a git repository (or any of the 
parent directories): %s"),
setup.c-DEFAULT_GIT_DIR_ENVIRONMENT);
-- snap --

You can then wrap the argument in that second line in `absolute_path()`,
build (using `make -j$(nproc)` on Linux), and test (use
`/path/to/git/bin-wrappers/git` instead of regular `git` to test without
installing). Once everything works as you expect it, commit it.
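
In concrete commands, that edit/build/test cycle looks roughly like this (a
sketch; adjust the paths to your clone):

    $EDITOR setup.c
    make -j$(nproc)
    cd /some/directory/outside/any/repository
    /path/to/git/bin-wrappers/git status   # should now print the absolute path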

Please make sure that your commit message focuses on answering the
question "why?" more than on "how?", and that it wraps at <= 76 columns
per line. Also do make sure to add your sign off
(https://git-scm.com/docs/SubmittingPatches#sign-off).

Finally send the patch to the mailing list for review. You can use
GitGitGadget (https://gitgitgadget.github.io), submitGit
(https://submitgit.herokuapp.com) or send it manually
(https://github.com/git-for-windows/git/blob/master/CONTRIBUTING.md#submit-your-patch).

Ciao,
Johannes

> ty
>
> -alex
>
> --
> Alexander D. Mills
> New cell phone # (415)730-1805
> alexander.d.mi...@gmail.com
>
> www.linkedin.com/pub/alexander-mills/b/7a5/418/
>


bad error message - Not a git repository (or any of the parent directories): .git

2019-10-03 Thread Alexander Mills
when running git commands outside of a git repo, we often see:

fatal: Not a git repository (or any of the parent directories): .git

such a lame message lol.
can we get an absolute path on this message in future git versions, eg:

Not a git repository (or any of the parent directories): /home/ubuntu/xyz/.git

ty

-alex

-- 
Alexander D. Mills
New cell phone # (415)730-1805
alexander.d.mi...@gmail.com

www.linkedin.com/pub/alexander-mills/b/7a5/418/


[PATCH 1/1] gitk: Addressing error running on MacOS with large repos.

2019-10-03 Thread Arash Bannazadeh-Mahani via GitGitGadget
From: Arash Bannazadeh-Mahani 

The change stems from a problem on macOS where, if --all is passed
to gitk, it should show all the refs/commits graphically.
However, on really large git repos, in my instance a git repo with
over 52,000 commits, gitk will report an error,
"Error executing git log: couldn't execute "git": argument list too long".
macOS has a limit on the argument list size, which my large repo
exceeds. This works fine on Linux; I am not sure about Windows.

Looking at the gitk script, the decision to put all commit-ids on the command
line comes from the return value of the parseviewargs() function, which
returns the value of "allknown".  If it is '1', then --all is translated into
a string of all the commit-ids in the repo; otherwise --all is passed as-is
to the `git log` command line, which according to the git-log man page is the
same as listing all the commit-ids.

So, this change prevents the --all option from being expanded into a list
of all refs on the command line.
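
A rough way to see the limit being hit (the numbers vary by platform, and
gitk's expansion happens internally, so this is only an approximation):

    getconf ARG_MAX              # OS limit on the argument list, in bytes
    git rev-list --all | wc -c   # rough size of an expanded --all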

Signed-off-by: Arash Bannazadeh-Mahani 
---
 gitk-git/gitk | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/gitk-git/gitk b/gitk-git/gitk
index abe4805ade..96634d9d33 100755
--- a/gitk-git/gitk
+++ b/gitk-git/gitk
@@ -250,8 +250,13 @@ proc parseviewargs {n arglist} {
set nextisval 1
lappend glflags $arg
}
-   "--not" - "--all" {
+   "--not" {
lappend revargs $arg
+   }
+   "--all" {
+   # we recognize this argument;
+   # no expansion needed, use it with 'git log' as-is
+   set allknown 0
}
"--merge" {
set vmergeonly($n) 1
-- 
gitgitgadget


[PATCH 0/1] gitk: Addressing error running on MacOS with large repos.

2019-10-03 Thread Arash via GitGitGadget
gitk: there is no need to specify all refs explicitly, since `git log --all`
is the same as listing all the refs/commit ids. Also, macOS has a limit on
the size of the parameter list for a command line.

Arash Bannazadeh-Mahani (1):
  gitk: Addressing error running on MacOS with large repos.

 gitk-git/gitk | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)


base-commit: bc12974a897308fd3254cf0cc90319078fe45eea
Published-As: 
https://github.com/gitgitgadget/git/releases/tag/pr-375%2Faxbannaz%2Fuse.all-v1
Fetch-It-Via: git fetch https://github.com/gitgitgadget/git 
pr-375/axbannaz/use.all-v1
Pull-Request: https://github.com/gitgitgadget/git/pull/375
-- 
gitgitgadget


[PATCH v4 03/12] dir: fix off-by-one error in match_pathspec_item

2019-09-17 Thread Elijah Newren
For a pathspec like 'foo/bar' comparing against a path named "foo/",
namelen will be 4, and match[namelen] will be 'b'.  The correct location
of the directory separator is namelen-1.

However, other callers of match_pathspec_item() such as builtin/grep.c's
submodule_path_match() will compare against a path named "foo" instead of
"foo/".  It might be better to change all the callers to be consistent,
as discussed at
   https://public-inbox.org/git/xmqq7e6cdnkr@gitster-ct.c.googlers.com/
and
   
https://public-inbox.org/git/cabpp-berwupcpq-9fvw1lnocqkrfsof4bpj3gjd9+en43ve...@mail.gmail.com/
but there are many cases to audit, so for now just make sure we handle
both cases with and without a trailing slash.

The reason the code worked despite this sometimes-off-by-one error was
that the subsequent code immediately checked whether the first matchlen
characters matched (which they do) and then bailed and returned
MATCHED_RECURSIVELY anyway since wildmatch doesn't have the ability to
check if "name" can be matched as a directory (or prefix) against the
pathspec.

Signed-off-by: Elijah Newren 
---
 dir.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/dir.c b/dir.c
index a9168bed96..bf1a74799e 100644
--- a/dir.c
+++ b/dir.c
@@ -356,8 +356,9 @@ static int match_pathspec_item(const struct index_state *istate,
/* Perform checks to see if "name" is a super set of the pathspec */
if (flags & DO_MATCH_SUBMODULE) {
/* name is a literal prefix of the pathspec */
+   int offset = name[namelen-1] == '/' ? 1 : 0;
if ((namelen < matchlen) &&
-   (match[namelen] == '/') &&
+   (match[namelen-offset] == '/') &&
!ps_strncmp(item, match, name, namelen))
return MATCHED_RECURSIVELY;
 
-- 
2.22.1.17.g6e632477f7.dirty



Re: [PATCH v2] git-submodule.txt: fix AsciiDoc formatting error

2019-09-16 Thread Junio C Hamano
Denton Liu  writes:

> Break `--default` and `--branch` into their own separate invocations to
> make it obvious that these options are mutually exclusive.

That sounds even clearer than what the original wanted to do.  Very
good idea.

> diff --git a/Documentation/git-submodule.txt b/Documentation/git-submodule.txt
> index 0ed5c24dc1..1f46380af2 100644
> --- a/Documentation/git-submodule.txt
> +++ b/Documentation/git-submodule.txt
> @@ -173,7 +173,8 @@ submodule with the `--init` option.
>  If `--recursive` is specified, this command will recurse into the
>  registered submodules, and update any nested submodules within.
>  --
> -set-branch ((-d|--default)|(-b|--branch <branch>)) [--] <path>::
> +set-branch (-b|--branch) <branch> [--] <path>::
> +set-branch (-d|--default) [--] <path>::
>   Sets the default remote tracking branch for the submodule. The
>   `--branch` option allows the remote branch to be specified. The
> 	`--default` option removes the submodule.<name>.branch configuration


[PATCH v2] git-submodule.txt: fix AsciiDoc formatting error

2019-09-16 Thread Denton Liu
In b57e8119e6 (submodule: teach set-branch subcommand, 2019-02-08), the
`set-branch` subcommand was added for submodules. When the documentation
was written, the syntax for an "index term" in AsciiDoc was
accidentally used. This caused the documentation to be rendered as

set-branch -d|--default)|(-b|--branch <branch> [--] <path>

instead of

set-branch ((-d|--default)|(-b|--branch <branch>)) [--] <path>

In addition to this, the original documentation was possibly confusing
as it made it seem as if the `-b` option didn't accept a `<branch>`
argument.

Break `--default` and `--branch` into their own separate invocations to
make it obvious that these options are mutually exclusive. Also, this
removes the surrounding parentheses so that the "index term" syntax is
not triggered.

Signed-off-by: Denton Liu 
---

Thanks for reviewing, Junio.

I thought about it over the weekend and I hope that this is a good
compromise.

I looked through other options in the documentation and I couldn't find
any other places where a short-form was documented in the body instead
of in the summary line so I didn't want to introduce that with this
commit. I feel like it would be unfamiliar to someone who's used to
reading other Git documentation.

Also, I opted to separate the options onto their own lines because
this makes it obvious that these two arguments are incompatible.
Hopefully, this assuages your concerns.
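
For anyone who wants to eyeball the rendered result before and after,
building the affected page from a git checkout should work (the target names
below are from my memory of Documentation/Makefile, so adjust as needed):

    make -C Documentation git-submodule.html
    make -C Documentation git-submodule.1    # the man page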

Range-diff against v1:
1:  796a25ee1e ! 1:  2ae16375ba git-submodule.txt: fix AsciiDoc formatting error
@@ Commit message
 
 instead of
 
-set-branch (-d|--default)|(-b|--branch <branch>) [--] <path>
+set-branch ((-d|--default)|(-b|--branch <branch>)) [--] <path>
 
-Remove surrounding parentheses so that the "index term" syntax is not
-triggered (and because it looks nicer without them anyway ;) ).
+In addition to this, the original documentation was possibly confusing
+as it made it seem as if the `-b` option didn't accept a `<branch>`
+argument.
+
+Break `--default` and `--branch` into their own separate invocations to
+make it obvious that these options are mutually exclusive. Also, this
+removes the surrounding parentheses so that the "index term" syntax is
+not triggered.
 
 Signed-off-by: Denton Liu 
 
@@ Documentation/git-submodule.txt: submodule with the `--init` option.
  registered submodules, and update any nested submodules within.
  --
 -set-branch ((-d|--default)|(-b|--branch <branch>)) [--] <path>::
-+set-branch (-d|--default)|(-b|--branch <branch>) [--] <path>::
++set-branch (-b|--branch) <branch> [--] <path>::
++set-branch (-d|--default) [--] <path>::
Sets the default remote tracking branch for the submodule. The
`--branch` option allows the remote branch to be specified. The
	`--default` option removes the submodule.<name>.branch configuration

 Documentation/git-submodule.txt | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Documentation/git-submodule.txt b/Documentation/git-submodule.txt
index 0ed5c24dc1..1f46380af2 100644
--- a/Documentation/git-submodule.txt
+++ b/Documentation/git-submodule.txt
@@ -173,7 +173,8 @@ submodule with the `--init` option.
 If `--recursive` is specified, this command will recurse into the
 registered submodules, and update any nested submodules within.
 --
-set-branch ((-d|--default)|(-b|--branch <branch>)) [--] <path>::
+set-branch (-b|--branch) <branch> [--] <path>::
+set-branch (-d|--default) [--] <path>::
Sets the default remote tracking branch for the submodule. The
`--branch` option allows the remote branch to be specified. The
 	`--default` option removes the submodule.<name>.branch configuration
-- 
2.23.0



[PATCH 3/3] list-objects-filter: give a more specific error sparse parsing error

2019-09-14 Thread Jeff King
From: Jon Simons 

The sparse:oid filter has two error modes: we might fail to resolve the
name to an OID, or we might fail to parse the contents of that OID. In
the latter case, let's give a less generic error message, and mention
the OID we did find.

While we're here, let's also mark both messages as translatable.

Signed-off-by: Jeff King 
---
 list-objects-filter.c| 5 +++--
 t/t5616-partial-clone.sh | 2 +-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/list-objects-filter.c b/list-objects-filter.c
index d2cdc03a73..50f0c6d07b 100644
--- a/list-objects-filter.c
+++ b/list-objects-filter.c
@@ -469,11 +469,12 @@ static void *filter_sparse_oid__init(
if (get_oid_with_context(the_repository,
 filter_options->sparse_oid_name,
 GET_OID_BLOB, &sparse_oid, &oc))
-   die("unable to access sparse blob in '%s'",
+   die(_("unable to access sparse blob in '%s'"),
filter_options->sparse_oid_name);
d->omits = omitted;
if (add_excludes_from_blob_to_list(&sparse_oid, NULL, 0, &d->el) < 0)
-   die("could not load filter specification");
+   die(_("unable to parse sparse filter data in %s"),
+   oid_to_hex(&sparse_oid));
 
ALLOC_GROW(d->array_frame, d->nr + 1, d->alloc);
d->array_frame[d->nr].defval = 0; /* default to include */
diff --git a/t/t5616-partial-clone.sh b/t/t5616-partial-clone.sh
index 84365b802a..b85d3f5e4c 100755
--- a/t/t5616-partial-clone.sh
+++ b/t/t5616-partial-clone.sh
@@ -274,7 +274,7 @@ test_expect_success 'partial clone with unresolvable sparse filter fails cleanly
test_must_fail git clone --no-local --bare \
 --filter=sparse:oid=master \
 sparse-src dst.git 2>err &&
-   test_i18ngrep "could not load filter specification" err
+   test_i18ngrep "unable to parse sparse filter data in" err
 '
 
 . "$TEST_DIRECTORY"/lib-httpd.sh
-- 
2.23.0.667.gcccf1fbb03


Re: [PATCH] git-submodule.txt: fix AsciiDoc formatting error

2019-09-13 Thread Junio C Hamano
Denton Liu  writes:

>> > -set-branch ((-d|--default)|(-b|--branch <branch>)) [--] <path>::
>> 
>> I say "almost", as it gives a wrong impression that you can give
>> "-b" without "" X-<.
>> 
>> Now what does the updated text say to us?
>> 
>> > +set-branch (-d|--default)|(-b|--branch <branch>) [--] <path>::
>> 
>> I think the attempt to cram the short-form is unnecessarily
>> cluttering and making the result incorrect.
>>  ...
>> 
> Hmm, I don't really like this since with every other subcommand, the
> short-forms are in the command summary so it's obvious to the reader
> in a quick glance which options are available.

I actually do not think it adds that much value.  Once a user learns
what --branch does and is told -b is its short form, it is much less
important that -b and --branch are both available than --default and
--branch are possibilities, and you cannot use both at the same time.

If anything, perhaps other subcommands' description may benefit if
we unclutter by reducing the prominence we give to their short form.

> In the context line above, we see `[(-n|--summary-limit) <n>]` as a
> possible way of notating a short and long option with argument. What do
> you think about the following potential output?
>
>   set-branch (-d|--default)|((-b|--branch) <branch>) [--] <path>::
>
> Of course, we reintroduce the double paren problem but I can dig into
> asciidoc syntax and figure out how to escape it properly.

That's much less important than the fact that you are losing "-b and
-d cannot be used together", which is what the usage string gives us
(and which is what I tried to express with my rewrite).


Re: [PATCH] git-submodule.txt: fix AsciiDoc formatting error

2019-09-13 Thread Denton Liu
On Fri, Sep 13, 2019 at 11:03:39AM -0700, Junio C Hamano wrote:
> Denton Liu  writes:
> 
> > Remove surrounding parentheses so that the "index term" syntax is not
> > triggered (and because it looks nicer without them anyway ;) ).
> 
> "Correct" always trumps "nicer", though ;-)
> 
> The $USAGE string in the script describes the available options to
> this subcommand like so:
> 
> or: $dashless [--quiet] set-branch (--default|--branch <branch>) [--] <path>
> 
> 
> which is, "you can give either --default or --branch , but
> not both".  What the original, if there were no special meaning in
> double-paren, meant to say seems to (almost) match that.
> 
> > -set-branch ((-d|--default)|(-b|--branch <branch>)) [--] <path>::
> 
> I say "almost", as it gives a wrong impression that you can give
> "-b" without "" X-<.
> 
> Now what does the updated text say to us?
> 
> > +set-branch (-d|--default)|(-b|--branch <branch>) [--] <path>::
> 
> I think the attempt to cram the short-form is unnecessarily
> cluttering and making the result incorrect.  How about doing
> something like this instead?
> 
>  Documentation/git-submodule.txt | 8 +---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/Documentation/git-submodule.txt b/Documentation/git-submodule.txt
> index 0ed5c24dc1..816baa7dd0 100644
> --- a/Documentation/git-submodule.txt
> +++ b/Documentation/git-submodule.txt
> @@ -173,10 +173,12 @@ submodule with the `--init` option.
>  If `--recursive` is specified, this command will recurse into the
>  registered submodules, and update any nested submodules within.
>  --
> -set-branch ((-d|--default)|(-b|--branch <branch>)) [--] <path>::
> +set-branch (--default|--branch <branch>) [--] <path>::
>   Sets the default remote tracking branch for the submodule. The
> - `--branch` option allows the remote branch to be specified. The
> -	`--default` option removes the submodule.<name>.branch configuration
> +	`--branch` option (or its short-form, `-b <branch>`)
> + allows the remote branch to be specified. The
> + `--default` option (or its short-form, `-d`)
> +	removes the submodule.<name>.branch configuration
>   key, which causes the tracking branch to default to 'master'.
>  
>  summary [--cached|--files] [(-n|--summary-limit) <n>] [commit] [--] [<path>...]::

Hmm, I don't really like this since with every other subcommand, the
short-forms are in the command summary so it's obvious to the reader
in a quick glance which options are available. With this change, a
reader would have to not only read the summary line but also scan the
blurb below.

In the context line above, we see `[(-n|--summary-limit) <n>]` as a
possible way of notating a short and long option with argument. What do
you think about the following potential output?

	set-branch (-d|--default)|((-b|--branch) <branch>) [--] <path>::

Of course, we reintroduce the double paren problem but I can dig into
asciidoc syntax and figure out how to escape it properly.


Re: [PATCH v3 03/12] dir: fix off-by-one error in match_pathspec_item

2019-09-13 Thread Junio C Hamano
Elijah Newren  writes:

> For a pathspec like 'foo/bar' comparing against a path named "foo/",
> namelen will be 4, and match[namelen] will be 'b'.  The correct location
> of the directory separator is namelen-1.

And the reason why name[namelen-1] may not be slash, in which case
your new code makes offset 0, is because we need to handle what
case?  When path is "foo" (not "foo/")?  Just makes me wonder why
this callee allows the caller(s) to be inconsistent, sometimes
including the trailing slash in the <name, namelen> tuple, sometimes
not.

> The reason the code worked anyway was that the following code immediately
> checked whether the first matchlen characters matched (which they do) and
> then bailed and return MATCHED_RECURSIVELY anyway since wildmatch doesn't
> have the ability to check if "name" can be matched as a directory (or
> prefix) against the pathspec.

Nicely spotted and explained.

>
> Signed-off-by: Elijah Newren 
> ---
>  dir.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/dir.c b/dir.c
> index a9168bed96..bf1a74799e 100644
> --- a/dir.c
> +++ b/dir.c
> @@ -356,8 +356,9 @@ static int match_pathspec_item(const struct index_state *istate,
>   /* Perform checks to see if "name" is a super set of the pathspec */
>   if (flags & DO_MATCH_SUBMODULE) {
>   /* name is a literal prefix of the pathspec */
> + int offset = name[namelen-1] == '/' ? 1 : 0;
>   if ((namelen < matchlen) &&
> - (match[namelen] == '/') &&
> + (match[namelen-offset] == '/') &&
>   !ps_strncmp(item, match, name, namelen))
>   return MATCHED_RECURSIVELY;


Re: [PATCH] git-submodule.txt: fix AsciiDoc formatting error

2019-09-13 Thread Junio C Hamano
Denton Liu  writes:

> Remove surrounding parentheses so that the "index term" syntax is not
> triggered (and because it looks nicer without them anyway ;) ).

"Correct" always trumps "nicer", though ;-)

The $USAGE string in the script describes the available options to
this subcommand like so:

or: $dashless [--quiet] set-branch (--default|--branch <branch>) [--] <path>

which is, "you can give either --default or --branch , but
not both".  What the original, if there were no special meaning in
double-paren, meant to say seems to (almost) match that.

> -set-branch ((-d|--default)|(-b|--branch <branch>)) [--] <path>::

I say "almost", as it gives a wrong impression that you can give
"-b" without "" X-<.

Now what does the updated text say to us?

> +set-branch (-d|--default)|(-b|--branch <branch>) [--] <path>::

I think the attempt to cram the short-form is unnecessarily
cluttering and making the result incorrect.  How about doing
something like this instead?

 Documentation/git-submodule.txt | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/Documentation/git-submodule.txt b/Documentation/git-submodule.txt
index 0ed5c24dc1..816baa7dd0 100644
--- a/Documentation/git-submodule.txt
+++ b/Documentation/git-submodule.txt
@@ -173,10 +173,12 @@ submodule with the `--init` option.
 If `--recursive` is specified, this command will recurse into the
 registered submodules, and update any nested submodules within.
 --
-set-branch ((-d|--default)|(-b|--branch <branch>)) [--] <path>::
+set-branch (--default|--branch <branch>) [--] <path>::
Sets the default remote tracking branch for the submodule. The
-   `--branch` option allows the remote branch to be specified. The
-	`--default` option removes the submodule.<name>.branch configuration
+	`--branch` option (or its short-form, `-b <branch>`)
+   allows the remote branch to be specified. The
+   `--default` option (or its short-form, `-d`)
+	removes the submodule.<name>.branch configuration
key, which causes the tracking branch to default to 'master'.
 
 summary [--cached|--files] [(-n|--summary-limit) <n>] [commit] [--] [<path>...]::


[PATCH v3 03/12] dir: fix off-by-one error in match_pathspec_item

2019-09-12 Thread Elijah Newren
For a pathspec like 'foo/bar' comparing against a path named "foo/",
namelen will be 4, and match[namelen] will be 'b'.  The correct location
of the directory separator is namelen-1.

The reason the code worked anyway was that the following code immediately
checked whether the first matchlen characters matched (which they do) and
then bailed and returned MATCHED_RECURSIVELY anyway since wildmatch doesn't
have the ability to check if "name" can be matched as a directory (or
prefix) against the pathspec.

Signed-off-by: Elijah Newren 
---
 dir.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/dir.c b/dir.c
index a9168bed96..bf1a74799e 100644
--- a/dir.c
+++ b/dir.c
@@ -356,8 +356,9 @@ static int match_pathspec_item(const struct index_state *istate,
/* Perform checks to see if "name" is a super set of the pathspec */
if (flags & DO_MATCH_SUBMODULE) {
/* name is a literal prefix of the pathspec */
+   int offset = name[namelen-1] == '/' ? 1 : 0;
if ((namelen < matchlen) &&
-   (match[namelen] == '/') &&
+   (match[namelen-offset] == '/') &&
!ps_strncmp(item, match, name, namelen))
return MATCHED_RECURSIVELY;
 
-- 
2.23.0.173.gad11b3a635.dirty



Re: error: could not read '.git/rebase-apply/head-name': No such file or directory

2019-09-12 Thread Johannes Schindelin
Hi Eugen,

On Thu, 12 Sep 2019, Eugen Konkov wrote:

> $ git --version
> git version 2.20.1

First thing to try is whether Git v2.23.0 still exposes that bug.

Ciao,
Johannes


Re: [PATCH] ci: install P4 from package to fix build error

2019-09-12 Thread SZEDER Gábor
On Wed, Sep 11, 2019 at 05:12:31PM +0200, Johannes Schindelin wrote:
> On Tue, 10 Sep 2019, SZEDER Gábor wrote:
> 
> > On Tue, Sep 10, 2019 at 12:51:01AM +0200, Johannes Schindelin wrote:
> > > On Fri, 6 Sep 2019, SZEDER Gábor wrote:
> > >
> > > > On Fri, Sep 06, 2019 at 12:27:11PM +0200, SZEDER Gábor wrote:
> >
> > > > > Let's install P4 from the package repository, because this
> > > > > approach seems to be simpler and more future proof.
> > > > >
> > > > > Note that we used to install an old P4 version (2016.2) in the
> > > > > Linux build jobs, but with this change we'll install the most
> > > > > recent version available in the Perforce package repository
> > > > > (currently 2019.1).
> > > >
> > > > So I'm not quite sure whether we really want this patch.  It
> > > > depends on how important it is to test 'git-p4' with an old P4
> > > > version, but I don't really have an opinion on that.
> > >
> > > I'd rather have that patch. It seems to be a much better idea to use
> > > the package management system than to rely on one host, especially
> > > when said host already displayed hiccups.
> >
> > Well, I'm not so sure.  As far as I remember this was the first time
> > that this Perforce filehost was inaccessible and a simple "Restart
> > job" could not rectify the situation, because it was inaccessible for
> > about a day or more.
> 
> Right.
> 
> > OTOH, transient errors or timeouts from 'apt-get update' or 'install'
> > from the official Ubuntu package repositories are not uncommon (at
> > least on Travis CI), although in those cases it's usually enough to
> > just restart the errored job.
> 
> My impression precisely. I trust the Ubuntu package servers to have
> transient, _short-lived_ problems ;-)

Heh, that's true, but note that we won't install P4 from the Ubuntu
package servers, but from the Perforce package repository, so I doubt
we can expect any improvements in this regard.

OTOH, instead of issuing only two HTTP requests to download those two
binaries we'll issue a total of 9 requests (1 pubkey, 2 package lists,
6 packages) that could go wrong.

I'm undecided, and at the very least the proposed commit message
should be updated.



error: could not read '.git/rebase-apply/head-name': No such file or directory

2019-09-12 Thread Eugen Konkov
Hello Git,

The following commands cause the error:

$ git pull
fatal: It seems that there is already a rebase-apply directory, and
I wonder if you are in the middle of another rebase.  If that is the
case, please try
git rebase (--continue | --abort | --skip)
If that is not the case, please
rm -fr ".git/rebase-apply"
and run me again.  I am stopping in case you still have something
valuable there.

$ git rebase --abort 
error: could not read '.git/rebase-apply/head-name': No such file or directory

$ git --version
git version 2.20.1

$ rm -fr ".git/rebase-apply"

$ git rebase --abort 
error: could not read '.git/rebase-apply/head-name': No such file or directory

$ git pull
fatal: It seems that there is already a rebase-apply directory, and
I wonder if you are in the middle of another rebase.  If that is the
case, please try
git rebase (--continue | --abort | --skip)
If that is not the case, please
rm -fr ".git/rebase-apply"
and run me again.  I am stopping in case you still have something
valuable there.


I resolved this issue by
git checkout -f some_br

But this issue should not occur.


-- 
Best regards,
Eugen Konkov



[PATCH v2 0/1] Fix perl error "unescaped left brace in regex" for paranoid update hook

2019-09-11 Thread Dominic Winkler via GitGitGadget
A literal "{" should now be escaped in a pattern starting from perl versions
>= v5.26. In perl v5.22, using a literal { in a regular expression was
deprecated, and will emit a warning if it isn't escaped: \{. In v5.26, this
won't just warn, it'll cause a syntax error.

(see https://metacpan.org/pod/release/RJBS/perl-5.22.0/pod/perldelta.pod)

Signed-off-by: Dominic Winkler d.wink...@flexarts.at

Dominic Winkler (1):
  contrib/hooks: escape left brace in regex in the paranoid update hook

 contrib/hooks/update-paranoid | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)


base-commit: 5fa0f5238b0cd46cfe7f6fa76c3f526ea98148d9
Published-As: 
https://github.com/gitgitgadget/git/releases/tag/pr-335%2Fflexarts%2Fmaint-update-paranoid-perlv5.26-v2
Fetch-It-Via: git fetch https://github.com/gitgitgadget/git 
pr-335/flexarts/maint-update-paranoid-perlv5.26-v2
Pull-Request: https://github.com/gitgitgadget/git/pull/335

Range-diff vs v1:

 1:  2743caa22e ! 1:  0d762cfb50 Fix perl error "unescaped left brace in regex" 
for paranoid update hook
 @@ -1,6 +1,6 @@
  Author: Dominic Winkler 
  
 -Fix perl error "unescaped left brace in regex" for paranoid update 
hook
 +contrib/hooks: escape left brace in regex in the paranoid update hook
  
  A literal "{" should now be escaped in a pattern starting from perl
  versions >= v5.26. In perl v5.22, using a literal { in a regular

-- 
gitgitgadget


Re: [PATCH 1/1] Fix perl error "unescaped left brace in regex" for paranoid update hook

2019-09-11 Thread Johannes Schindelin
Hi Dominic,

all looks good, with one exception: the Subject should start with
`<area>:`, i.e. in this instance something like this would be better:

contrib/hooks: escape left brace in regex in the paranoid update hook

Ciao,
Johannes

On Mon, 9 Sep 2019, Dominic Winkler via GitGitGadget wrote:

> From: Dominic Winkler 
>
> A literal "{" should now be escaped in a pattern starting from perl
> versions >= v5.26. In perl v5.22, using a literal { in a regular
> expression was deprecated, and will emit a warning if it isn't escaped: \{.
> In v5.26, this won't just warn, it'll cause a syntax error.
>
> (see https://metacpan.org/pod/release/RJBS/perl-5.22.0/pod/perldelta.pod)
>
> Signed-off-by: Dominic Winkler 
> ---
>  contrib/hooks/update-paranoid | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/contrib/hooks/update-paranoid b/contrib/hooks/update-paranoid
> index d18b317b2f..fc0a242a4e 100755
> --- a/contrib/hooks/update-paranoid
> +++ b/contrib/hooks/update-paranoid
> @@ -302,13 +302,13 @@ $op = 'U' if ($op eq 'R'
>
>  RULE:
>   foreach (@$rules) {
> - while (/\${user\.([a-z][a-zA-Z0-9]+)}/) {
> + while (/\$\{user\.([a-z][a-zA-Z0-9]+)}/) {
>   my $k = lc $1;
>   my $v = $data{"user.$k"};
>   next RULE unless defined $v;
>   next RULE if @$v != 1;
>   next RULE unless defined $v->[0];
> - s/\${user\.$k}/$v->[0]/g;
> + s/\$\{user\.$k}/$v->[0]/g;
>   }
>
>  	if (/^([AMD ]+)\s+of\s+([^\s]+)\s+for\s+([^\s]+)\s+diff\s+([^\s]+)$/) {
> --
> gitgitgadget
>


Re: [PATCH] ci: install P4 from package to fix build error

2019-09-11 Thread Johannes Schindelin
Hi,

On Tue, 10 Sep 2019, SZEDER Gábor wrote:

> On Tue, Sep 10, 2019 at 12:51:01AM +0200, Johannes Schindelin wrote:
> > On Fri, 6 Sep 2019, SZEDER Gábor wrote:
> >
> > > On Fri, Sep 06, 2019 at 12:27:11PM +0200, SZEDER Gábor wrote:
>
> > > > Let's install P4 from the package repository, because this
> > > > approach seems to be simpler and more future proof.
> > > >
> > > > Note that we used to install an old P4 version (2016.2) in the
> > > > Linux build jobs, but with this change we'll install the most
> > > > recent version available in the Perforce package repository
> > > > (currently 2019.1).
> > >
> > > So I'm not quite sure whether we really want this patch.  It
> > > depends on how important it is to test 'git-p4' with an old P4
> > > version, but I don't really have an opinion on that.
> >
> > I'd rather have that patch. It seems to be a much better idea to use
> > the package management system than to rely on one host, especially
> > when said host already displayed hiccups.
>
> Well, I'm not so sure.  As far as I remember this was the first time
> that this Perforce filehost was inaccessible and a simple "Restart
> job" could not rectify the situation, because it was inaccessible for
> about a day or more.

Right.

> OTOH, transient errors or timeouts from 'apt-get update' or 'install'
> from the official Ubuntu package repositories are not uncommon (at
> least on Travis CI), although in those cases it's usually enough to
> just restart the errored job.

My impression precisely. I trust the Ubuntu package servers to have
transient, _short-lived_ problems ;-)

Ciao,
Dscho


[PATCH] git-submodule.txt: fix AsciiDoc formatting error

2019-09-10 Thread Denton Liu
In b57e8119e6 (submodule: teach set-branch subcommand, 2019-02-08), the
`set-branch` subcommand was added for submodules. When the documentation
was written, the syntax for an "index term" in AsciiDoc was
accidentally used. This caused the documentation to be rendered as

set-branch -d|--default)|(-b|--branch <branch> [--] <path>

instead of

set-branch (-d|--default)|(-b|--branch <branch>) [--] <path>

Remove surrounding parentheses so that the "index term" syntax is not
triggered (and because it looks nicer without them anyway ;) ).

Signed-off-by: Denton Liu 
---
 Documentation/git-submodule.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/git-submodule.txt b/Documentation/git-submodule.txt
index 0ed5c24dc1..e349442f4c 100644
--- a/Documentation/git-submodule.txt
+++ b/Documentation/git-submodule.txt
@@ -173,7 +173,7 @@ submodule with the `--init` option.
 If `--recursive` is specified, this command will recurse into the
 registered submodules, and update any nested submodules within.
 --
-set-branch ((-d|--default)|(-b|--branch )) [--] ::
+set-branch (-d|--default)|(-b|--branch ) [--] ::
Sets the default remote tracking branch for the submodule. The
`--branch` option allows the remote branch to be specified. The
`--default` option removes the submodule..branch configuration
-- 
2.23.0.163.g796a25ee1e



Re: [PATCH] ci: install P4 from package to fix build error

2019-09-10 Thread SZEDER Gábor
On Tue, Sep 10, 2019 at 12:51:01AM +0200, Johannes Schindelin wrote:
> On Fri, 6 Sep 2019, SZEDER Gábor wrote:
> 
> > On Fri, Sep 06, 2019 at 12:27:11PM +0200, SZEDER Gábor wrote:
> > > To test 'git-p4' in the Linux Clang and GCC build jobs we used to
> > > install the 'p4' and 'p4d' binaries by directly downloading those
> > > binaries from a Perforce filehost.  This has worked just fine ever
> > > since we started using Travis CI [1], but during the last day or so
> > > that filehost appeared to be gone: while its hostname still resolves,
> > > the host doesn't seem to reply to any download request, it doesn't
> > > even refuse the connection, and eventually our build jobs time out
> > > [2].
> > >
> > > Now, this might be just a temporary glitch, but I'm afraid that it
> > > isn't.
> >
> > Well, now would you believe it, while I was testing this patch (I even
> > made a gitgitgadget PR to run it on Azure Pipelines! :) and touching
> > up its log message the good old Perforce filehost sprang back to life,
> > and the CI build jobs now succeed again even without this patch.
> 
> Sorry for being so slow with granting you access to GitGitGadget. FWIW
> _anybody_ who already was granted access can issue `/allow` commands, it
> is not just me.

No problem; I was only interested in the results of the Azure
Pipelines build, and that seemed to go well even without access.

> > > Let's install P4 from the package repository, because this approach
> > > seems to be simpler and more future proof.
> > >
> > > Note that we used to install an old P4 version (2016.2) in the Linux
> > > build jobs, but with this change we'll install the most recent version
> > > available in the Perforce package repository (currently 2019.1).
> >
> > So I'm not quite sure whether we really want this patch.  It depends
> > on how important it is to test 'git-p4' with an old P4 version, but I
> > don't really have an opinion on that.
> 
> I'd rather have that patch. It seems to be a much better idea to use the
> package management system than to rely on one host, especially when said
> host already displayed hiccups.

Well, I'm not so sure.  As far as I remember this was the first time
that this Perforce filehost was inaccessible and a simple "Restart
job" could not rectify the situation, because it was inaccessible for
about a day or more.  OTOH, transient errors or timeouts from 'apt-get
update' or 'install' from the official Ubuntu package repositories are
not uncommon (at least on Travis CI), although in those cases it's
usually enough to just restart the errored job.



Re: [PATCH 1/1] upload-pack: fix race condition in error messages

2019-09-10 Thread SZEDER Gábor
On Wed, Sep 04, 2019 at 01:04:42AM -0400, Jeff King wrote:
> On Fri, Aug 30, 2019 at 02:10:05PM +0200, SZEDER Gábor wrote:
> 
> > On Fri, Aug 30, 2019 at 12:06:30AM +0200, SZEDER Gábor wrote:
> > > On Thu, Aug 29, 2019 at 11:58:18PM +0200, SZEDER Gábor wrote:
> > > > On Thu, Aug 29, 2019 at 10:38:05AM -0400, Jeff King wrote:
> > > > > So any fixes there have to happen on the client side. I am still
> > > > > confused about why the client is writing in this case, per the 
> > > > > argument
> > > > > in 014ade7484 (upload-pack: send ERR packet for non-tip objects,
> > > > > 2019-04-13). It would be nice to use GIT_TRACE_PACKET to see what it's
> > > > > trying to write, but I still haven't been able to reproduce the issue.
> > > > 
> > > > It's the "done" line:
> > > > 
> > > >   + cat trace-packet
> > [...]
> > > >   packet:  upload-pack> 
> > > >   packet:fetch> done
> > > > 
> > > > In the average successful run that "fetch> done" pkt-line is
> > > > immediately after the "fetch> ".
> 
> Thanks for all of your persistent digging on this.

Yeah, I can be easily distracted by an interesting looking bug... was
told it's a character flaw ;)

> I had forgotten about
> the "done" packet, but it explains all of the symptoms we've seen.
> 
> > So instead of immediately die()ing after write_in_full() returned an
> > error, fetch should first try to read all incoming packets in the hope
> > that the remote did send an ERR packet before it died, and then die
> > with the error in that packet, or fall back to the current generic
> > error message if there is no ERR packet (e.g. remote segfaulted or
> > something similarly horrible).  This fixes the test failure with that
> > strategically-placed sleep() in 'fetch-pack.c'.
> > 
> >   https://travis-ci.org/szeder/git/jobs/578778749#L2689
> > 
> > Alas, passing a 'reader' to a function called send_request() doesn't
> > look quite right, does it...  And I'm not sure about the stateless
> > communication, it still uses write_or_die().
> 
> And thank you for putting this patch together. I had taken a stab at it
> a while ago, but got discouraged by figuring out at which layer to add
> the "reader" info (I had envisioned it much lower in packet_write(), but
> it is clear from your patch that fetch-pack does most of its own
> writing).
> 
> I agree passing around the reader is a bit weird;

I considered renaming send_request() so that 'reader' won't look that
out of place among its parameters, but all my ideas were ridiculous,
e.g. send_request_and_process_ERR_pkt_on_error()...

> I wonder if we should
> be representing the full-duplex connection more clearly as a single
> struct. But I suspect that creates other headaches, and what you have
> here doesn't look _too_ bad. As you note, it probably doesn't cover all
> code paths, but it at least fixes some of them, and gives us a template
> for addressing the others.
> 
> > } else {
> > -   if (write_in_full(fd, buf->buf, buf->len) < 0)
> > +   if (write_in_full(fd, buf->buf, buf->len) < 0) {
> > +   int save_errno = errno;
> > +   /*
> > +* Read everything the remote has sent to us.
> > +* If there is an ERR packet, then the loop die()s
> > +* with the received error message.
> > +* If we reach EOF without seeing an ERR, then die()
> > +* with a generic error message, most likely "Broken
> > +* pipe".
> > +*/
> > +   while (packet_reader_read(reader) != PACKET_READ_EOF);
> > +   errno = save_errno;
> > die_errno(_("unable to write to remote"));
> > +   }
> 
> One unfortunate thing here is that we could block indefinitely in
> packet_reader_read(). That shouldn't happen, I don't think, but since
> this is an error case where we've been cut off, anything's possible.

Yeah, when we use different file descriptors for reading and writing,
then any error on the writing fd doesn't necessarily mean that there
is an error on the reading fd as well.  I mean, we could get an
EBADF or EFAULT as well, but those would rather indicate a bug in Git
than an error with the connection itself.

I wondere

[PATCH 0/1] Fix perl error "unescaped left brace in regex" for paranoid update hook

2019-09-09 Thread Dominic Winkler via GitGitGadget
A literal "{" should now be escaped in a pattern starting from perl versions
>= v5.26. In perl v5.22, using a literal { in a regular expression was
deprecated, and will emit a warning if it isn't escaped: \{. In v5.26, this
won't just warn, it'll cause a syntax error.

(see https://metacpan.org/pod/release/RJBS/perl-5.22.0/pod/perldelta.pod)

Signed-off-by: Dominic Winkler d.wink...@flexarts.at

Dominic Winkler (1):
  Fix perl error "unescaped left brace in regex" for paranoid update
hook

 contrib/hooks/update-paranoid | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)


base-commit: 5fa0f5238b0cd46cfe7f6fa76c3f526ea98148d9
Published-As: 
https://github.com/gitgitgadget/git/releases/tag/pr-335%2Fflexarts%2Fmaint-update-paranoid-perlv5.26-v1
Fetch-It-Via: git fetch https://github.com/gitgitgadget/git 
pr-335/flexarts/maint-update-paranoid-perlv5.26-v1
Pull-Request: https://github.com/gitgitgadget/git/pull/335
-- 
gitgitgadget


[PATCH 1/1] Fix perl error "unescaped left brace in regex" for paranoid update hook

2019-09-09 Thread Dominic Winkler via GitGitGadget
From: Dominic Winkler 

A literal "{" should now be escaped in a pattern starting from perl
versions >= v5.26. In perl v5.22, using a literal { in a regular
expression was deprecated, and will emit a warning if it isn't escaped: \{.
In v5.26, this won't just warn, it'll cause a syntax error.

(see https://metacpan.org/pod/release/RJBS/perl-5.22.0/pod/perldelta.pod)

Signed-off-by: Dominic Winkler 
---
 contrib/hooks/update-paranoid | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/contrib/hooks/update-paranoid b/contrib/hooks/update-paranoid
index d18b317b2f..fc0a242a4e 100755
--- a/contrib/hooks/update-paranoid
+++ b/contrib/hooks/update-paranoid
@@ -302,13 +302,13 @@ $op = 'U' if ($op eq 'R'
 
 RULE:
foreach (@$rules) {
-   while (/\${user\.([a-z][a-zA-Z0-9]+)}/) {
+   while (/\$\{user\.([a-z][a-zA-Z0-9]+)}/) {
my $k = lc $1;
my $v = $data{"user.$k"};
next RULE unless defined $v;
next RULE if @$v != 1;
next RULE unless defined $v->[0];
-   s/\${user\.$k}/$v->[0]/g;
+   s/\$\{user\.$k}/$v->[0]/g;
}
 
if (/^([AMD 
]+)\s+of\s+([^\s]+)\s+for\s+([^\s]+)\s+diff\s+([^\s]+)$/) {
-- 
gitgitgadget


Re: [PATCH] ci: install P4 from package to fix build error

2019-09-09 Thread Johannes Schindelin
Hi,


On Fri, 6 Sep 2019, SZEDER Gábor wrote:

> On Fri, Sep 06, 2019 at 12:27:11PM +0200, SZEDER Gábor wrote:
> > To test 'git-p4' in the Linux Clang and GCC build jobs we used to
> > install the 'p4' and 'p4d' binaries by directly downloading those
> > binaries from a Perforce filehost.  This has worked just fine ever
> > since we started using Travis CI [1], but during the last day or so
> > that filehost appeared to be gone: while its hostname still resolves,
> > the host doesn't seem to reply to any download request, it doesn't
> > even refuse the connection, and eventually our build jobs time out
> > [2].
> >
> > Now, this might be just a temporary glitch, but I'm afraid that it
> > isn't.
>
> Well, now would you believe it, while I was testing this patch (I even
> made a gitgitgadget PR to run it on Azure Pipelines! :) and touching
> up its log message the good old Perforce filehost sprang back to life,
> and the CI build jobs now succeed again even without this patch.

Sorry for being so slow with granting you access to GitGitGadget. FWIW
_anybody_ who already was granted access can issue `/allow` commands, it
is not just me.

> > Let's install P4 from the package repository, because this approach
> > seems to be simpler and more future proof.
> >
> > Note that we used to install an old P4 version (2016.2) in the Linux
> > build jobs, but with this change we'll install the most recent version
> > available in the Perforce package repository (currently 2019.1).
>
> So I'm not quite sure whether we really want this patch.  It depends
> on how important it is to test 'git-p4' with an old P4 version, but I
> don't really have an opinion on that.

I'd rather have that patch. It seems to be a much better idea to use the
package management system than to rely on one host, especially when said
host already displayed hiccups.

Ciao,
Dscho


Re: [PATCH] ci: install P4 from package to fix build error

2019-09-06 Thread SZEDER Gábor
On Fri, Sep 06, 2019 at 12:27:11PM +0200, SZEDER Gábor wrote:
> To test 'git-p4' in the Linux Clang and GCC build jobs we used to
> install the 'p4' and 'p4d' binaries by directly downloading those
> binaries from a Perforce filehost.  This has worked just fine ever
> since we started using Travis CI [1], but during the last day or so
> that filehost appeared to be gone: while its hostname still resolves,
> the host doesn't seem to reply to any download request, it doesn't
> even refuse the connection, and eventually our build jobs time out
> [2].
> 
> Now, this might be just a temporary glitch, but I'm afraid that it
> isn't.

Well, now would you believe it, while I was testing this patch (I even
made a gitgitgadget PR to run it on Azure Pipelines! :) and touching
up its log message the good old Perforce filehost sprang back to life,
and the CI build jobs now succeed again even without this patch.

> Let's install P4 from the package repository, because this approach
> seems to be simpler and more future proof.
> 
> Note that we used to install an old P4 version (2016.2) in the Linux
> build jobs, but with this change we'll install the most recent version
> available in the Perforce package repository (currently 2019.1).

So I'm not quite sure whether we really want this patch.  It depends
on how important it is to test 'git-p4' with an old P4 version, but I
don't really have an opinion on that.



[PATCH] ci: install P4 from package to fix build error

2019-09-06 Thread SZEDER Gábor
To test 'git-p4' in the Linux Clang and GCC build jobs we used to
install the 'p4' and 'p4d' binaries by directly downloading those
binaries from a Perforce filehost.  This has worked just fine ever
since we started using Travis CI [1], but during the last day or so
that filehost appeared to be gone: while its hostname still resolves,
the host doesn't seem to reply to any download request, it doesn't
even refuse the connection, and eventually our build jobs time out
[2].

Now, this might be just a temporary glitch, but I'm afraid that it
isn't.  The "Helix Core Server Administrator Guide" [3] describes two
ways to install these binaries on Linux, and neither of them mentions the
filehost that we've been downloading from in the past:

  - non-package installation: open the website's download page in your
web browser, select OS and platform, click on the download link,
and eventually you get a .tar.gz archive containing, among other
things, the necessary 'p4' and 'p4d' binaries.

Although we could use the URL of this archive to download it in
our CI scripts with 'wget', nobody said that that URL remains
stable, and we would still need to extract the archive and copy
the binaries to $PATH.

  - package installation for various distros, including Ubuntu 16.04
(i.e. the Ubuntu version used both in our Travis CI and Azure
Pipelines builds): add a package repository and its pubkey,
'apt-get update && apt-get install', and ready to go.

Let's install P4 from the package repository, because this approach
seems to be simpler and more future proof.

Note that we used to install an old P4 version (2016.2) in the Linux
build jobs, but with this change we'll install the most recent version
available in the Perforce package repository (currently 2019.1).

[1] 522354d70f (Add Travis CI support, 2015-11-27).
[2] https://travis-ci.org/git/git/jobs/581429927#L422
[3] https://www.perforce.com/manuals/p4sag/Content/P4SAG/chapter.install.html

Signed-off-by: SZEDER Gábor 
---
 ci/install-dependencies.sh | 14 +-
 ci/lib.sh  |  8 +++-
 2 files changed, 8 insertions(+), 14 deletions(-)

diff --git a/ci/install-dependencies.sh b/ci/install-dependencies.sh
index 8cc72503cb..0df48365dc 100755
--- a/ci/install-dependencies.sh
+++ b/ci/install-dependencies.sh
@@ -5,27 +5,23 @@
 
 . ${0%/*}/lib.sh
 
-P4WHENCE=http://filehost.perforce.com/perforce/r$LINUX_P4_VERSION
 
LFSWHENCE=https://github.com/github/git-lfs/releases/download/v$LINUX_GIT_LFS_VERSION
 
 case "$jobname" in
 linux-clang|linux-gcc)
+   wget -qO - https://package.perforce.com/perforce.pubkey | sudo apt-key 
add -
+   echo "deb http://package.perforce.com/apt/ubuntu xenial release" 
>perforce.list
+   sudo mv perforce.list /etc/apt/sources.list.d/
sudo apt-add-repository -y "ppa:ubuntu-toolchain-r/test"
sudo apt-get -q update
-   sudo apt-get -q -y install language-pack-is libsvn-perl apache2
+   sudo apt-get -q -y install language-pack-is libsvn-perl apache2 \
+   helix-p4d
case "$jobname" in
linux-gcc)
sudo apt-get -q -y install gcc-8
;;
esac
 
-   mkdir --parents "$P4_PATH"
-   pushd "$P4_PATH"
-   wget --quiet "$P4WHENCE/bin.linux26x86_64/p4d"
-   wget --quiet "$P4WHENCE/bin.linux26x86_64/p4"
-   chmod u+x p4d
-   chmod u+x p4
-   popd
mkdir --parents "$GIT_LFS_PATH"
pushd "$GIT_LFS_PATH"
wget --quiet 
"$LFSWHENCE/git-lfs-linux-amd64-$LINUX_GIT_LFS_VERSION.tar.gz"
diff --git a/ci/lib.sh b/ci/lib.sh
index 44db2d5cbb..efcccfee6f 100755
--- a/ci/lib.sh
+++ b/ci/lib.sh
@@ -162,17 +162,15 @@ linux-clang|linux-gcc)
 
export GIT_TEST_HTTPD=YesPlease
 
-   # The Linux build installs the defined dependency versions below.
-   # The OS X build installs much more recent versions, whichever
+   # The Linux build installs the defined dependency version below.
+   # The OS X build installs much more recent version, whichever
# were recorded in the Homebrew database upon creating the OS X
# image.
# Keep that in mind when you encounter a broken OS X build!
-   export LINUX_P4_VERSION="16.2"
export LINUX_GIT_LFS_VERSION="1.5.2"
 
-   P4_PATH="$HOME/custom/p4"
GIT_LFS_PATH="$HOME/custom/git-lfs"
-   export PATH="$GIT_LFS_PATH:$P4_PATH:$PATH"
+   export PATH="$GIT_LFS_PATH:$PATH"
;;
 osx-clang|osx-gcc)
if [ "$jobname" = osx-gcc ]
-- 
2.23.0.331.g4e51dcdf11



[RFC PATCH v2 03/12] dir: fix off-by-one error in match_pathspec_item

2019-09-05 Thread Elijah Newren
For a pathspec like 'foo/bar' comparing against a path named "foo/",
namelen will be 4, and match[namelen] will be 'b'.  The correct location
of the directory separator is namelen-1.

The reason the code worked anyway was that the following code immediately
checked whether the first matchlen characters matched (which they do) and
then bailed and return MATCHED_RECURSIVELY anyway since wildmatch doesn't
have the ability to check if "name" can be matched as a directory (or
prefix) against the pathspec.

Signed-off-by: Elijah Newren 
---
 dir.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/dir.c b/dir.c
index a9168bed96..bf1a74799e 100644
--- a/dir.c
+++ b/dir.c
@@ -356,8 +356,9 @@ static int match_pathspec_item(const struct index_state 
*istate,
/* Perform checks to see if "name" is a super set of the pathspec */
if (flags & DO_MATCH_SUBMODULE) {
/* name is a literal prefix of the pathspec */
+   int offset = name[namelen-1] == '/' ? 1 : 0;
if ((namelen < matchlen) &&
-   (match[namelen] == '/') &&
+   (match[namelen-offset] == '/') &&
!ps_strncmp(item, match, name, namelen))
return MATCHED_RECURSIVELY;
 
-- 
2.22.1.11.g45a39ee867
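
A tiny standalone illustration of the indexing described above; this is
only a sketch with made-up strings, not code from the patch:

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          const char *match = "foo/bar";  /* the pathspec */
          const char *name = "foo/";      /* the path being matched */
          size_t namelen = strlen(name);  /* 4 */

          /* The directory separator sits at namelen-1, not at namelen. */
          printf("match[namelen]   = '%c'\n", match[namelen]);      /* 'b' */
          printf("match[namelen-1] = '%c'\n", match[namelen - 1]);  /* '/' */
          return 0;
  }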



Re: [PATCH 1/1] upload-pack: fix race condition in error messages

2019-09-03 Thread Jeff King
On Fri, Aug 30, 2019 at 02:10:05PM +0200, SZEDER Gábor wrote:

> On Fri, Aug 30, 2019 at 12:06:30AM +0200, SZEDER Gábor wrote:
> > On Thu, Aug 29, 2019 at 11:58:18PM +0200, SZEDER Gábor wrote:
> > > On Thu, Aug 29, 2019 at 10:38:05AM -0400, Jeff King wrote:
> > > > So any fixes there have to happen on the client side. I am still
> > > > confused about why the client is writing in this case, per the argument
> > > > in 014ade7484 (upload-pack: send ERR packet for non-tip objects,
> > > > 2019-04-13). It would be nice to use GIT_TRACE_PACKET to see what it's
> > > > trying to write, but I still haven't been able to reproduce the issue.
> > > 
> > > It's the "done" line:
> > > 
> > >   + cat trace-packet
> [...]
> > >   packet:  upload-pack> 
> > >   packet:fetch> done
> > > 
> > >   In the average successful run that "fetch> done" pkt-line is
> > > immediately after the "fetch> ".

Thanks for all of your persistent digging on this. I had forgotten about
the "done" packet, but it explains all of the symptoms we've seen.

> So instead of immediately die()ing after write_in_full() returned an
> error, fetch should first try to read all incoming packets in the hope
> that the remote did send an ERR packet before it died, and then die
> with the error in that packet, or fall back to the current generic
> error message if there is no ERR packet (e.g. remote segfaulted or
> something similarly horrible).  This fixes the test failure with that
> strategically-placed sleep() in 'fetch-pack.c'.
> 
>   https://travis-ci.org/szeder/git/jobs/578778749#L2689
> 
> Alas, passing a 'reader' to a function called send_request() doesn't
> look quite right, does it...  And I'm not sure about the stateless
> communication, it still uses write_or_die().

And thank you for putting this patch together. I had taken a stab at it
a while ago, but got discouraged by figuring out at which layer to add
the "reader" info (I had envisioned it much lower in packet_write(), but
it is clear from your patch that fetch-pack does most of its own
writing).

I agree passing around the reader is a bit weird; I wonder if we should
be representing the full-duplex connection more clearly as a single
struct. But I suspect that creates other headaches, and what you have
here doesn't look _too_ bad. As you note, it probably doesn't cover all
code paths, but it at least fixes some of them, and gives us a template
for addressing the others.

>   } else {
> - if (write_in_full(fd, buf->buf, buf->len) < 0)
> + if (write_in_full(fd, buf->buf, buf->len) < 0) {
> + int save_errno = errno;
> + /*
> +  * Read everything the remote has sent to us.
> +  * If there is an ERR packet, then the loop die()s
> +  * with the received error message.
> +  * If we reach EOF without seeing an ERR, then die()
> +  * with a generic error message, most likely "Broken
> +  * pipe".
> +  */
> + while (packet_reader_read(reader) != PACKET_READ_EOF);
> + errno = save_errno;
>   die_errno(_("unable to write to remote"));
> + }

One unfortunate thing here is that we could block indefinitely in
packet_reader_read(). That shouldn't happen, I don't think, but since
this is an error case where we've been cut off, anything's possible.

We maybe could get away with using non-blocking I/O. We're looking for
an ERR packet the other side sent us _before_ it hung up, so in theory
we'd have received the data before any FIN packet (or EOF on a pipe).
But I'm wary of introducing new races there.

It might be enough to put in an actual timer, waiting for an ERR packet,
EOF, or something like 5 seconds. Or maybe I'm just being overly
paranoid.

-Peff
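
A minimal sketch of the timer idea above, assuming POSIX poll(2); the
helper name and the 5-second limit are illustrative only and not part
of any patch in this thread:

  #include <poll.h>

  /*
   * Wait up to five seconds for the remote to send something (an ERR
   * packet or EOF) before we start reading; returns 0 when data or EOF
   * is ready, -1 on timeout or poll error.
   */
  static int wait_for_remote(int fd)
  {
          struct pollfd pfd = { .fd = fd, .events = POLLIN };

          return poll(&pfd, 1, 5000) > 0 ? 0 : -1;
  }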


Re: [PATCH 1/1] upload-pack: fix race condition in error messages

2019-08-30 Thread SZEDER Gábor
On Fri, Aug 30, 2019 at 12:06:30AM +0200, SZEDER Gábor wrote:
> On Thu, Aug 29, 2019 at 11:58:18PM +0200, SZEDER Gábor wrote:
> > On Thu, Aug 29, 2019 at 10:38:05AM -0400, Jeff King wrote:
> > > So any fixes there have to happen on the client side. I am still
> > > confused about why the client is writing in this case, per the argument
> > > in 014ade7484 (upload-pack: send ERR packet for non-tip objects,
> > > 2019-04-13). It would be nice to use GIT_TRACE_PACKET to see what it's
> > > trying to write, but I still haven't been able to reproduce the issue.
> > 
> > It's the "done" line:
> > 
> >   + cat trace-packet
[...]
> >   packet:  upload-pack> 
> >   packet:fetch> done
> > 
> >   In the average successful run that "fetch> done" pkt-line is
> > immediately after the "fetch> ".

So instead of immediately die()ing after write_in_full() returned an
error, fetch should first try to read all incoming packets in the hope
that the remote did send an ERR packet before it died, and then die
with the error in that packet, or fall back to the current generic
error message if there is no ERR packet (e.g. remote segfaulted or
something similarly horrible).  This fixes the test failure with that
strategically-placed sleep() in 'fetch-pack.c'.

  https://travis-ci.org/szeder/git/jobs/578778749#L2689

Alas, passing a 'reader' to a function called send_request() doesn't
look quite right, does it...  And I'm not sure about the stateless
communication, it still uses write_or_die().


diff --git a/fetch-pack.c b/fetch-pack.c
index e18864458b..773d9c7904 100644
--- a/fetch-pack.c
+++ b/fetch-pack.c
@@ -186,14 +186,27 @@ static enum ack_type get_ack(struct packet_reader *reader,
 }
 
 static void send_request(struct fetch_pack_args *args,
-int fd, struct strbuf *buf)
+int fd, struct strbuf *buf,
+struct packet_reader *reader)
 {
if (args->stateless_rpc) {
send_sideband(fd, -1, buf->buf, buf->len, LARGE_PACKET_MAX);
packet_flush(fd);
} else {
-   if (write_in_full(fd, buf->buf, buf->len) < 0)
+   if (write_in_full(fd, buf->buf, buf->len) < 0) {
+   int save_errno = errno;
+   /*
+* Read everything the remote has sent to us.
+* If there is an ERR packet, then the loop die()s
+* with the received error message.
+* If we reach EOF without seeing an ERR, then die()
+* with a generic error message, most likely "Broken
+* pipe".
+*/
+   while (packet_reader_read(reader) != PACKET_READ_EOF);
+   errno = save_errno;
die_errno(_("unable to write to remote"));
+   }
}
 }
 
@@ -353,7 +366,7 @@ static int find_common(struct fetch_negotiator *negotiator,
const char *arg;
struct object_id oid;
 
-   send_request(args, fd[1], &req_buf);
+   send_request(args, fd[1], &req_buf, &reader);
while (packet_reader_read(&reader) == PACKET_READ_NORMAL) {
if (skip_prefix(reader.line, "shallow ", &arg)) {
if (get_oid_hex(arg, &oid))
@@ -376,7 +389,7 @@ static int find_common(struct fetch_negotiator *negotiator,
die(_("expected shallow/unshallow, got %s"), 
reader.line);
}
} else if (!args->stateless_rpc)
-   send_request(args, fd[1], &req_buf);
+   send_request(args, fd[1], &req_buf, &reader);
 
if (!args->stateless_rpc) {
/* If we aren't using the stateless-rpc interface
@@ -398,7 +411,7 @@ static int find_common(struct fetch_negotiator *negotiator,
int ack;
 
packet_buf_flush(&req_buf);
-   send_request(args, fd[1], &req_buf);
+   send_request(args, fd[1], &req_buf, &reader);
strbuf_setlen(&req_buf, state_len);
flushes++;
flush_at = next_flush(args->stateless_rpc, count);
@@ -473,7 +486,7 @@ static int find_common(struct fetch_negotiator *negotiator,
if (!got_ready || !no_done) {
sleep(1);
packet_buf_write(&req_buf, "done\n");
-   send_request(args, fd[1], &req_buf);
+   send_request(args, fd[1], &req_buf, &reader);
}
print_verbose(args, _("done"));
if (retval != 0) {



Re: [PATCH 1/1] upload-pack: fix race condition in error messages

2019-08-29 Thread SZEDER Gábor
On Thu, Aug 29, 2019 at 11:58:18PM +0200, SZEDER Gábor wrote:
> On Thu, Aug 29, 2019 at 10:38:05AM -0400, Jeff King wrote:
> > So any fixes there have to happen on the client side. I am still
> > confused about why the client is writing in this case, per the argument
> > in 014ade7484 (upload-pack: send ERR packet for non-tip objects,
> > 2019-04-13). It would be nice to use GIT_TRACE_PACKET to see what it's
> > trying to write, but I still haven't been able to reproduce the issue.
> 
> It's the "done" line:
> 
>   + cat trace-packet
>   packet:  upload-pack> 5e26403b4485d7a44fd8b65dc3f71e21c0dd6fad 
> HEAD\0multi_ack thin-pack side-band side-band-64k ofs-delta shallow 
> deepen-since deepen-not deepen-relative no-progress include-tag 
> multi_ack_detailed allow-tip-sha1-in-want allow-reachable-sha1-in-want 
> symref=HEAD:refs/heads/master agent=git/2.23.0.40.g4d780d4382.dirty
>   packet:  upload-pack> 5e26403b4485d7a44fd8b65dc3f71e21c0dd6fad 
> refs/heads/master
>   packet:  upload-pack> 
>   packet:fetch< 5e26403b4485d7a44fd8b65dc3f71e21c0dd6fad 
> HEAD\0multi_ack thin-pack side-band side-band-64k ofs-delta shallow 
> deepen-since deepen-not deepen-relative no-progress include-tag 
> multi_ack_detailed allow-tip-sha1-in-want allow-reachable-sha1-in-want 
> symref=HEAD:refs/heads/master agent=git/2.23.0.40.g4d780d4382.dirty
>   packet:fetch< 5e26403b4485d7a44fd8b65dc3f71e21c0dd6fad 
> refs/heads/master
>   packet:fetch< 
>   packet:fetch> want 64ea4c133d59fa98e86a771eda009872d6ab2886 
> multi_ack_detailed side-band-64k thin-pack no-progress ofs-delta deepen-since 
> deepen-not agent=git/2.23.0.40.g4d780d4382.dirty
>   packet:fetch> 
>   packet:  upload-pack< want 64ea4c133d59fa98e86a771eda009872d6ab2886 
> multi_ack_detailed side-band-64k thin-pack no-progress ofs-delta deepen-since 
> deepen-not agent=git/2.23.0.40.g4d780d4382.dirty
>   packet:  upload-pack< 
>   packet:  upload-pack> ERR upload-pack: not our ref 
> 64ea4c133d59fa98e86a771eda009872d6ab2886
>   packet:  upload-pack> 
>   packet:fetch> done
> 
> In the average successful run that "fetch> done" pkt-line is
> immediately after the "fetch> ".
> 
> The above is on my Linux box; here is a failure on macOS, essentially
> the same:
> 
>   https://travis-ci.org/szeder/git/jobs/578555606#L3453
> 

Try this patch to reliably reproduce this failure:

diff --git a/fetch-pack.c b/fetch-pack.c
index 65be043f2a..e18864458b 100644
--- a/fetch-pack.c
+++ b/fetch-pack.c
@@ -471,6 +471,7 @@ static int find_common(struct fetch_negotiator *negotiator,
}
 done:
if (!got_ready || !no_done) {
+   sleep(1);
packet_buf_write(&req_buf, "done\n");
send_request(args, fd[1], &req_buf);
}
diff --git a/t/t5516-fetch-push.sh b/t/t5516-fetch-push.sh
index c81ca360ac..ebecc81c89 100755
--- a/t/t5516-fetch-push.sh
+++ b/t/t5516-fetch-push.sh
@@ -1252,7 +1252,9 @@ do
git fetch ../testrepo/.git $SHA1_2 &&
git cat-file commit $SHA1_2 &&
test_must_fail env GIT_TEST_PROTOCOL_VERSION= \
+   GIT_TRACE_PACKET="$(pwd)/trace-packet" \
git fetch ../testrepo/.git $SHA1_3 2>err &&
+   cat trace-packet &&
test_i18ngrep "remote error:.*not our ref.*$SHA1_3\$" 
err
)
'



Re: [PATCH 1/1] upload-pack: fix race condition in error messages

2019-08-29 Thread SZEDER Gábor
On Thu, Aug 29, 2019 at 10:38:05AM -0400, Jeff King wrote:
> So any fixes there have to happen on the client side. I am still
> confused about why the client is writing in this case, per the argument
> in 014ade7484 (upload-pack: send ERR packet for non-tip objects,
> 2019-04-13). It would be nice to use GIT_TRACE_PACKET to see what it's
> trying to write, but I still haven't been able to reproduce the issue.

It's the "done" line:

  + cat trace-packet
  packet:  upload-pack> 5e26403b4485d7a44fd8b65dc3f71e21c0dd6fad 
HEAD\0multi_ack thin-pack side-band side-band-64k ofs-delta shallow 
deepen-since deepen-not deepen-relative no-progress include-tag 
multi_ack_detailed allow-tip-sha1-in-want allow-reachable-sha1-in-want 
symref=HEAD:refs/heads/master agent=git/2.23.0.40.g4d780d4382.dirty
  packet:  upload-pack> 5e26403b4485d7a44fd8b65dc3f71e21c0dd6fad 
refs/heads/master
  packet:  upload-pack> 
  packet:fetch< 5e26403b4485d7a44fd8b65dc3f71e21c0dd6fad 
HEAD\0multi_ack thin-pack side-band side-band-64k ofs-delta shallow 
deepen-since deepen-not deepen-relative no-progress include-tag 
multi_ack_detailed allow-tip-sha1-in-want allow-reachable-sha1-in-want 
symref=HEAD:refs/heads/master agent=git/2.23.0.40.g4d780d4382.dirty
  packet:fetch< 5e26403b4485d7a44fd8b65dc3f71e21c0dd6fad 
refs/heads/master
  packet:fetch< 
  packet:fetch> want 64ea4c133d59fa98e86a771eda009872d6ab2886 
multi_ack_detailed side-band-64k thin-pack no-progress ofs-delta deepen-since 
deepen-not agent=git/2.23.0.40.g4d780d4382.dirty
  packet:fetch> 
  packet:  upload-pack< want 64ea4c133d59fa98e86a771eda009872d6ab2886 
multi_ack_detailed side-band-64k thin-pack no-progress ofs-delta deepen-since 
deepen-not agent=git/2.23.0.40.g4d780d4382.dirty
  packet:  upload-pack< 
  packet:  upload-pack> ERR upload-pack: not our ref 
64ea4c133d59fa98e86a771eda009872d6ab2886
  packet:  upload-pack> 
  packet:fetch> done

In the average successful run that "fetch> done" pkt-line is
immediately after the "fetch> ".

The above is on my Linux box; here is a failure on macOS, essentially
the same:

  https://travis-ci.org/szeder/git/jobs/578555606#L3453




Re: error: cannot cherry-pick during a revert

2019-08-29 Thread Phillip Wood

Hi Mike

On 29/08/2019 00:25, Mike Hommey wrote:

> Hi,
> 
> This just happened to me while cherry-pick'ing:
> 
> $ git cherry-pick HEAD@{1}
> error: could not apply 614fe5e629b84... try
> hint: after resolving the conflicts, mark the corrected paths
> hint: with 'git add ' or 'git rm '
> hint: and commit the result with 'git commit'
> Recorded preimage for 'taskcluster/ci/build/linux.yml'
> 
> (... this is where I fix my conflict ...)
> 
> $ git add -u
> $ git cherry-pick --continue
> error: cannot cherry-pick during a revert.
> fatal: cherry-pick failed


Oh dear that's not good


> So apparently, cherry-pick thinks it was doing a revert when it hit a
> conflict?
> 
> (This is with git 2.23)


I wondered if this was due to some of the recent changes adding --skip 
to cherry-pick and revert but I can't see anything obvious at the 
moment. To get that error the sequencer has loaded a todo file (in 
read_populate_todo()) which starts with a revert command. Is it possible 
you were reverting a sequence of commits before you ran the cherry-pick? 
(a single pick or revert does not create a todo list). It could be that 
there was an old todo list left over from a while ago - historically the 
sequencer hasn't been all that good at cleaning up after itself if the 
user committed the final pick or revert with 'git commit' and forgot to 
run 'cherry-pick/revert --continue' afterwards.


Best Wishes

Phillip


> Mike



Re: [PATCH 1/1] upload-pack: fix race condition in error messages

2019-08-29 Thread Jeff King
On Thu, Aug 29, 2019 at 10:27:16AM -0400, Derrick Stolee wrote:

> > I don't think we should need such a call. For one thing, if it were
> > necessary, that would mean we're not writing out the packet at all. But
> > your whole problem is that we're writing the message twice, one of which
> > comes from the packet.
> 
> The problem the flush() was trying to solve was the new "Broken pipe" error,
> which I had assumed was due to a communication race. (Looking at the message
> more closely now, I see that Szeder was able to repro this broken pipe both
> with and without my change. I am still unable to repro the broken pipe.)

I think the broken pipe is coming the _other_ way. We do send the packet
from the server to the client, but since the client is still writing
when the server has hung up, we get a write error instead of seeing the
error packet.

So any fixes there have to happen on the client side. I am still
confused about why the client is writing in this case, per the argument
in 014ade7484 (upload-pack: send ERR packet for non-tip objects,
2019-04-13). It would be nice to use GIT_TRACE_PACKET to see what it's
trying to write, but I still haven't been able to reproduce the issue.

-Peff


Re: [PATCH 1/1] upload-pack: fix race condition in error messages

2019-08-29 Thread Derrick Stolee
On 8/29/2019 10:13 AM, Jeff King wrote:
> On Thu, Aug 29, 2019 at 08:58:55AM -0400, Derrick Stolee wrote:
> 
>> However, I do have a theory: the process exits before flushing the
>> packet line. Adding this line before exit(1) should fix it:
>>
>>  packet_writer_flush(writer);
>>
>> I can send this in a v2, but it would be nice if you could test this
>> in your environment that already demonstrated the failure.
> 
> I don't think we should need such a call. For one thing, if it were
> necessary, that would mean we're not writing out the packet at all. But
> your whole problem is that we're writing the message twice, one of which
> comes from the packet.

The problem the flush() was trying to solve was the new "Broken pipe" error,
which I had assumed was due to a communication race. (Looking at the message
more closely now, I see that Szeder was able to repro this broken pipe both
with and without my change. I am still unable to repro the broken pipe.)

> Second is that this is not "flush the output stream", but "write a flush
> packet". The packet_writer_error() function immediately calls write()
> without buffering. And no matter where we are in the conversation, a
> flush packet would not be necessary, because the error packet we send
> would be interpreted immediately by the client as aborting the
> connection.

This clearly shows that my proposed solution is absolutely wrong.

Thanks,
-Stolee



Re: [PATCH 1/1] upload-pack: fix race condition in error messages

2019-08-29 Thread Jeff King
On Thu, Aug 29, 2019 at 08:58:55AM -0400, Derrick Stolee wrote:

> However, I do have a theory: the process exits before flushing the
> packet line. Adding this line before exit(1) should fix it:
> 
>   packet_writer_flush(writer);
> 
> I can send this in a v2, but it would be nice if you could test this
> in your environment that already demonstrated the failure.

I don't think we should need such a call. For one thing, if it were
necessary, that would mean we're not writing out the packet at all. But
your whole problem is that we're writing the message twice, one of which
comes from the packet.

Second is that this is not "flush the output stream", but "write a flush
packet". The packet_writer_error() function immediately calls write()
without buffering. And no matter where we are in the conversation, a
flush packet would not be necessary, because the error packet we send
would be interpreted immediately by the client as aborting the
connection.

-Peff


Re: [PATCH 1/1] upload-pack: fix race condition in error messages

2019-08-29 Thread Derrick Stolee
On 8/28/2019 12:15 PM, SZEDER Gábor wrote:
> On Wed, Aug 28, 2019 at 11:39:44AM -0400, Jeff King wrote:
>> On Wed, Aug 28, 2019 at 10:54:12AM -0400, Jeff King wrote:
>>
>>>> Unfortunately, however, while running './t5516-fetch-push.sh -r 1,79
>>>> --stress' to try to reproduce a failure caused by those mingled
>>>> messages, the same check only failed for a different reason so far
>>>> (both on Linux and macOS (on Travis CI)):
>>>
>>> There's some hand-waving argument that this should be race-free in
>>> 014ade7484 (upload-pack: send ERR packet for non-tip objects,
>>> 2019-04-13), but I am not too surprised if there is a flaw in that
>>> logic.
>>
>> By the way, I've not been able to reproduce this locally after ~10
>> minutes of running "./t5516-fetch-push.sh -r 1,79 --stress" on my Linux
>> box. I wonder what's different.
>>
>> Are you running the tip of master?
> 
> Yeah, but this seems to be one of those "you have to be really lucky,
> even with --stress" cases.
> 
> So...  I was away from keyboard for over an hour and let it run on
> 'master', but it didn't fail.  Then I figured that I give it a try
> with Derrick's patch, because, well, why not, and then I got this
> broken pipe error in ~150 repetitions.  Run it again, same error after
> ~200 reps.  However, I didn't understand how that patch could lead to
> broken pipe, so went back to stressing master...  nothing.  So I
> started writing the reply to that patch saying that it seems to cause
> some racy failures on Linux, and was already proofreading before
> sending when the damn thing finally did fail.  Oh, well.
> 
> Then tried it on macOS, and it failed fairly quickly.  For lack of
> better options I used Travis CI's debug shell to access a mac VM, and
> could reproduce the failure both with and without the patch before it
> timed out.

I'm running these tests under --stress now, but not seeing the error
you saw.

However, I do have a theory: the process exits before flushing the
packet line. Adding this line before exit(1) should fix it:

packet_writer_flush(writer);

I can send this in a v2, but it would be nice if you could test this
in your environment that already demonstrated the failure.

Thanks,
-Stolee



error: cannot cherry-pick during a revert

2019-08-28 Thread Mike Hommey
Hi,

This just happened to me while cherry-pick'ing:

$ git cherry-pick HEAD@{1}
error: could not apply 614fe5e629b84... try
hint: after resolving the conflicts, mark the corrected paths
hint: with 'git add ' or 'git rm '
hint: and commit the result with 'git commit'
Recorded preimage for 'taskcluster/ci/build/linux.yml'

(... this is where I fix my conflict ...)

$ git add -u
$ git cherry-pick --continue 
error: cannot cherry-pick during a revert.
fatal: cherry-pick failed

So apparently, cherry-pick thinks it was doing a revert when it hit a
conflict?

(This is with git 2.23)

Mike


Re: [PATCH 1/1] upload-pack: fix race condition in error messages

2019-08-28 Thread SZEDER Gábor
On Wed, Aug 28, 2019 at 11:39:44AM -0400, Jeff King wrote:
> On Wed, Aug 28, 2019 at 10:54:12AM -0400, Jeff King wrote:
> 
> > > Unfortunately, however, while running './t5516-fetch-push.sh -r 1,79
> > > --stress' to try to reproduce a failure caused by those mingled
> > > messages, the same check only failed for a different reason so far
> > > (both on Linux and macOS (on Travis CI)):
> > 
> > There's some hand-waving argument that this should be race-free in
> > 014ade7484 (upload-pack: send ERR packet for non-tip objects,
> > 2019-04-13), but I am not too surprised if there is a flaw in that
> > logic.
> 
> By the way, I've not been able to reproduce this locally after ~10
> minutes of running "./t5516-fetch-push.sh -r 1,79 --stress" on my Linux
> box. I wonder what's different.
> 
> Are you running the tip of master?

Yeah, but this seems to be one of those "you have to be really lucky,
even with --stress" cases.

So...  I was away from keyboard for over an hour and let it run on
'master', but it didn't fail.  Then I figured that I give it a try
with Derrick's patch, because, well, why not, and then I got this
broken pipe error in ~150 repetitions.  Run it again, same error after
~200 reps.  However, I didn't understand how that patch could lead to
broken pipe, so went back to stressing master...  nothing.  So I
started writing the reply to that patch saying that it seems to cause
some racy failures on Linux, and was already proofreading before
sending when the damn thing finally did fail.  Oh, well.

Then tried it on macOS, and it failed fairly quickly.  For lack of
better options I used Travis CI's debug shell to access a mac VM, and
could reproduce the failure both with and without the patch before it
timed out.



Re: [PATCH 1/1] upload-pack: fix race condition in error messages

2019-08-28 Thread Jeff King
On Wed, Aug 28, 2019 at 10:54:12AM -0400, Jeff King wrote:

> > Unfortunately, however, while running './t5516-fetch-push.sh -r 1,79
> > --stress' to try to reproduce a failure caused by those mingled
> > messages, the same check only failed for a different reason so far
> > (both on Linux and macOS (on Travis CI)):
> 
> There's some hand-waving argument that this should be race-free in
> 014ade7484 (upload-pack: send ERR packet for non-tip objects,
> 2019-04-13), but I am not too surprised if there is a flaw in that
> logic.

By the way, I've not been able to reproduce this locally after ~10
minutes of running "./t5516-fetch-push.sh -r 1,79 --stress" on my Linux
box. I wonder what's different.

Are you running the tip of master?

-Peff


Re: [PATCH 1/1] upload-pack: fix race condition in error messages

2019-08-28 Thread Jeff King
On Wed, Aug 28, 2019 at 11:34:08AM +0200, SZEDER Gábor wrote:

> On Tue, Aug 27, 2019 at 06:43:29PM -0700, Derrick Stolee via GitGitGadget 
> wrote:
> > Test t5516-fetch-push.sh has a test 'deny fetch unreachable SHA1,
> > allowtipsha1inwant=true' that checks stderr for a specific error
> > string from the remote. In some build environments the error sent
> > over the remote connection gets mingled with the error from the
> > die() statement. Since both signals are being output to the same
> > file descriptor (but from parent and child processes), the output
> > we are matching with grep gets split.
> 
> In the spirit of "An error message is worth a thousand words", I think
> it's worth including the error message causing the failure:
> 
>   error: 'grep not our ref.*64ea4c133d59fa98e86a771eda009872d6ab2886 err' 
> didn't find a match in:
>   fatal: git upload-pack: not our ref 64ea4c13fatal: remote error: 
> upload-pack: not our ref 63d59fa98e86a771eda009872d6ab2886
>   4ea4c133d59fa98e86a771eda009872d6ab2886
>   error: last command exited with $?=1
> 
> I tried to reproduce this specific error on Linux and macOS, but
> couldn't yet.

I suspect it depends on write() to a file not being atomic, since we
should be able to get the full message out to a single write in both
cases. It's _possible_ that stderr is fully buffered on Windows, but it
generally shouldn't be. If it is, then this might help:

diff --git a/usage.c b/usage.c
index 2fdb20086b..d6df31bc5b 100644
--- a/usage.c
+++ b/usage.c
@@ -10,13 +10,19 @@ void vreportf(const char *prefix, const char *err, va_list 
params)
 {
char msg[4096];
char *p;
-
-   vsnprintf(msg, sizeof(msg), err, params);
+   size_t len;
+
+   /* truncation is OK here, but make sure we have a newline */
+   len = xsnprintf(msg, sizeof(msg), "%s", prefix);
+   len += vsnprintf(msg + len, sizeof(msg) - len, err, params);
+   if (len >= sizeof(msg) - 1)
+   len--;
+   len += xsnprintf(msg + len, sizeof(msg) - len, "\n");
for (p = msg; *p; p++) {
if (iscntrl(*p) && *p != '\t' && *p != '\n')
*p = '?';
}
-   fprintf(stderr, "%s%s\n", prefix, msg);
+   write(2, msg, len);
 }
 
 static NORETURN void usage_builtin(const char *err, va_list params)

But again, I'm doubtful.

> Here you talk about reducing the risk ...
> 
> > 1. Write an error message to stderr.
> > 2. Write an error message across the connection.
> > 3. exit(1).
> > 
> > This reorders the events so the error is written entirely before
> > the client receives a message from the remote, removing the race
> > condition.
> 
> ... but here you talk about removing the race condition.
> 
> I don't understand how this change would remove the race condition.
> After all, the occasional failure is caused by two messages racing
> through different file descriptors, and merely sending them in reverse
> order doesn't change that they would still be racing.

I had the same puzzlement. I think the answer might be that in the
original, we can have two write()s happening simultaneously:

  1. upload-pack sends the packet to the client

  2. client receives packet

  3a. upload-pack then writes to stderr

  3b. simultaneously, the client writes to stderr

But by reordering, step 3a is completed before step 1.

> > +   warning("git upload-pack: not our ref %s",
> > +   oid_to_hex(&o->oid));
> > packet_writer_error(writer,
> > "upload-pack: not our ref %s",
> > oid_to_hex(&o->oid));
> > -   die("git upload-pack: not our ref %s",
> > -   oid_to_hex(&o->oid));
> > +   exit(1);
> 
> 
> So, the error coming from the 'git fetch' command in question
> currently looks like this:
> 
>   fatal: git upload-pack: not our ref 64ea4c133d59fa98e86a771eda009872d6ab2886
>   fatal: remote error: upload-pack: not our ref 
> 64ea4c133d59fa98e86a771eda009872d6ab2886
> 
> Changing die() to a warning() changes the prefix of the message from
> "fatal:" to "warning:", i.e. with this patch I got this:
> 
>   warning: git upload-pack: not our ref 
> 64ea4c133d59fa98e86a771eda009872d6ab2886
>   fatal: remote error: upload-pack: not our ref 
> 64ea4c133d59fa98e86a771eda009872d6ab2886
> 
> I don't think that "demoting" that message from fatal to warning
> matter

Re: [PATCH 1/1] upload-pack: fix race condition in error messages

2019-08-28 Thread SZEDER Gábor
On Tue, Aug 27, 2019 at 06:43:29PM -0700, Derrick Stolee via GitGitGadget wrote:
> Test t5516-fetch-push.sh has a test 'deny fetch unreachable SHA1,
> allowtipsha1inwant=true' that checks stderr for a specific error
> string from the remote. In some build environments the error sent
> over the remote connection gets mingled with the error from the
> die() statement. Since both signals are being output to the same
> file descriptor (but from parent and child processes), the output
> we are matching with grep gets split.

In the spirit of "An error message is worth a thousand words", I think
it's worth including the error message causing the failure:

  error: 'grep not our ref.*64ea4c133d59fa98e86a771eda009872d6ab2886 err' 
didn't find a match in:
  fatal: git upload-pack: not our ref 64ea4c13fatal: remote error: upload-pack: 
not our ref 63d59fa98e86a771eda009872d6ab2886
  4ea4c133d59fa98e86a771eda009872d6ab2886
  error: last command exited with $?=1

I tried to reproduce this specific error on Linux and macOS, but
couldn't yet.

> To reduce the risk of this failure, follow this process instead:

Here you talk about reducing the risk ...

> 1. Write an error message to stderr.
> 2. Write an error message across the connection.
> 3. exit(1).
> 
> This reorders the events so the error is written entirely before
> the client receives a message from the remote, removing the race
> condition.

... but here you talk about removing the race condition.

I don't understand how this change would remove the race condition.
After all, the occasional failure is caused by two messages racing
through different file descriptors, and merely sending them in reverse
order doesn't change that they would still be racing.

> Signed-off-by: Derrick Stolee 
> ---
>  upload-pack.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/upload-pack.c b/upload-pack.c
> index 222cd3ad89..b0d3e028d1 100644
> --- a/upload-pack.c
> +++ b/upload-pack.c
> @@ -613,11 +613,12 @@ static void check_non_tip(struct object_array *want_obj,
>   for (i = 0; i < want_obj->nr; i++) {
>   struct object *o = want_obj->objects[i].item;
>   if (!is_our_ref(o)) {
> + warning("git upload-pack: not our ref %s",
> + oid_to_hex(&o->oid));
>   packet_writer_error(writer,
>   "upload-pack: not our ref %s",
>       oid_to_hex(&o->oid));
> - die("git upload-pack: not our ref %s",
> - oid_to_hex(&o->oid));
> + exit(1);


So, the error coming from the 'git fetch' command in question
currently looks like this:

  fatal: git upload-pack: not our ref 64ea4c133d59fa98e86a771eda009872d6ab2886
  fatal: remote error: upload-pack: not our ref 
64ea4c133d59fa98e86a771eda009872d6ab2886

Changing die() to a warning() changes the prefix of the message from
"fatal:" to "warning:", i.e. with this patch I got this:

  warning: git upload-pack: not our ref 64ea4c133d59fa98e86a771eda009872d6ab2886
  fatal: remote error: upload-pack: not our ref 
64ea4c133d59fa98e86a771eda009872d6ab2886

I don't think that "demoting" that message from fatal to warning
matters much to users, because they would see the (arguably redundant)
second line starting with "fatal:".

As for the problematic test, it checks this error with:

  test_i18ngrep "remote error:.*not our ref.*$SHA1_3\$" err

so changing that prefix shouldn't affect the test, either.


Unfortunately, however, while running './t5516-fetch-push.sh -r 1,79
--stress' to try to reproduce a failure caused by those mingled
messages, the same check only failed for a different reason so far
(both on Linux and macOS (on Travis CI)):

  error: 'grep remote error:.*not our 
ref.*64ea4c133d59fa98e86a771eda009872d6ab2886$ err' didn't find a match in:
  fatal: git upload-pack: not our ref 64ea4c133d59fa98e86a771eda009872d6ab2886
  fatal: unable to write to remote: Broken pipe
  error: last command exited with $?=1

And with this patch:

  error: 'grep remote error:.*not our 
ref.*64ea4c133d59fa98e86a771eda009872d6ab2886$ err' didn't find a match in:
  warning: git upload-pack: not our ref 64ea4c133d59fa98e86a771eda009872d6ab2886
  fatal: unable to write to remote: Broken pipe
  error: last command exited with $?=1

We could make the test pass by relaxing the 'grep' pattern to look
only for 'not our ref.*', but I doubt that ignoring a broken
pipe is such a good idea.



[PATCH 1/1] upload-pack: fix race condition in error messages

2019-08-27 Thread Derrick Stolee via GitGitGadget
From: Derrick Stolee 

Test t5516-fetch-push.sh has a test 'deny fetch unreachable SHA1,
allowtipsha1inwant=true' that checks stderr for a specific error
string from the remote. In some build environments the error sent
over the remote connection gets mingled with the error from the
die() statement. Since both signals are being output to the same
file descriptor (but from parent and child processes), the output
we are matching with grep gets split.

To reduce the risk of this failure, follow this process instead:

1. Write an error message to stderr.
2. Write an error message across the connection.
3. exit(1).

This reorders the events so the error is written entirely before
the client receives a message from the remote, removing the race
condition.

Signed-off-by: Derrick Stolee 
---
 upload-pack.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/upload-pack.c b/upload-pack.c
index 222cd3ad89..b0d3e028d1 100644
--- a/upload-pack.c
+++ b/upload-pack.c
@@ -613,11 +613,12 @@ static void check_non_tip(struct object_array *want_obj,
for (i = 0; i < want_obj->nr; i++) {
struct object *o = want_obj->objects[i].item;
if (!is_our_ref(o)) {
+   warning("git upload-pack: not our ref %s",
+   oid_to_hex(&o->oid));
packet_writer_error(writer,
"upload-pack: not our ref %s",
oid_to_hex(&o->oid));
-   die("git upload-pack: not our ref %s",
-   oid_to_hex(&o->oid));
+   exit(1);
}
}
 }
-- 
gitgitgadget


Re: [PATCH] get_next_submodule(): format error string as an error

2019-08-14 Thread Junio C Hamano
Jeff King  writes:

> On Wed, Aug 14, 2019 at 09:57:50AM +, Paolo Pettinato (ppettina) wrote:
>
>> Could not access submodule 'sm' # fails, plus no newline here :P!
>
> This part seems easy enough to fix.

Indeed, it is ;-)

> diff --git a/submodule.c b/submodule.c
> index 0f199c5137..a5ba57ac36 100644
> --- a/submodule.c
> +++ b/submodule.c
> @@ -1478,7 +1478,7 @@ static int get_next_submodule(struct child_process *cp,
>   !is_empty_dir(ce->name)) {
>   spf->result = 1;
>   strbuf_addf(err,
> - _("Could not access submodule 
> '%s'"),
> + _("error: could not access 
> submodule '%s'\n"),
>   ce->name);
>   }
>   }


[PATCH] get_next_submodule(): format error string as an error

2019-08-14 Thread Jeff King
On Wed, Aug 14, 2019 at 09:57:50AM +, Paolo Pettinato (ppettina) wrote:

> Could not access submodule 'sm' # fails, plus no newline here :P!

This part seems easy enough to fix.

-- >8 --
Subject: get_next_submodule(): format error string as an error

The run_processes_parallel() interface passes its callback functions an
"err" strbuf in which they can accumulate errors. However, this differs
from our usual "err" strbufs in that the result is not simply passed to
error(), like:

  if (frob_repo(&err) < 0)
error("frobnication failed: %s", err.buf);

Instead, we append the error buffer as-is to a buffer collecting the
sub-process stderr, adding neither a prefix nor a trailing newline. This
gives callbacks more flexibility (e.g., get_next_submodule() adds its
own "Fetching submodule foo" informational lines), but it means they're
also responsible for formatting any errors themselves.

We forgot to do so in the single error message in get_next_submodule(),
meaning that it was output without a trailing newline. While we're
fixing that, let's also give it the usual "error:" prefix and downcase
the start of the message. We can't use error() here, because it always
outputs directly to stderr.

Looking at other users of run_processes_parallel(), there are a few
similar messages in update_clone_task_finished(). But those sites do
correctly add a newline (they don't use an "error" prefix, but it
doesn't make as much sense there).

Signed-off-by: Jeff King 
---
 submodule.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/submodule.c b/submodule.c
index 0f199c5137..a5ba57ac36 100644
--- a/submodule.c
+++ b/submodule.c
@@ -1478,7 +1478,7 @@ static int get_next_submodule(struct child_process *cp,
!is_empty_dir(ce->name)) {
spf->result = 1;
strbuf_addf(err,
-   _("Could not access submodule 
'%s'"),
+   _("error: could not access 
submodule '%s'\n"),
ce->name);
}
}
-- 
2.23.0.rc2.479.gbd16c8906f



Re: [PATCH] worktree remove: clarify error message on dirty worktree

2019-08-13 Thread Junio C Hamano
Eric Sunshine  writes:

> On Tue, Aug 13, 2019 at 2:04 PM SZEDER Gábor  wrote:
>> To avoid data loss, 'git worktree remove' refuses to delete a worktree
>> if it's dirty or contains untracked files.  However, the error message
>> only mentions that the worktree "is dirty", even if the worktree in
>> question is in fact clean, but contains untracked files:
>> [...]
>> Clarify this error message to say that the worktree "contains modified
>> or untracked files".
>>
>> Signed-off-by: SZEDER Gábor 
>> ---
>> diff --git a/builtin/worktree.c b/builtin/worktree.c
>> @@ -880,7 +880,7 @@ static void check_clean_worktree(struct worktree *wt,
>> ret = xread(cp.out, buf, sizeof(buf));
>> if (ret)
>> -   die(_("'%s' is dirty, use --force to delete it"),
>> +   die(_("'%s' contains modified or untracked files, use 
>> --force to delete it"),
>> original_path);
>
> Makes sense. This is a different type of "dirtiness" than, say, "git
> rebase --interactive" which cares about unstaged changes but generally
> doesn't mind untracked files. So, it deserves an error message which
> mentions untracked files explicitly.
>
> We could actually parse the output of "git status --porcelain" (which
> is invoked just above this spot) and provide a more specific error
> message ("...contains modified files" or "...contains untracked
> files") but that's probably not worth the effort.
>
> Anyhow, for what it's worth:
> Reviewed-by: Eric Sunshine 

Thanks, both.


Re: [PATCH] worktree remove: clarify error message on dirty worktree

2019-08-13 Thread Eric Sunshine
On Tue, Aug 13, 2019 at 2:04 PM SZEDER Gábor  wrote:
> To avoid data loss, 'git worktree remove' refuses to delete a worktree
> if it's dirty or contains untracked files.  However, the error message
> only mentions that the worktree "is dirty", even if the worktree in
> question is in fact clean, but contains untracked files:
> [...]
> Clarify this error message to say that the worktree "contains modified
> or untracked files".
>
> Signed-off-by: SZEDER Gábor 
> ---
> diff --git a/builtin/worktree.c b/builtin/worktree.c
> @@ -880,7 +880,7 @@ static void check_clean_worktree(struct worktree *wt,
> ret = xread(cp.out, buf, sizeof(buf));
> if (ret)
> -   die(_("'%s' is dirty, use --force to delete it"),
> +   die(_("'%s' contains modified or untracked files, use --force 
> to delete it"),
> original_path);

Makes sense. This is a different type of "dirtiness" than, say, "git
rebase --interactive" which cares about unstaged changes but generally
doesn't mind untracked files. So, it deserves an error message which
mentions untracked files explicitly.

We could actually parse the output of "git status --porcelain" (which
is invoked just above this spot) and provide a more specific error
message ("...contains modified files" or "...contains untracked
files") but that's probably not worth the effort.

Anyhow, for what it's worth:
Reviewed-by: Eric Sunshine 


[PATCH] worktree remove: clarify error message on dirty worktree

2019-08-13 Thread SZEDER Gábor
To avoid data loss, 'git worktree remove' refuses to delete a worktree
if it's dirty or contains untracked files.  However, the error message
only mentions that the worktree "is dirty", even if the worktree in
question is in fact clean, but contains untracked files:

  $ git worktree add test-worktree
  Preparing worktree (new branch 'test-worktree')
  HEAD is now at aa53e60 Initial
  $ >test-worktree/untracked-file
  $ git worktree remove test-worktree/
  fatal: 'test-worktree/' is dirty, use --force to delete it
  $ git -C test-worktree/ diff
  $ git -C test-worktree/ diff --cached
  $ # Huh?  Where are those dirty files?!

Clarify this error message to say that the worktree "contains modified
or untracked files".

Signed-off-by: SZEDER Gábor 
---

I spent more time searching for those dirty files than I would like to
admit...

 builtin/worktree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/builtin/worktree.c b/builtin/worktree.c
index a5bb02b207..7f094f8170 100644
--- a/builtin/worktree.c
+++ b/builtin/worktree.c
@@ -880,7 +880,7 @@ static void check_clean_worktree(struct worktree *wt,
  original_path);
ret = xread(cp.out, buf, sizeof(buf));
if (ret)
-   die(_("'%s' is dirty, use --force to delete it"),
+   die(_("'%s' contains modified or untracked files, use --force 
to delete it"),
original_path);
close(cp.out);
ret = finish_command(&cp);
-- 
2.23.0.rc2.350.gf4fdc32db7



[PATCH v3 4/7] trace2: trim trailing whitespace in normal format error message

2019-08-09 Thread Jeff Hostetler via GitGitGadget
From: Jeff Hostetler 

Avoid creating unnecessary trailing whitespace in normal
target format error messages when the message is omitted.

Signed-off-by: Jeff Hostetler 
---
 trace2/tr2_tgt_normal.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/trace2/tr2_tgt_normal.c b/trace2/tr2_tgt_normal.c
index 47a1882557..213724d5cb 100644
--- a/trace2/tr2_tgt_normal.c
+++ b/trace2/tr2_tgt_normal.c
@@ -142,8 +142,11 @@ static void fn_error_va_fl(const char *file, int line, const char *fmt,
 {
struct strbuf buf_payload = STRBUF_INIT;
 
-   strbuf_addstr(&buf_payload, "error ");
-   maybe_append_string_va(&buf_payload, fmt, ap);
+   strbuf_addstr(&buf_payload, "error");
+   if (fmt && *fmt) {
+   strbuf_addch(&buf_payload, ' ');
+   maybe_append_string_va(&buf_payload, fmt, ap);
+   }
normal_io_write_fl(file, line, &buf_payload);
strbuf_release(&buf_payload);
 }
-- 
gitgitgadget



[PATCH v2 4/7] trace2: trim trailing whitespace in normal format error message

2019-08-08 Thread Jeff Hostetler via GitGitGadget
From: Jeff Hostetler 

Avoid creating unnecessary trailing whitespace in normal
target format error messages when the message is omitted.

Signed-off-by: Jeff Hostetler 
---
 trace2/tr2_tgt_normal.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/trace2/tr2_tgt_normal.c b/trace2/tr2_tgt_normal.c
index 47a1882557..213724d5cb 100644
--- a/trace2/tr2_tgt_normal.c
+++ b/trace2/tr2_tgt_normal.c
@@ -142,8 +142,11 @@ static void fn_error_va_fl(const char *file, int line, const char *fmt,
 {
struct strbuf buf_payload = STRBUF_INIT;
 
-   strbuf_addstr(&buf_payload, "error ");
-   maybe_append_string_va(&buf_payload, fmt, ap);
+   strbuf_addstr(&buf_payload, "error");
+   if (fmt && *fmt) {
+   strbuf_addch(&buf_payload, ' ');
+   maybe_append_string_va(&buf_payload, fmt, ap);
+   }
normal_io_write_fl(file, line, &buf_payload);
strbuf_release(&buf_payload);
 }
-- 
gitgitgadget



Re: [RFC PATCH v3] grep: treat PCRE2 jit compilation memory error as non fatal

2019-08-05 Thread Junio C Hamano
Carlo Arenas  writes:

> * code is suspiciously similar to one[2] that was rejected, but
> hopefully commit message is better
> ...
> [2] https://public-inbox.org/git/20181209230024.43444-3-care...@gmail.com/

I do not recall ever rejecting that one.

It did not come with a good proposed log message to be accepted
as-is, so I do not find it surprising that I did not pick it up, was
waiting for a new iteration and then everybody forgot about it.

But that is quite different from getting rejected (with the
connotation that "don't attempt this bad idea again, unless the
world changes drastically").

In any case, this round looks a lot more reasoned.  I personally do
not think the warning() is a good idea.  As I said in the old
discussion, we by default should treat JIT as a mere optimization,
and we should stay out of the way most of the time.
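Spelled out as code, that "mere optimization" behaviour would amount to
something like this untested sketch (the same shape as the RFC patch, just
without the warning and without singling out one error code):

  if (p->pcre2_jit_on == 1) {
          jitret = pcre2_jit_compile(p->pcre2_pattern, PCRE2_JIT_COMPLETE);
          if (jitret) {
                  /* quietly fall back to the interpreter */
                  p->pcre2_jit_on = 0;
                  return;
          }
  }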

An additional "must have JIT or we will die" [*1*] can be added on
top of this change, if somebody really cares.

Thanks.


[Reference]

*1* https://public-inbox.org/git/87pnu9yekk@evledraar.gmail.com/


Re: [PATCH 3/3] commit-graph: error out on invalid commit oids in 'write --stdin-commits'

2019-08-05 Thread SZEDER Gábor
On Mon, Aug 05, 2019 at 09:57:19AM -0400, Derrick Stolee wrote:
> On 8/5/2019 4:02 AM, SZEDER Gábor wrote:
> > While 'git commit-graph write --stdin-commits' expects commit object
> > ids as input, it accepts and silently skips over any invalid commit
> > object ids, and still exits with success:
> > 
> >   # nonsense
> >   $ echo not-a-commit-oid | git commit-graph write --stdin-commits
> >   $ echo $?
> >   0
> >   # sometimes I forgot that refs are not good...
> >   $ echo HEAD | git commit-graph write --stdin-commits
> >   $ echo $?
> >   0
> >   # valid tree OID, but not a commit OID
> >   $ git rev-parse HEAD^{tree} | git commit-graph write --stdin-commits
> >   $ echo $?
> >   0
> >   $ ls -l .git/objects/info/commit-graph
> >   ls: cannot access '.git/objects/info/commit-graph': No such file or directory
> > 
> > Check that all input records are indeed valid commit object ids and
> > return with error otherwise, the same way '--stdin-packs' handles
> > invalid input; see e103f7276f (commit-graph: return with errors during
> > write, 2019-06-12).
> 
> Consistency is good. We should definitely make these modes match.

I was also wondering whether it would be worth accepting refs as well,
either as DWIMery or only when a '--revs' option is given (similar to
'git pack-objects --revs').  Dunno, I'm a bit hesitant about always
accepting refs as a DWIMery, this is plumbing after all.  And I don't
really care whether I correct my bogus command by replacing 'echo'
with 'git rev-parse' or by adding a '--revs' argument; the important
thing is that the command should tell me that I gave it junk.  And
that would be a new feature, while this patch is a bugfix IMO.

> > Note that it should only return with error when encountering an
> > invalid commit object id coming from standard input.  However,
> > '--reachable' uses the same code path to process object ids pointed to
> > by all refs, and that includes tag object ids as well, which should
> > still be skipped over.  Therefore add a new flag to 'enum
> > commit_graph_write_flags' and a corresponding field to 'struct
> > write_commit_graph_context', so we can differentiate between those two
> > cases.
> 
> Thank you for the care here.

Well, to be honest, I wasn't careful... :)  but running the test suite
with GIT_TEST_COMMIT_GRAPH=1 resulted in about a dozen failed test
scripts that traced back to this.



Re: [PATCH 3/3] commit-graph: error out on invalid commit oids in 'write --stdin-commits'

2019-08-05 Thread Derrick Stolee
On 8/5/2019 4:02 AM, SZEDER Gábor wrote:
> While 'git commit-graph write --stdin-commits' expects commit object
> ids as input, it accepts and silently skips over any invalid commit
> object ids, and still exits with success:
> 
>   # nonsense
>   $ echo not-a-commit-oid | git commit-graph write --stdin-commits
>   $ echo $?
>   0
>   # sometimes I forgot that refs are not good...
>   $ echo HEAD | git commit-graph write --stdin-commits
>   $ echo $?
>   0
>   # valid tree OID, but not a commit OID
>   $ git rev-parse HEAD^{tree} | git commit-graph write --stdin-commits
>   $ echo $?
>   0
>   $ ls -l .git/objects/info/commit-graph
>   ls: cannot access '.git/objects/info/commit-graph': No such file or directory
> 
> Check that all input records are indeed valid commit object ids and
> return with error otherwise, the same way '--stdin-packs' handles
> invalid input; see e103f7276f (commit-graph: return with errors during
> write, 2019-06-12).

Consistency is good. We should definitely make these modes match.

> Note that it should only return with error when encountering an
> invalid commit object id coming from standard input.  However,
> '--reachable' uses the same code path to process object ids pointed to
> by all refs, and that includes tag object ids as well, which should
> still be skipped over.  Therefore add a new flag to 'enum
> commit_graph_write_flags' and a corresponding field to 'struct
> write_commit_graph_context', so we can differentiate between those two
> cases.

Thank you for the care here.

[snip]
> @@ -1215,20 +1216,21 @@ static void fill_oids_from_commit_hex(struct write_commit_graph_context *ctx,
>   struct commit *result;
>  
>   display_progress(ctx->progress, i + 1);
> - if (commit_hex->items[i].string &&
> - parse_oid_hex(commit_hex->items[i].string, &oid, &end))
> - continue;
> -
> - result = lookup_commit_reference_gently(ctx->r, &oid, 1);
> -
> - if (result) {
> + if (!parse_oid_hex(commit_hex->items[i].string, &oid, &end) &&
> + (result = lookup_commit_reference_gently(ctx->r, &oid, 1))) {
>   ALLOC_GROW(ctx->oids.list, ctx->oids.nr + 1, ctx->oids.alloc);
>   oidcpy(&ctx->oids.list[ctx->oids.nr], &(result->object.oid));
>   ctx->oids.nr++;
> + } else if (ctx->check_oids) {
> + error(_("invalid commit object id: %s"),
> + commit_hex->items[i].string);
> + return -1;
>   }
>   }
>   stop_progress(&ctx->progress);
>   strbuf_release(&progress_title);
> +
> + return 0;
>  }

This is the critical bit. I notice that you are not checking 
commit_hex->items[i].string
for NULL, but it should never be NULL here anyway.

> @@ -1775,6 +1777,7 @@ int write_commit_graph(const char *obj_dir,
>   ctx->append = flags & COMMIT_GRAPH_WRITE_APPEND ? 1 : 0;
>   ctx->report_progress = flags & COMMIT_GRAPH_WRITE_PROGRESS ? 1 : 0;
>   ctx->split = flags & COMMIT_GRAPH_WRITE_SPLIT ? 1 : 0;
> + ctx->check_oids = flags & COMMIT_GRAPH_WRITE_CHECK_OIDS ? 1 : 0;
>   ctx->split_opts = split_opts;

Using the enum for the function and the bitfield for internal logic matches the
existing pattern. Thanks.
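
For readers unfamiliar with that pattern, in miniature it looks like this
(the names come from the patch, the bit values are illustrative): callers
pass enum bits across the public API, and write_commit_graph() translates
them into the context's bitfields.

  enum commit_graph_write_flags {
          COMMIT_GRAPH_WRITE_APPEND     = (1 << 0),
          COMMIT_GRAPH_WRITE_PROGRESS   = (1 << 1),
          COMMIT_GRAPH_WRITE_SPLIT      = (1 << 2),
          COMMIT_GRAPH_WRITE_CHECK_OIDS = (1 << 3),
  };

  struct write_commit_graph_context {
          /* ... */
          unsigned append:1,
                   report_progress:1,
                   split:1,
                   check_oids:1;
  };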

> @@ -1829,8 +1832,10 @@ int write_commit_graph(const char *obj_dir,
>   goto cleanup;
>   }
>  
> - if (commit_hex)
> - fill_oids_from_commit_hex(ctx, commit_hex);
> + if (commit_hex) {
> + if ((res = fill_oids_from_commit_hex(ctx, commit_hex)))
> + goto cleanup;
> + }

And this links the low-level error to a return code.

Thanks for this! The changes here look good and justify the two cleanup
patches.

-Stolee


Re: [PATCH 0/3] commit-graph: error out on invalid commit oids in 'write --stdin-commits'

2019-08-05 Thread SZEDER Gábor
On Mon, Aug 05, 2019 at 10:02:37AM +0200, SZEDER Gábor wrote:
> While 'git commit-graph write --stdin-commits' expects commit object
> ids as input, it accepts and silently skips over any invalid commit
> object ids, and still exits with success:
> 
>   $ echo not-a-commit-oid | git commit-graph write --stdin-commits
>   $ echo $?

Oh, messed up the copy-paste; this prints 0.

>   $ ls -l .git/objects/info/commit-graph
>   ls: cannot access '.git/objects/info/commit-graph': No such file or directory


[PATCH 3/3] commit-graph: error out on invalid commit oids in 'write --stdin-commits'

2019-08-05 Thread SZEDER Gábor
While 'git commit-graph write --stdin-commits' expects commit object
ids as input, it accepts and silently skips over any invalid commit
object ids, and still exits with success:

  # nonsense
  $ echo not-a-commit-oid | git commit-graph write --stdin-commits
  $ echo $?
  0
  # sometimes I forgot that refs are not good...
  $ echo HEAD | git commit-graph write --stdin-commits
  $ echo $?
  0
  # valid tree OID, but not a commit OID
  $ git rev-parse HEAD^{tree} | git commit-graph write --stdin-commits
  $ echo $?
  0
  $ ls -l .git/objects/info/commit-graph
  ls: cannot access '.git/objects/info/commit-graph': No such file or directory

Check that all input records are indeed valid commit object ids and
return with error otherwise, the same way '--stdin-packs' handles
invalid input; see e103f7276f (commit-graph: return with errors during
write, 2019-06-12).

Note that it should only return with error when encountering an
invalid commit object id coming from standard input.  However,
'--reachable' uses the same code path to process object ids pointed to
by all refs, and that includes tag object ids as well, which should
still be skipped over.  Therefore add a new flag to 'enum
commit_graph_write_flags' and a corresponding field to 'struct
write_commit_graph_context', so we can differentiate between those two
cases.

Signed-off-by: SZEDER Gábor 
---
 builtin/commit-graph.c  |  4 +++-
 commit-graph.c  | 29 +
 commit-graph.h  |  4 +++-
 t/t5318-commit-graph.sh | 11 ++-
 4 files changed, 33 insertions(+), 15 deletions(-)

diff --git a/builtin/commit-graph.c b/builtin/commit-graph.c
index 64eccde314..57863619b7 100644
--- a/builtin/commit-graph.c
+++ b/builtin/commit-graph.c
@@ -213,8 +213,10 @@ static int graph_write(int argc, const char **argv)
 
if (opts.stdin_packs)
pack_indexes = &lines;
-   if (opts.stdin_commits)
+   if (opts.stdin_commits) {
commit_hex = &lines;
+   flags |= COMMIT_GRAPH_WRITE_CHECK_OIDS;
+   }
 
UNLEAK(buf);
}
diff --git a/commit-graph.c b/commit-graph.c
index 04324a4648..821900675b 100644
--- a/commit-graph.c
+++ b/commit-graph.c
@@ -782,7 +782,8 @@ struct write_commit_graph_context {
 
unsigned append:1,
 report_progress:1,
-split:1;
+split:1,
+check_oids:1;
 
const struct split_commit_graph_opts *split_opts;
 };
@@ -1193,8 +1194,8 @@ static int fill_oids_from_packs(struct write_commit_graph_context *ctx,
return 0;
 }
 
-static void fill_oids_from_commit_hex(struct write_commit_graph_context *ctx,
- struct string_list *commit_hex)
+static int fill_oids_from_commit_hex(struct write_commit_graph_context *ctx,
+struct string_list *commit_hex)
 {
uint32_t i;
struct strbuf progress_title = STRBUF_INIT;
@@ -1215,20 +1216,21 @@ static void fill_oids_from_commit_hex(struct write_commit_graph_context *ctx,
struct commit *result;
 
display_progress(ctx->progress, i + 1);
-   if (commit_hex->items[i].string &&
-   parse_oid_hex(commit_hex->items[i].string, &oid, &end))
-   continue;
-
-   result = lookup_commit_reference_gently(ctx->r, &oid, 1);
-
-   if (result) {
+   if (!parse_oid_hex(commit_hex->items[i].string, &oid, &end) &&
+   (result = lookup_commit_reference_gently(ctx->r, &oid, 1))) {
ALLOC_GROW(ctx->oids.list, ctx->oids.nr + 1, ctx->oids.alloc);
oidcpy(&ctx->oids.list[ctx->oids.nr], &(result->object.oid));
ctx->oids.nr++;
+   } else if (ctx->check_oids) {
+   error(_("invalid commit object id: %s"),
+   commit_hex->items[i].string);
+   return -1;
}
}
stop_progress(&ctx->progress);
strbuf_release(&progress_title);
+
+   return 0;
 }
 
 static void fill_oids_from_all_packs(struct write_commit_graph_context *ctx)
@@ -1775,6 +1777,7 @@ int write_commit_graph(const char *obj_dir,
ctx->append = flags & COMMIT_GRAPH_WRITE_APPEND ? 1 : 0;
ctx->report_progress = flags & COMMIT_GRAPH_WRITE_PROGRESS ? 1 : 0;
ctx->split = flags & COMMIT_GRAPH_WRITE_SPLIT ? 1 : 0;
+   ctx->check_oids = flags & COMMIT_GRAPH_WRITE_CHECK_OIDS ? 1 : 0;
ctx->split_opts = split_opts;
 
if (ctx->split) {
@@ -1829,8 +1832,10 @@ int write_commit_graph(const char *obj_dir,

[PATCH 0/3] commit-graph: error out on invalid commit oids in 'write --stdin-commits'

2019-08-05 Thread SZEDER Gábor
While 'git commit-graph write --stdin-commits' expects commit object
ids as input, it accepts and silently skips over any invalid commit
object ids, and still exits with success:

  $ echo not-a-commit-oid | git commit-graph write --stdin-commits
  $ echo $?
  $ ls -l .git/objects/info/commit-graph
  ls: cannot access '.git/objects/info/commit-graph': No such file or directory

The last patch in this series fixes this issue, with a bit of
preparatory refactoring in the second and a while-at-it cleanup in the
first patches.

SZEDER Gábor (3):
  t5318-commit-graph: use 'test_expect_code'
  commit-graph: turn a group of write-related macro flags into an enum
  commit-graph: error out on invalid commit oids in 'write
--stdin-commits'

 builtin/commit-graph.c  | 10 ++
 builtin/gc.c|  2 +-
 commit-graph.c  | 40 +++-
 commit-graph.h  | 15 ++-
 t/t5318-commit-graph.sh | 14 +++---
 5 files changed, 51 insertions(+), 30 deletions(-)

-- 
2.23.0.rc1.309.g896d8c5f5f



Re: [RFC PATCH v3] grep: treat PCRE2 jit compilation memory error as non fatal

2019-08-04 Thread Carlo Arenas
PROs:
* it works (only for PCRE2) and tested in OpenBSD, NetBSD, macOS, Linux (Debian)
* it applies everywhere (even pu) without conflicts
* it doesn't introduce any regressions in tests (tested in Debian with
SElinux in enforcing mode)
* it is simple

CONs:
* HardenedBSD still segfaults (bugfix proposed[1] to sljit/pcre)
* warning is noisy (at least once per thread) and might even be
ineffective, as it goes to stderr while stdout, which carries most of the
output, goes to a pager
* too conservative (pcre2grep's behaviour suggests that all errors from
pcre2_jit_compile should be ignored)
* no tests

Known Issues:
* code is ugly (it even triggers a warning if you have the right compiler)
* code is suspiciously similar to one[2] that was rejected, but
hopefully commit message is better
* code is incomplete (PCRE1 has too many conflicting changes in flight
to attempt a similar fix)
* there are obvious blind spots in the tests that need fixing, and a
lot more testing in other platforms/architectures
* git still will sometimes die because the non fast path has UTF-8 issues

I still think the pcre.jit flag knob might be useful to work around
some of the issues detailed in CONs, but probably with a different
definition (sketched as code below):
unset -> fallback (try JIT but use interpreter if that didn't work)
false -> don't even try to use JIT
true -> print warning and maybe even die (if we really think that is useful)
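
Sketched as code, that knob boils down to a tri-state policy along these
lines (purely hypothetical, nothing of the sort exists yet):

  enum pcre_jit_policy {
          PCRE_JIT_FALLBACK, /* unset: try JIT, quietly use the interpreter on failure */
          PCRE_JIT_OFF,      /* false: never even attempt pcre2_jit_compile() */
          PCRE_JIT_REQUIRED  /* true: warn, or possibly die, when JIT is unavailable */
  };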

some performance numbers below for the perl tests

with JIT enabled (in non enforcing SELinux)

Test                                      this tree
---------------------------------------------------
7820.3: perl grep 'how.to'                0.56(0.29+0.60)
7820.7: perl grep '^how to'               0.49(0.29+0.54)
7820.11: perl grep '[how] to'             0.54(0.39+0.51)
7820.15: perl grep '(e.t[^ ]*|v.ry) rare' 0.60(0.45+0.58)
7820.19: perl grep 'm(ú|u)lt.b(æ|y)te'    0.58(0.30+0.61)

with "fallback to interpreter" (in enforcing SELinux)

Test                                      this tree
---------------------------------------------------
7820.3: perl grep 'how.to'                0.64(0.59+0.56)
7820.7: perl grep '^how to'               1.83(2.91+0.56)
7820.11: perl grep '[how] to'             2.07(3.33+0.61)
7820.15: perl grep '(e.t[^ ]*|v.ry) rare' 2.89(4.91+0.66)
7820.19: perl grep 'm(ú|u)lt.b(æ|y)te'    0.78(0.86+0.55)

[1] https://github.com/zherczeg/sljit/pull/2
[2] https://public-inbox.org/git/20181209230024.43444-3-care...@gmail.com/


[RFC PATCH v3] grep: treat PCRE2 jit compilation memory error as non fatal

2019-08-03 Thread Carlo Marcelo Arenas Belón
94da9193a6 (grep: add support for PCRE v2, 2017-06-01) uses the
JIT fast path unless JIT support has not been compiled into the
linked library.

Starting with PCRE2 10.23, pcre2grep ignores any errors from
pcre2_jit_compile as a workaround for their bug 1749[1], and we
should do the same, so that the interpreter can be used as a
fallback in cases where JIT is not available because of a security
policy.

To be conservative, we are restricting initially the error to the
known error that would be returned in that case (and to be documented
as such in a future release of PCRE) and printing a warning so that
corrective action could be taken.

[1] https://bugs.exim.org/show_bug.cgi?id=1749

Signed-off-by: Carlo Marcelo Arenas Belón 
---
 grep.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/grep.c b/grep.c
index f7c3a5803e..593a1cb7a0 100644
--- a/grep.c
+++ b/grep.c
@@ -525,7 +525,13 @@ static void compile_pcre2_pattern(struct grep_pat *p, const struct grep_opt *opt
if (p->pcre2_jit_on == 1) {
jitret = pcre2_jit_compile(p->pcre2_pattern, PCRE2_JIT_COMPLETE);
if (jitret)
-   die("Couldn't JIT the PCRE2 pattern '%s', got '%d'\n", 
p->pattern, jitret);
+   if (jitret == PCRE2_ERROR_NOMEMORY) {
+   warning("JIT couldn't be used in PCRE2");
+   p->pcre2_jit_on = 0;
+   return;
+   }
+   else
+   die("Couldn't JIT the PCRE2 pattern '%s', got 
'%d'\n", p->pattern, jitret);
 
/*
 * The pcre2_config(PCRE2_CONFIG_JIT, ...) call just
-- 
2.23.0.rc1



[PATCH v2 12/23] contrib/buildsystems: error out on unknown option

2019-07-29 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

One time too many did this developer call the `generate` script passing
a `--make-out=<path>` option that was happily ignored (because there
should be a space, not an equal sign, between `--make-out` and the
path).

And one time too many, this script not only ignored it but did not even
complain. Let's fix that.

Signed-off-by: Johannes Schindelin 
---
 contrib/buildsystems/engine.pl | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/contrib/buildsystems/engine.pl b/contrib/buildsystems/engine.pl
index 732239d817..1a12f4d556 100755
--- a/contrib/buildsystems/engine.pl
+++ b/contrib/buildsystems/engine.pl
@@ -57,6 +57,8 @@ sub showUsage
 open(F, "<$infile") || die "Couldn't open file $infile";
@makedry = <F>;
 close(F);
+} else {
+die "Unknown option: " . $arg;
 }
 }
 
-- 
gitgitgadget



[PATCH v2 07/23] contrib/buildsystems: fix misleading error message

2019-07-29 Thread Philip Oakley via GitGitGadget
From: Philip Oakley 

The error message talked about a "lib option", but it clearly referred
to a link option.

Signed-off-by: Philip Oakley 
Signed-off-by: Johannes Schindelin 
---
 contrib/buildsystems/engine.pl | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/buildsystems/engine.pl b/contrib/buildsystems/engine.pl
index 53e65d4db7..11f0e16dda 100755
--- a/contrib/buildsystems/engine.pl
+++ b/contrib/buildsystems/engine.pl
@@ -333,7 +333,7 @@ sub handleLinkLine
 } elsif ($part =~ /\.obj$/) {
 # do nothing, 'make' should not be producing .obj, only .o files
 } else {
-die "Unhandled lib option @ line $lineno: $part";
+die "Unhandled link option @ line $lineno: $part";
 }
 }
 #print "AppOut: '$appout'\nLFlags: @lflags\nLibs  : @libs\nOfiles: 
@objfiles\n";
-- 
gitgitgadget



Re: [PATCH v2 01/20] merge-recursive: fix minor memory leak in error condition

2019-07-29 Thread Derrick Stolee
On 7/26/2019 2:31 PM, Junio C Hamano wrote:
> Elijah Newren  writes:
> 
>> Returning before freeing the allocated buffer is suboptimal; as with
>> elsewhere in the same function, make sure buf gets free'd.
> 
> I do not have a real objection to the patch text, but the above
> justification does not make much sense to me.  The original code
> returned an error when buf is NULL, so there is no leak returning
> directly, without jumping to free_buf: label (whose only effect is
> to free(buf) and return the value in ret).
> 
> The real value of this change is it may future-proof the codepath by
> making sure that everybody after an error goes to the same place to
> free all resources---which happens to be only buf in the current
> code, but this change allows it to change in the future, where some
> additional resources may have been allocated before the call to
> read_object_file() and freed after free_buf: label.

It should be noted that if any future change makes the "free_buf:" label
mean "free everything", then it should be renamed to "cleanup:" or similar.

Not that we should do that now, as that would muddy the blame.
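
For reference, the single-exit cleanup pattern being discussed, reduced to
a generic example (illustrative only; read_things() and open_things() are
made-up stand-ins):

  static int do_work(void)
  {
          int ret = 0;
          char *buf = NULL;
          FILE *fp = NULL;

          buf = read_things();
          if (!buf) {
                  ret = -1;
                  goto cleanup;
          }
          fp = open_things();
          if (!fp) {
                  ret = -1;
                  goto cleanup;
          }
          /* ... use buf and fp ... */
  cleanup:
          if (fp)
                  fclose(fp);
          free(buf);
          return ret;
  }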

Thanks,
-Stolee


Re: [PATCH v2 01/20] merge-recursive: fix minor memory leak in error condition

2019-07-26 Thread Elijah Newren
On Fri, Jul 26, 2019 at 11:31 AM Junio C Hamano  wrote:
>
> Elijah Newren  writes:
>
> > Returning before freeing the allocated buffer is suboptimal; as with
> > elsewhere in the same function, make sure buf gets free'd.
>
> I do not have a real objection to the patch text, but the above
> justification does not make much sense to me.  The original code
> returned an error when buf is NULL, so there is no leak returning
> directly, without jumping to free_buf: label (whose only effect is
> to free(buf) and return the value in ret).
>
> The real value of this change is it may future-proof the codepath by
> making sure that everybody after an error goes to the same place to
> free all resources---which happens to be only buf in the current
> code, but this change allows it to change in the future, where some
> additional resources may have been allocated before the call to
> read_object_file() and freed after free_buf: label.

Indeed, not sure how I overlooked that buf was NULL since that was the
precise check I was modifying.  I'll clean up the commit message.


Re: [PATCH v2 01/20] merge-recursive: fix minor memory leak in error condition

2019-07-26 Thread Junio C Hamano
Elijah Newren  writes:

> Returning before freeing the allocated buffer is suboptimal; as with
> elsewhere in the same function, make sure buf gets free'd.

I do not have a real objection to the patch text, but the above
justification does not make much sense to me.  The original code
returned an error when buf is NULL, so there is no leak returning
directly, without jumping to free_buf: label (whose only effect is
to free(buf) and return the value in ret).

The real value of this change is it may future-proof the codepath by
making sure that everybody after an error goes to the same place to
free all resources---which happens to be only buf in the current
code, but this change allows it to change in the future, where some
additional resources may have been allocated before the call to
read_object_file() and freed after free_buf: label.

> Signed-off-by: Elijah Newren 
> ---
>  merge-recursive.c | 8 +---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/merge-recursive.c b/merge-recursive.c
> index 12300131fc..1163508811 100644
> --- a/merge-recursive.c
> +++ b/merge-recursive.c
> @@ -934,9 +934,11 @@ static int update_file_flags(struct merge_options *opt,
>   }
>  
>   buf = read_object_file(&contents->oid, &type, &size);
> - if (!buf)
> - return err(opt, _("cannot read object %s '%s'"),
> -oid_to_hex(&contents->oid), path);
> + if (!buf) {
> + ret = err(opt, _("cannot read object %s '%s'"),
> +   oid_to_hex(&contents->oid), path);
> + goto free_buf;
> + }
>   if (type != OBJ_BLOB) {
>   ret = err(opt, _("blob expected for %s '%s'"),
> oid_to_hex(&contents->oid), path);


[PATCH v2 01/20] merge-recursive: fix minor memory leak in error condition

2019-07-26 Thread Elijah Newren
Returning before freeing the allocated buffer is suboptimal; as with
elsewhere in the same function, make sure buf gets free'd.

Signed-off-by: Elijah Newren 
---
 merge-recursive.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/merge-recursive.c b/merge-recursive.c
index 12300131fc..1163508811 100644
--- a/merge-recursive.c
+++ b/merge-recursive.c
@@ -934,9 +934,11 @@ static int update_file_flags(struct merge_options *opt,
}
 
buf = read_object_file(&contents->oid, &type, &size);
-   if (!buf)
-   return err(opt, _("cannot read object %s '%s'"),
-  oid_to_hex(&contents->oid), path);
+   if (!buf) {
+   ret = err(opt, _("cannot read object %s '%s'"),
+ oid_to_hex(&contents->oid), path);
+   goto free_buf;
+   }
if (type != OBJ_BLOB) {
ret = err(opt, _("blob expected for %s '%s'"),
  oid_to_hex(&contents->oid), path);
-- 
2.22.0.550.g71c37a0928.dirty



[PATCH 01/19] merge-recursive: fix minor memory leak in error condition

2019-07-25 Thread Elijah Newren
Returning before freeing the allocated buffer is suboptimal; as with
elsewhere in the same function, make sure buf gets free'd.

Signed-off-by: Elijah Newren 
---
 merge-recursive.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/merge-recursive.c b/merge-recursive.c
index 12300131fc..1163508811 100644
--- a/merge-recursive.c
+++ b/merge-recursive.c
@@ -934,9 +934,11 @@ static int update_file_flags(struct merge_options *opt,
}
 
buf = read_object_file(&contents->oid, &type, &size);
-   if (!buf)
-   return err(opt, _("cannot read object %s '%s'"),
-  oid_to_hex(&contents->oid), path);
+   if (!buf) {
+   ret = err(opt, _("cannot read object %s '%s'"),
+ oid_to_hex(&contents->oid), path);
+   goto free_buf;
+   }
if (type != OBJ_BLOB) {
ret = err(opt, _("blob expected for %s '%s'"),
  oid_to_hex(&contents->oid), path);
-- 
2.22.0.559.g28a8880890.dirty



Re: [PATCH] Close transport helper on protocol error

2019-07-23 Thread Junio C Hamano
Jeff King  writes:

> ...
> At which point I do wonder if this is better handled by a wrapper
> process which simply reaps everything. And indeed, people have already
> come up with similar solutions for containers:
>
>   https://github.com/Yelp/dumb-init
>
> So I dunno. I am not really opposed to this patch, as it is just adding
> some extra cleanup. But it seems like it is really hitting the tip of
> the iceberg, and I'm not sure it's an iceberg I'd like to get to the
> bottom of.

Thanks for stating what I had on mind a lot better than I said ;-)


Re: [PATCH] Close transport helper on protocol error

2019-07-23 Thread Jeff King
On Tue, Jul 23, 2019 at 10:33:10AM -0700, Junio C Hamano wrote:

> > +   if (recvline(data, &buf)){
> > +   release_helper(transport);
> > exit(128);
> > +   }
> 
> This, together with other exit(128) we see in this patch now have
> release_helepr() in front of it, which is in line with what the log
> message claims that the patch does.
> 
> I however wonder if we want to do a bit more, perhaps with atexit().
> I am not hinting-suggesting to do so (as you said, if the init
> process ought to take care of the zombies, the patch under review
> might already be unneeded, and atexit() makes things even worse),
> but having trouble to convince that this patch stops at the right
> place.

I was just writing a similar comment when I read this. It probably fixes
the particular case the OP saw, but Git is quite happy to die() in
various code-paths when it encounters an error.

Rather than try to annotate every possible exit, atexit() might be a
better solution. But isn't this a more general problem even than that?
Lots of parts of Git may spawn a sub-process and assume that the
children will be reaped automatically (as long as they do exit, but that
is usually guaranteed by us closing their input pipes when we ourselves
exit).

So I think you'd have to atexit() quite a few places. Or possibly
instrument run_command() to do so, though it might need some extra
annotation to mark whether a particular sub-process should be waited for
(there is some prior art in the child_process.clean_on_exit option).
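
For reference, that prior art looks roughly like this from a caller's point
of view (minimal sketch against the 2019-era run-command API; error handling
omitted):

  struct child_process cp = CHILD_PROCESS_INIT;

  cp.git_cmd = 1;
  cp.clean_on_exit = 1;   /* have run-command clean up this child when we exit */
  argv_array_push(&cp.args, "fetch");
  if (start_command(&cp))
          die("cannot start fetch");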

At which point I do wonder if this is better handled by a wrapper
process which simply reaps everything. And indeed, people have already
come up with similar solutions for containers:

  https://github.com/Yelp/dumb-init

So I dunno. I am not really opposed to this patch, as it is just adding
some extra cleanup. But it seems like it is really hitting the tip of
the iceberg, and I'm not sure it's an iceberg I'd like to get to the
bottom of.

-Peff


Re: [PATCH] Close transport helper on protocol error

2019-07-23 Thread Junio C Hamano
thibault.ja...@gmail.com writes:

> Subject: Re: [PATCH] Close transport helper on protocol error

Perhaps

Subject: transport: close helper on protocol error

> +static int disconnect_helper(struct transport *transport);
> +
> +static int disconnect_helper_data(struct helper_data *data);

Even after reading it twice, disconnect_helper_data() does not ring
the "this is to disconnect the helper process, based on what is
contained in a helper_data instance" bell, which you wanted to ring.
It sounds like it is trying to disconnect "helper_data" from
something unsaid.

I think the root cause of this awkwardness is because this split of
the function into two is suboptimal.  There is only one existing
caller of disconnect_helper() and it passes transport->data (and the
"data" is of type helper_data).  As it is a file-scope static
function, why not just change the type of the parameter from the
whole transport to just helper_data, without introducing the new
function to hold the bulk of the original code?
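
In other words, something along these lines (a sketch of that suggestion,
not a tested patch):

  static int disconnect_helper(struct helper_data *data)
  {
          int res = 0;

          if (data && data->helper) {
                  /* ... existing disconnect logic, unchanged ... */
          }
          return res;
  }

  /* existing callers then simply pass transport->data */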

> +static int release_helper(struct transport *transport);
> +
>  static struct child_process *get_helper(struct transport *transport)
>  {
>   struct helper_data *data = transport->data;
> @@ -155,8 +161,10 @@ static struct child_process *get_helper(struct transport *transport)
>   while (1) {
>   const char *capname, *arg;
>   int mandatory = 0;
> - if (recvline(data, &buf))
> + if (recvline(data, &buf)){
> + release_helper(transport);
>   exit(128);
> + }

This, together with other exit(128) we see in this patch now have
release_helepr() in front of it, which is in line with what the log
message claims that the patch does.

I however wonder if we want to do a bit more, perhaps with atexit().
I am not hinting-suggesting to do so (as you said, if the init
process ought to take care of the zombies, the patch under review
might already be unneeded, and atexit() makes things even worse),
but having trouble to convince that this patch stops at the right
place.

Thanks.


[PATCH] Close transport helper on protocol error

2019-07-22 Thread thibault . jamet
From: Thibault Jamet 

We have noticed that in some cases, when the transport is not fully
compliant with the protocol, git exits with a status of 128 without
closing the transport.

This usually does not have consequences in a standard Unix
environment, as the process is then reparented to init, which will
take care of properly closing the remaining transport.

We notably noticed this behaviour on GitHub Enterprise v2.14.24 when
a repository had been migrated to another GitHub Enterprise instance.
The output of the transport is then:

remote: Repository `org/repo' is disabled. Please ask the owner to check their account.
fatal: unable to access 'https://github.example.com/org/repo/': The requested URL returned error: 403

and the code exits inside function get_refs_list

In container contexts, there might not be such an init process
to take care of terminating the transport process and hence they remain
as zombies, as mentioned in
https://github.com/rancher/rancher/issues/13858 or
https://github.com/coreos/clair/issues/441.

Although there is a workaround of running an init process inside the
container, clean up the processes git created at the time git exits.

Signed-off-by: Thibault Jamet 
---
 transport-helper.c | 44 +---
 1 file changed, 33 insertions(+), 11 deletions(-)

diff --git a/transport-helper.c b/transport-helper.c
index cec83bd663..34caa75a72 100644
--- a/transport-helper.c
+++ b/transport-helper.c
@@ -101,6 +101,12 @@ static void do_take_over(struct transport *transport)
 
 static void standard_options(struct transport *t);
 
+static int disconnect_helper(struct transport *transport);
+
+static int disconnect_helper_data(struct helper_data *data);
+
+static int release_helper(struct transport *transport);
+
 static struct child_process *get_helper(struct transport *transport)
 {
struct helper_data *data = transport->data;
@@ -155,8 +161,10 @@ static struct child_process *get_helper(struct transport *transport)
while (1) {
const char *capname, *arg;
int mandatory = 0;
-   if (recvline(data, &buf))
+   if (recvline(data, &buf)){
+   release_helper(transport);
exit(128);
+   }
 
if (!*buf.buf)
break;
@@ -215,10 +223,14 @@ static struct child_process *get_helper(struct transport *transport)
 
 static int disconnect_helper(struct transport *transport)
 {
-   struct helper_data *data = transport->data;
+   return disconnect_helper_data(transport->data);
+}
+
+static int disconnect_helper_data(struct helper_data *data)
+{
int res = 0;
 
-   if (data->helper) {
+   if (data && data->helper) {
if (debug)
fprintf(stderr, "Debug: Disconnecting.\n");
if (!data->no_disconnect_req) {
@@ -261,8 +273,10 @@ static int strbuf_set_helper_option(struct helper_data *data,
int ret;
 
sendline(data, buf);
-   if (recvline(data, buf))
+   if (recvline(data, buf)) {
+   disconnect_helper_data(data);
exit(128);
+   }
 
if (!strcmp(buf->buf, "ok"))
ret = 0;
@@ -366,10 +380,12 @@ static void standard_options(struct transport *t)
 static int release_helper(struct transport *transport)
 {
int res = 0;
-   struct helper_data *data = transport->data;
-   refspec_clear(&data->rs);
-   res = disconnect_helper(transport);
-   free(transport->data);
+   if (transport){
+   struct helper_data *data = transport->data;
+   refspec_clear(&data->rs);
+   res = disconnect_helper(transport);
+   free(transport->data);
+   }
return res;
 }
 
@@ -394,8 +410,10 @@ static int fetch_with_fetch(struct transport *transport,
sendline(data, &buf);
 
while (1) {
-   if (recvline(data, &buf))
+   if (recvline(data, &buf)){
+   release_helper(transport);
exit(128);
+   }
 
if (starts_with(buf.buf, "lock ")) {
const char *name = buf.buf + 5;
@@ -561,8 +579,10 @@ static int run_connect(struct transport *transport, struct strbuf *cmdbuf)
setvbuf(input, NULL, _IONBF, 0);
 
sendline(data, cmdbuf);
-   if (recvline_fh(input, cmdbuf))
+   if (recvline_fh(input, cmdbuf)){
+   release_helper(transport);
exit(128);
+   }
 
if (!strcmp(cmdbuf->buf, "")) {
data->no_disconnect_req = 1;
@@ -1074,8 +1094,10 @@ static struct ref *get_refs_list(struct transport *transport, int for_push,
 
while (1) {
char *eov, *eo

Re: [PATCH v2 1/1] clean: show an error message when the path is too long

2019-07-19 Thread Junio C Hamano
Johannes Schindelin  writes:

> On Thu, 18 Jul 2019, Junio C Hamano wrote:
>
>> "Johannes Schindelin via GitGitGadget" 
>> writes:
>>
>> > From: Johannes Schindelin 
>> >
>> > Without an error message when `lstat()` failed, `git clean` would
>> > abort without an error message, leaving the user quite puzzled.
>>
>> Let's drop the first three words ;-)  Sorry for not catching it
>> earlier and parrotting the same mistake in my variant yesterday.
>
> You mean the first four words.

Eek, yes, I cannot count ;-)

>> > In particular on Windows, where the default maximum path length is quite
>> > small (yet there are ways to circumvent that limit in many cases), it is
>> > very important that users be given an indication why their command
>> > failed because of too long paths when it did.
>>
>> s/it is very important that users be given/it helps to give users/
>
> If you really feel it important to invalidate my personal style of
> expression, sure.

I meant it as a pure improvement---I think the importance of this
change to the affected population is so obvious that it does not
need self promoting exaggeration to convince readers by stating
plainly how it helps.

But if it is very important for you, I'll undo the change before
merging it to 'next'.

Thanks.


Re: [PATCH v2 1/1] clean: show an error message when the path is too long

2019-07-19 Thread Johannes Schindelin
Hi Junio,

On Thu, 18 Jul 2019, Junio C Hamano wrote:

> "Johannes Schindelin via GitGitGadget" 
> writes:
>
> > From: Johannes Schindelin 
> >
> > Without an error message when `lstat()` failed, `git clean` would
> > abort without an error message, leaving the user quite puzzled.
>
> Let's drop the first three words ;-)  Sorry for not catching it
> earlier and parrotting the same mistake in my variant yesterday.

You mean the first four words.

> > In particular on Windows, where the default maximum path length is quite
> > small (yet there are ways to circumvent that limit in many cases), it is
> > very important that users be given an indication why their command
> > failed because of too long paths when it did.
>
> s/it is very important that users be given/it helps to give users/

If you really feel it important to invalidate my personal style of
expression, sure.

Ciao,
Dscho


Re: [PATCH v2 1/1] clean: show an error message when the path is too long

2019-07-18 Thread Junio C Hamano
"Johannes Schindelin via GitGitGadget" 
writes:

> From: Johannes Schindelin 
>
> Without an error message when `lstat()` failed, `git clean` would
> abort without an error message, leaving the user quite puzzled.

Let's drop the first three words ;-)  Sorry for not catching it
earlier and parrotting the same mistake in my variant yesterday.

> In particular on Windows, where the default maximum path length is quite
> small (yet there are ways to circumvent that limit in many cases), it is
> very important that users be given an indication why their command
> failed because of too long paths when it did.

s/it is very important that users be given/it helps to give users/



[PATCH 07/24] contrib/buildsystems: fix misleading error message

2019-07-18 Thread Philip Oakley via GitGitGadget
From: Philip Oakley 

The error message talked about a "lib option", but it clearly referred
to a link option.

Signed-off-by: Philip Oakley 
Signed-off-by: Johannes Schindelin 
---
 contrib/buildsystems/engine.pl | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/buildsystems/engine.pl b/contrib/buildsystems/engine.pl
index 53e65d4db7..11f0e16dda 100755
--- a/contrib/buildsystems/engine.pl
+++ b/contrib/buildsystems/engine.pl
@@ -333,7 +333,7 @@ sub handleLinkLine
 } elsif ($part =~ /\.obj$/) {
 # do nothing, 'make' should not be producing .obj, only .o files
 } else {
-die "Unhandled lib option @ line $lineno: $part";
+die "Unhandled link option @ line $lineno: $part";
 }
 }
 #print "AppOut: '$appout'\nLFlags: @lflags\nLibs  : @libs\nOfiles: 
@objfiles\n";
-- 
gitgitgadget



[PATCH 12/24] contrib/buildsystems: error out on unknown option

2019-07-18 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

One time too many did this developer call the `generate` script passing
a `--make-out=<path>` option that was happily ignored (because there
should be a space, not an equal sign, between `--make-out` and the
path).

And one time too many, this script not only ignored it but did not even
complain. Let's fix that.

Signed-off-by: Johannes Schindelin 
---
 contrib/buildsystems/engine.pl | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/contrib/buildsystems/engine.pl b/contrib/buildsystems/engine.pl
index 732239d817..1a12f4d556 100755
--- a/contrib/buildsystems/engine.pl
+++ b/contrib/buildsystems/engine.pl
@@ -57,6 +57,8 @@ sub showUsage
 open(F, "<$infile") || die "Couldn't open file $infile";
@makedry = <F>;
 close(F);
+} else {
+die "Unknown option: " . $arg;
 }
 }
 
-- 
gitgitgadget



[PATCH v2 1/1] clean: show an error message when the path is too long

2019-07-18 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

Without an error message when `lstat()` failed, `git clean` would
abort without an error message, leaving the user quite puzzled.

In particular on Windows, where the default maximum path length is quite
small (yet there are ways to circumvent that limit in many cases), it is
very important that users be given an indication why their command
failed because of too long paths when it did.

This test case makes sure that a warning is issued that would have
helped the user who reported this issue:

https://github.com/git-for-windows/git/issues/521

Note that we temporarily set `core.longpaths = false` in the regression
test; This ensures forward-compatibility with the `core.longpaths`
feature that has not yet been upstreamed from Git for Windows.

Helped-by: René Scharfe 
Helped-by: SZEDER Gábor 
Helped-by: Junio C Hamano 
Signed-off-by: Johannes Schindelin 
---
 builtin/clean.c  |  3 ++-
 t/t7300-clean.sh | 12 
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/builtin/clean.c b/builtin/clean.c
index aaba4af3c2..d5579da716 100644
--- a/builtin/clean.c
+++ b/builtin/clean.c
@@ -34,6 +34,7 @@ static const char *msg_would_remove = N_("Would remove %s\n");
 static const char *msg_skip_git_dir = N_("Skipping repository %s\n");
 static const char *msg_would_skip_git_dir = N_("Would skip repository %s\n");
 static const char *msg_warn_remove_failed = N_("failed to remove %s");
+static const char *msg_warn_lstat_failed = N_("could not lstat %s\n");
 
 enum color_clean {
CLEAN_COLOR_RESET = 0,
@@ -194,7 +195,7 @@ static int remove_dirs(struct strbuf *path, const char *prefix, int force_flag,
strbuf_setlen(path, len);
strbuf_addstr(path, e->d_name);
if (lstat(path->buf, &st))
-   ; /* fall thru */
+   warning_errno(_(msg_warn_lstat_failed), path->buf);
else if (S_ISDIR(st.st_mode)) {
if (remove_dirs(path, prefix, force_flag, dry_run, quiet, &gone))
ret = 1;
diff --git a/t/t7300-clean.sh b/t/t7300-clean.sh
index 7b36954d63..a2c45d1902 100755
--- a/t/t7300-clean.sh
+++ b/t/t7300-clean.sh
@@ -669,4 +669,16 @@ test_expect_success 'git clean -d skips untracked dirs containing ignored files'
test_path_is_missing foo/b/bb
 '
 
+test_expect_success MINGW 'handle clean & core.longpaths = false nicely' '
+   test_config core.longpaths false &&
+   a50=aa &&
+   mkdir -p $a50$a50/$a50$a50/$a50$a50 &&
+   : >"$a50$a50/test.txt" 2>"$a50$a50/$a50$a50/$a50$a50/test.txt" &&
+   # create a temporary outside the working tree to hide from "git clean"
+   test_must_fail git clean -xdf 2>.git/err &&
+   # grepping for a strerror string is unportable but it is OK here with
+   # MINGW prereq
+   test_i18ngrep "too long" .git/err
+'
+
 test_done
-- 
gitgitgadget


[PATCH v2 0/1] Show an error if too-long paths are seen by git clean -dfx

2019-07-18 Thread Johannes Schindelin via GitGitGadget
This is particularly important on Windows, where PATH_MAX is very small
compared to Unix/Linux.

Changes since v1:

 * Matched the warning message style to existing ones,
 * Fixed test in multiple ways:
    * Avoiding touch in favor of : >.
    * Using test_config.
    * Using test_i18ngrep instead of grep to avoid localization problems.
    * Add helpful comments.
 * The commit message now talks about lstat() instead of stat().
 * The commit message also explains where that core.longpaths comes from.

Johannes Schindelin (1):
  clean: show an error message when the path is too long

 builtin/clean.c  |  3 ++-
 t/t7300-clean.sh | 12 
 2 files changed, 14 insertions(+), 1 deletion(-)


base-commit: aa25c82427ae70aebf3b8f970f2afd54e9a2a8c6
Published-As: https://github.com/gitgitgadget/git/releases/tag/pr-219%2Fdscho%2Fclean-long-paths-v2
Fetch-It-Via: git fetch https://github.com/gitgitgadget/git pr-219/dscho/clean-long-paths-v2
Pull-Request: https://github.com/gitgitgadget/git/pull/219

Range-diff vs v1:

 1:  36677556a2 ! 1:  c7b11fe410 clean: show an error message when the path is too long
 @@ -2,7 +2,7 @@
  
  clean: show an error message when the path is too long
  
 -Without an error message when stat() failed, e.g. `git clean` would
 +Without an error message when `lstat()` failed, `git clean` would
  abort without an error message, leaving the user quite puzzled.
  
  In particular on Windows, where the default maximum path length is 
quite
 @@ -15,18 +15,32 @@
  
  https://github.com/git-for-windows/git/issues/521
  
 +Note that we temporarily set `core.longpaths = false` in the regression
 +test; This ensures forward-compatibility with the `core.longpaths`
 +feature that has not yet been upstreamed from Git for Windows.
 +
 +Helped-by: René Scharfe 
 +Helped-by: SZEDER Gábor 
 +Helped-by: Junio C Hamano 
  Signed-off-by: Johannes Schindelin 
  
   diff --git a/builtin/clean.c b/builtin/clean.c
   --- a/builtin/clean.c
   +++ b/builtin/clean.c
 +@@
 + static const char *msg_skip_git_dir = N_("Skipping repository %s\n");
 + static const char *msg_would_skip_git_dir = N_("Would skip repository %s\n");
 + static const char *msg_warn_remove_failed = N_("failed to remove %s");
 ++static const char *msg_warn_lstat_failed = N_("could not lstat %s\n");
 + 
 + enum color_clean {
 +  CLEAN_COLOR_RESET = 0,
  @@
strbuf_setlen(path, len);
strbuf_addstr(path, e->d_name);
if (lstat(path->buf, &st))
  - ; /* fall thru */
 -+ warning("Could not stat path '%s': %s",
 -+ path->buf, strerror(errno));
 ++ warning_errno(_(msg_warn_lstat_failed), path->buf);
else if (S_ISDIR(st.st_mode)) {
if (remove_dirs(path, prefix, force_flag, dry_run, 
quiet, &gone))
ret = 1;
 @@ -39,14 +53,15 @@
   '
   
  +test_expect_success MINGW 'handle clean & core.longpaths = false nicely' '
 -+ git config core.longpaths false &&
 -+ test_when_finished git config --unset core.longpaths &&
 ++ test_config core.longpaths false &&
  + a50=aa &&
  + mkdir -p $a50$a50/$a50$a50/$a50$a50 &&
 -+ touch $a50$a50/test.txt &&
 -+ touch $a50$a50/$a50$a50/$a50$a50/test.txt &&
 ++ : >"$a50$a50/test.txt" 2>"$a50$a50/$a50$a50/$a50$a50/test.txt" &&
 ++ # create a temporary outside the working tree to hide from "git clean"
  + test_must_fail git clean -xdf 2>.git/err &&
 -+ grep "too long" .git/err
 ++ # grepping for a strerror string is unportable but it is OK here with
 ++ # MINGW prereq
 ++ test_i18ngrep "too long" .git/err
  +'
  +
   test_done

-- 
gitgitgadget


Re: [PATCH 1/1] clean: show an error message when the path is too long

2019-07-18 Thread Johannes Schindelin
Hi,

On Wed, 17 Jul 2019, Junio C Hamano wrote:

> Junio C Hamano  writes:
>
> >> The other warnings in that function are issued using
> >> warning_errno() (shorter code, consistency is enforced) and
> >> messages are marked for translation.  That would be nice to have
> >> here as well, no?
> >
> > Absolutely.  Also, downcase "Could" and perhaps use _() around.
>
>
> This one is easy enough (not just in the technical sense, but in the
> sense that it has little room wasting our time bikeshedding), so let's
> tie the loose ends and move on.
>
> I was tempted to fix the proposed log message to excise exaggeration
> (I prefer not to see "very", "important", etc.---other things that is
> said in the message should be enough to convince readers about the
> importance), but didn't.
>
> What I did do was to not just rephrasing the warning message, but to
> give it its own constant and to feed it to warning_errno(), to match
> the other warning message.
>
> I also saved one (or perhaps two) fork(s) from the test script ;-) and
> added a portability note there.

Thanks!

On top, I integrated Gábor's suggestion to use `test_config` and threw
in a paragraph in the commit message to explain why the `core.longpaths`
variable is touched at all.

v2 incoming,
Dscho


Re: [PATCH 1/1] clean: show an error message when the path is too long

2019-07-17 Thread Junio C Hamano
Junio C Hamano  writes:

>> The other warnings in that function are issued using warning_errno()
>> (shorter code, consistency is enforced) and messages are marked for
>> translation.  That would be nice to have here as well, no?
>
> Absolutely.  Also, downcase "Could" and perhaps use _() around.


This one is easy enough (not just in the technical sense, but in the
sense that it has little room wasting our time bikeshedding), so
let's tie the loose ends and move on.

I was tempted to fix the proposed log message to excise exaggeration
(I prefer not to see "very", "important", etc.---other things that
is said in the message should be enough to convince readers about
the importance), but didn't.  

What I did do was to not just rephrasing the warning message, but to
give it its own constant and to feed it to warning_errno(), to match
the other warning message.

I also saved one (or perhaps two) fork(s) from the test script ;-)
and added a portability note there.

1:  d93f701a2e ! 1:  b1e100aa6a clean: show an error message when the path is too long
    @@ Metadata
  ## Commit message ##
 clean: show an error message when the path is too long
 
-Without an error message when stat() failed, e.g. `git clean` would
+Without an error message when lstat() failed, `git clean` would
 abort without an error message, leaving the user quite puzzled.
 
 In particular on Windows, where the default maximum path length is 
quite
@@ Commit message
 https://github.com/git-for-windows/git/issues/521
 
 Signed-off-by: Johannes Schindelin 
+[jc: matched the warning message style to existing ones, fixed test]
 Signed-off-by: Junio C Hamano 
 
  ## builtin/clean.c ##
+@@ builtin/clean.c: static const char *msg_would_remove = N_("Would remove %s\n");
+ static const char *msg_skip_git_dir = N_("Skipping repository %s\n");
+ static const char *msg_would_skip_git_dir = N_("Would skip repository %s\n");
+ static const char *msg_warn_remove_failed = N_("failed to remove %s");
++static const char *msg_warn_lstat_failed = N_("could not lstat %s\n");
+ 
+ enum color_clean {
+   CLEAN_COLOR_RESET = 0,
 @@ builtin/clean.c: static int remove_dirs(struct strbuf *path, const char *prefix, int force_flag,
strbuf_setlen(path, len);
strbuf_addstr(path, e->d_name);
if (lstat(path->buf, &st))
 -  ; /* fall thru */
-+  warning("Could not stat path '%s': %s",
-+  path->buf, strerror(errno));
++  warning_errno(_(msg_warn_lstat_failed), path->buf);
else if (S_ISDIR(st.st_mode)) {
if (remove_dirs(path, prefix, force_flag, dry_run, quiet, &gone))
ret = 1;
@@ t/t7300-clean.sh: test_expect_success 'git clean -d skips untracked dirs contain
 +  test_when_finished git config --unset core.longpaths &&
 +  a50=aa &&
 +  mkdir -p $a50$a50/$a50$a50/$a50$a50 &&
-+  touch $a50$a50/test.txt &&
-+  touch $a50$a50/$a50$a50/$a50$a50/test.txt &&
++  : >"$a50$a50/test.txt" 2>"$a50$a50/$a50$a50/$a50$a50/test.txt" &&
++  # create a temporary outside the working tree to hide from "git clean"
 +  test_must_fail git clean -xdf 2>.git/err &&
-+  grep "too long" .git/err
++  # grepping for a strerror string is unportable but it is OK here with
++  # MINGW prereq
    ++  test_i18ngrep "too long" .git/err
 +'
 +
  test_done



-- >8 --
From: Johannes Schindelin 
Subject: [PATCH] clean: show an error message when the path is too long

Without an error message when lstat() failed, `git clean` would
abort without an error message, leaving the user quite puzzled.

In particular on Windows, where the default maximum path length is quite
small (yet there are ways to circumvent that limit in many cases), it is
very important that users be given an indication why their command
failed because of too long paths when it did.

This test case makes sure that a warning is issued that would have
helped the user who reported this issue:

https://github.com/git-for-windows/git/issues/521

Signed-off-by: Johannes Schindelin 
[jc: matched the warning message style to existing ones, fixed test]
Signed-off-by: Junio C Hamano 
---
 builtin/clean.c  |  3 ++-
 t/t7300-clean.sh | 13 +
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/builtin/clean.c b/builtin/clean.c
index aaba4af3c2..d5579da716 100644
--- a/built

Re: [PATCH 1/1] clean: show an error message when the path is too long

2019-07-16 Thread Junio C Hamano
René Scharfe  writes:

>> diff --git a/builtin/clean.c b/builtin/clean.c
>> index aaba4af3c2..7be689f480 100644
>> --- a/builtin/clean.c
>> +++ b/builtin/clean.c
>> @@ -194,7 +194,8 @@ static int remove_dirs(struct strbuf *path, const char 
>> *prefix, int force_flag,
>>  strbuf_setlen(path, len);
>>  strbuf_addstr(path, e->d_name);
>>  if (lstat(path->buf, &st))
>> -; /* fall thru */
>
> I don't understand the "fall thru" comment here.  It only makes sense in
> switch statements, doesn't it?  And the code after this if/else-if/else
> block is only executed if we pass through here, so why was it placed way
> down in the first place?  Perhaps for historical reasons.

f538a91e ("git-clean: Display more accurate delete messages",
2013-01-11) introduced that line when it first introduced the
function, so it is not inherited from anything else.  As the if/else
cascade ends with a catch-all else that always continues, a failing
lstat() is the only way for the entire loop to break out early, so as
you hinted above, having the "fail, break and return" logic right
there would probably be a better organization of this loop.
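
Something along these lines, perhaps (an untested sketch;
msg_warn_lstat_failed is the constant added in the re-rolled patch,
and the flag updates mirror what the bottom of the loop does today,
if I recall it correctly):

	if (lstat(path->buf, &st)) {
		warning_errno(_(msg_warn_lstat_failed), path->buf);
		*dir_gone = 0;	/* as at the bottom of the loop */
		ret = 1;
		break;
	}
	/* the S_ISDIR()/else arms would stay as they are */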

> Anyway, I'd keep that strange comment, as I don't see a connection to
> your changes.  (Or explain in the commit message why we no longer "fall
> thru", whatever that may mean.  Or perhaps I'm just thick.)
>
>> +warning("Could not stat path '%s': %s",
>> +path->buf, strerror(errno));
>
> The other warnings in that function are issued using warning_errno()
> (shorter code, consistency is enforced) and messages are marked for
> translation.  That would be nice to have here as well, no?

Absolutely.  Also, downcase "Could" and perhaps wrap the string in
_() to mark it for translation.

As to the "fall thru" comment, I tend to agree that it does not fall
through to the next "case" in the usual sense and is confusing.
Mentioning that we removed a confusing and pointless comment in the
log message would be nice, but I'd vote for removing it if I was
asked.

Thanks.







Re: [PATCH 1/1] clean: show an error message when the path is too long

2019-07-16 Thread SZEDER Gábor
On Tue, Jul 16, 2019 at 07:04:23AM -0700, Johannes Schindelin via GitGitGadget 
wrote:
> +test_expect_success MINGW 'handle clean & core.longpaths = false nicely' '
> + git config core.longpaths false &&
> + test_when_finished git config --unset core.longpaths &&

'test_config core.longpaths false' could replace the above two lines
with a single one.

> + a50=aa &&
> + mkdir -p $a50$a50/$a50$a50/$a50$a50 &&
> + touch $a50$a50/test.txt &&
> + touch $a50$a50/$a50$a50/$a50$a50/test.txt &&

Is there a reason for using 'touch' to create these files here,
instead of the usual '>"$file"' shell redirections?  Something
Windows/MinGW/long path specific, perhaps?
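
If there is not, then together with 'test_config' the setup could be
shrunk to something like this (an untested sketch, with the long
variable abbreviated):

	test_expect_success MINGW 'handle clean & core.longpaths = false nicely' '
		test_config core.longpaths false &&
		# $a50 would be a long run of "a"s, abbreviated in this sketch
		a50=aa &&
		mkdir -p $a50$a50/$a50$a50/$a50$a50 &&
		>"$a50$a50/test.txt" &&
		>"$a50$a50/$a50$a50/$a50$a50/test.txt" &&
		test_must_fail git clean -xdf 2>.git/err &&
		grep "too long" .git/err
	'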

> + test_must_fail git clean -xdf 2>.git/err &&

I was puzzled when I saw that '2>.git/err' first, because why put that
file in the .git directory?!  But of course 'git clean' would delete
that file if it were in the worktree.  OK.

> + grep "too long" .git/err
> +'
> +
>  test_done
> -- 
> gitgitgadget


Re: [PATCH 1/1] clean: show an error message when the path is too long

2019-07-16 Thread René Scharfe
Am 16.07.19 um 16:04 schrieb Johannes Schindelin via GitGitGadget:
> From: Johannes Schindelin 
>
> Without an error message when stat() failed, e.g. `git clean` would
> abort without an error message, leaving the user quite puzzled.
>
> In particular on Windows, where the default maximum path length is quite
> small (yet there are ways to circumvent that limit in many cases), it is
> very important that users be given an indication why their command
> failed because of too long paths when it did.
>
> This test case makes sure that a warning is issued that would have
> helped the user who reported this issue:
>
>   https://github.com/git-for-windows/git/issues/521
>
> Signed-off-by: Johannes Schindelin 
> ---
>  builtin/clean.c  |  3 ++-
>  t/t7300-clean.sh | 11 +++
>  2 files changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/builtin/clean.c b/builtin/clean.c
> index aaba4af3c2..7be689f480 100644
> --- a/builtin/clean.c
> +++ b/builtin/clean.c
> @@ -194,7 +194,8 @@ static int remove_dirs(struct strbuf *path, const char 
> *prefix, int force_flag,
>   strbuf_setlen(path, len);
>   strbuf_addstr(path, e->d_name);
>   if (lstat(path->buf, &st))
> - ; /* fall thru */

I don't understand the "fall thru" comment here.  It only makes sense in
switch statements, doesn't it?  And the code after this if/else-if/else
block is only executed if we pass through here, so why was it placed way
down in the first place?  Perhaps for historical reasons.

dir.c::remove_dir_recurse() has such a comment as well, by the way.

Anyway, I'd keep that strange comment, as I don't see a connection to
your changes.  (Or explain in the commit message why we no longer "fall
thru", whatever that may mean.  Or perhaps I'm just thick.)

> + warning("Could not stat path '%s': %s",
> + path->buf, strerror(errno));

The other warnings in that function are issued using warning_errno()
(shorter code, consistency is enforced) and messages are marked for
translation.  That would be nice to have here as well, no?
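
Something like this, perhaps (just a sketch; the exact wording is of
course up to you):

	warning_errno(_("could not lstat %s"), path->buf);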

>   else if (S_ISDIR(st.st_mode)) {
>   if (remove_dirs(path, prefix, force_flag, dry_run, 
> quiet, &gone))
>   ret = 1;
> diff --git a/t/t7300-clean.sh b/t/t7300-clean.sh
> index 7b36954d63..aa08443f6a 100755
> --- a/t/t7300-clean.sh
> +++ b/t/t7300-clean.sh
> @@ -669,4 +669,15 @@ test_expect_success 'git clean -d skips untracked dirs 
> containing ignored files'
>   test_path_is_missing foo/b/bb
>  '
>
> +test_expect_success MINGW 'handle clean & core.longpaths = false nicely' '
> + git config core.longpaths false &&
> + test_when_finished git config --unset core.longpaths &&
> + a50=aa &&
> + mkdir -p $a50$a50/$a50$a50/$a50$a50 &&
> + touch $a50$a50/test.txt &&
> + touch $a50$a50/$a50$a50/$a50$a50/test.txt &&
> + test_must_fail git clean -xdf 2>.git/err &&
> + grep "too long" .git/err

The pattern "too long" is expected to be supplied by strerror(3), right?
Depending on the locale it might return a message in a different
language, so test_i18ngrep should be used here even if the warning above
is not translated, right?

> +'
> +
>  test_done
>



[PATCH 0/1] Show an error if too-long paths are seen by git clean -dfx

2019-07-16 Thread Johannes Schindelin via GitGitGadget
This is particularly important on Windows, where PATH_MAX is very small
compared to Unix/Linux.

Johannes Schindelin (1):
  clean: show an error message when the path is too long

 builtin/clean.c  |  3 ++-
 t/t7300-clean.sh | 11 +++
 2 files changed, 13 insertions(+), 1 deletion(-)


base-commit: aa25c82427ae70aebf3b8f970f2afd54e9a2a8c6
Published-As: 
https://github.com/gitgitgadget/git/releases/tag/pr-219%2Fdscho%2Fclean-long-paths-v1
Fetch-It-Via: git fetch https://github.com/gitgitgadget/git 
pr-219/dscho/clean-long-paths-v1
Pull-Request: https://github.com/gitgitgadget/git/pull/219
-- 
gitgitgadget


[PATCH 1/1] clean: show an error message when the path is too long

2019-07-16 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

Without an error message when stat() failed, e.g. `git clean` would
abort without an error message, leaving the user quite puzzled.

In particular on Windows, where the default maximum path length is quite
small (yet there are ways to circumvent that limit in many cases), it is
very important that users be given an indication why their command
failed because of too long paths when it did.

This test case makes sure that a warning is issued that would have
helped the user who reported this issue:

https://github.com/git-for-windows/git/issues/521

Signed-off-by: Johannes Schindelin 
---
 builtin/clean.c  |  3 ++-
 t/t7300-clean.sh | 11 +++
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/builtin/clean.c b/builtin/clean.c
index aaba4af3c2..7be689f480 100644
--- a/builtin/clean.c
+++ b/builtin/clean.c
@@ -194,7 +194,8 @@ static int remove_dirs(struct strbuf *path, const char 
*prefix, int force_flag,
strbuf_setlen(path, len);
strbuf_addstr(path, e->d_name);
if (lstat(path->buf, &st))
-   ; /* fall thru */
+   warning("Could not stat path '%s': %s",
+   path->buf, strerror(errno));
else if (S_ISDIR(st.st_mode)) {
if (remove_dirs(path, prefix, force_flag, dry_run, 
quiet, &gone))
ret = 1;
diff --git a/t/t7300-clean.sh b/t/t7300-clean.sh
index 7b36954d63..aa08443f6a 100755
--- a/t/t7300-clean.sh
+++ b/t/t7300-clean.sh
@@ -669,4 +669,15 @@ test_expect_success 'git clean -d skips untracked dirs 
containing ignored files'
test_path_is_missing foo/b/bb
 '
 
+test_expect_success MINGW 'handle clean & core.longpaths = false nicely' '
+   git config core.longpaths false &&
+   test_when_finished git config --unset core.longpaths &&
+   a50=aa &&
+   mkdir -p $a50$a50/$a50$a50/$a50$a50 &&
+   touch $a50$a50/test.txt &&
+   touch $a50$a50/$a50$a50/$a50$a50/test.txt &&
+   test_must_fail git clean -xdf 2>.git/err &&
+   grep "too long" .git/err
+'
+
 test_done
-- 
gitgitgadget


[PATCH v5 05/10] list-objects-filter-options: move error check up

2019-06-27 Thread Matthew DeVore
Move the check that filter_options->choice is already set higher up
the call stack. It can only be set when the gentle parse function is
called from one of the two call sites.

This is important because in an upcoming patch this may or may not be an
error, and whether it is an error is only known to the
parse_list_objects_filter function.

Signed-off-by: Matthew DeVore 
---
 list-objects-filter-options.c | 9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/list-objects-filter-options.c b/list-objects-filter-options.c
index 75d0236ee2..5fe2814841 100644
--- a/list-objects-filter-options.c
+++ b/list-objects-filter-options.c
@@ -28,25 +28,22 @@ static int parse_combine_filter(
  * expand_list_objects_filter_spec() first).  We also "intern" the arg for the
  * convenience of the current command.
  */
 static int gently_parse_list_objects_filter(
struct list_objects_filter_options *filter_options,
const char *arg,
struct strbuf *errbuf)
 {
const char *v0;
 
-   if (filter_options->choice) {
-   strbuf_addstr(
-   errbuf, _("multiple filter-specs cannot be combined"));
-   return 1;
-   }
+   if (filter_options->choice)
+   BUG("filter_options already populated");
 
if (!strcmp(arg, "blob:none")) {
filter_options->choice = LOFC_BLOB_NONE;
return 0;
 
} else if (skip_prefix(arg, "blob:limit=", &v0)) {
if (git_parse_ulong(v0, &filter_options->blob_limit_value)) {
filter_options->choice = LOFC_BLOB_LIMIT;
return 0;
}
@@ -178,20 +175,22 @@ static int parse_combine_filter(
list_objects_filter_release(filter_options);
memset(filter_options, 0, sizeof(*filter_options));
}
return result;
 }
 
 int parse_list_objects_filter(struct list_objects_filter_options 
*filter_options,
  const char *arg)
 {
struct strbuf buf = STRBUF_INIT;
+   if (filter_options->choice)
+   die(_("multiple filter-specs cannot be combined"));
filter_options->filter_spec = strdup(arg);
if (gently_parse_list_objects_filter(filter_options, arg, &buf))
die("%s", buf.buf);
return 0;
 }
 
 int opt_parse_list_objects_filter(const struct option *opt,
  const char *arg, int unset)
 {
struct list_objects_filter_options *filter_options = opt->value;
-- 
2.21.0



[PATCH v6 02/15] fetch-object: make functions return an error code

2019-06-25 Thread Christian Couder
From: Christian Couder 

The callers of fetch_object() and fetch_objects() might
be interested in knowing whether these functions succeeded or not.

Signed-off-by: Christian Couder 
Signed-off-by: Junio C Hamano 
---
 fetch-object.c | 13 -
 fetch-object.h |  4 ++--
 sha1-file.c|  4 ++--
 3 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/fetch-object.c b/fetch-object.c
index 4266548800..eac4d448ef 100644
--- a/fetch-object.c
+++ b/fetch-object.c
@@ -5,11 +5,12 @@
 #include "transport.h"
 #include "fetch-object.h"
 
-static void fetch_refs(const char *remote_name, struct ref *ref)
+static int fetch_refs(const char *remote_name, struct ref *ref)
 {
struct remote *remote;
struct transport *transport;
int original_fetch_if_missing = fetch_if_missing;
+   int res;
 
fetch_if_missing = 0;
remote = remote_get(remote_name);
@@ -19,12 +20,14 @@ static void fetch_refs(const char *remote_name, struct ref 
*ref)
 
transport_set_option(transport, TRANS_OPT_FROM_PROMISOR, "1");
transport_set_option(transport, TRANS_OPT_NO_DEPENDENTS, "1");
-   transport_fetch_refs(transport, ref);
+   res = transport_fetch_refs(transport, ref);
fetch_if_missing = original_fetch_if_missing;
+
+   return res;
 }
 
-void fetch_objects(const char *remote_name, const struct object_id *oids,
-  int oid_nr)
+int fetch_objects(const char *remote_name, const struct object_id *oids,
+ int oid_nr)
 {
struct ref *ref = NULL;
int i;
@@ -36,5 +39,5 @@ void fetch_objects(const char *remote_name, const struct 
object_id *oids,
new_ref->next = ref;
ref = new_ref;
}
-   fetch_refs(remote_name, ref);
+   return fetch_refs(remote_name, ref);
 }
diff --git a/fetch-object.h b/fetch-object.h
index d6444caa5a..7bcc7cadb0 100644
--- a/fetch-object.h
+++ b/fetch-object.h
@@ -3,7 +3,7 @@
 
 struct object_id;
 
-void fetch_objects(const char *remote_name, const struct object_id *oids,
-  int oid_nr);
+int fetch_objects(const char *remote_name, const struct object_id *oids,
+ int oid_nr);
 
 #endif
diff --git a/sha1-file.c b/sha1-file.c
index 888b6024d5..819d32cdb8 100644
--- a/sha1-file.c
+++ b/sha1-file.c
@@ -1381,8 +1381,8 @@ int oid_object_info_extended(struct repository *r, const 
struct object_id *oid,
!already_retried && r == the_repository &&
!(flags & OBJECT_INFO_SKIP_FETCH_OBJECT)) {
/*
-* TODO Investigate having fetch_object() return
-* TODO error/success and stopping the music here.
+* TODO Investigate checking fetch_object() return
+* TODO value and stopping on error here.
 * TODO Pass a repository struct through fetch_object,
 * such that arbitrary repositories work.
 */
-- 
2.22.0.229.ga13d9ffdf7.dirty



Re: Git status parse error after v.2.22.0 upgrade

2019-06-25 Thread Phillip Wood

On 13/06/2019 18:43, Phillip Wood wrote:

On 13/06/2019 17:24, Jeff King wrote:

On Thu, Jun 13, 2019 at 09:05:16AM -0700, Junio C Hamano wrote:


Two issues "the sequencer" folks may want to address are

  (1) make the one that reads an irrelevant/stale 'todo' file more
  careful to ignore errors in such a file;

  (2) make the sequencer machinery more careful to clean up after it
  is done or it is aborted  (for example, "git reset --hard"
  could remove these state files preemptively even when a rebase
  is not in progress, I would think).

I think we already had some patches toward the latter recently.


Maybe I am not understanding it correctly, but do you mean in (2) that
"git reset --hard" would always clear sequencer state?


I think the commit Junio is referring to is
b07d9bfd17 ("commit/reset: try to clean up sequencer state", 2019-04-16)
which will only remove the sequencer directory if it stops after the
pick was the last one in the series. The idea is that if cherry-pick
stops for a conflict resolution on the last pick user commits the result
directly or run reset without running `cherry-pick --continue`
afterwards the sequencer state gets cleaned up properly.


That seems
undesirable to me, as I often use "git reset" to move the head around
during a rebase. (e.g., when using "rebase -i" to split apart a commit,
I stop on that commit, "git reset" back to the parent, and then
selectively "add -p" the various parts).

Direction (1) seems quite sensible to me, though.


Now that we try harder to clean up the sequencer state I wonder if that
would just cover up bugs where the state has not been removed when it
should have been.


When I wrote that, it hadn't dawned on me that, if there is an error, the 
status will not tell the user that a cherry-pick is in progress, which is 
what we really want so that they are alerted to the stale sequencer state. 
I've posted a series [1] to address this (sadly gitgitgadget won't let me 
post it on this thread).


Best Wishes

Phillip

[1] 
https://public-inbox.org/git/pull.275.git.gitgitgad...@gmail.com/T/#mf57a4ab95ba907fbf2d06ec64e9b676db158eace




That can lead to unpleasant problems if the user
aborts a single revert/cherry-pick when there is stale sequencer state
around, as it rewinds HEAD to the commit at which the stale
cherry-pick/revert was started, as explained in the commit message of
b07d9bfd17 ("commit/reset: try to clean up sequencer state", 2019-04-16)

If we do want to do something then maybe teaching gc not to collect
commits listed in .git/sequencer/todo and
.git/rebase-merge/git-rebase-todo would be useful.

Best Wishes

Phillip


-Peff





Re: windows: error cannot lock ref ... unable to create lock

2019-06-24 Thread Johannes Schindelin
Hi Philip,

On Sat, 22 Jun 2019, Philip Oakley wrote:

> On 18/06/2019 18:01, Eric Sunshine wrote:
> > On Tue, Jun 18, 2019 at 12:39 PM Anthony Sottile  wrote:
> > > + git fetch origin --tags
> > > Unpacking objects: 100% (10/10), done.
> > >  From https://github.com/asottile-archive/git-windows-branch-test
> > >   * [new branch]  master -> origin/master
> > > error: cannot lock ref 'refs/remotes/origin/pr/aux': Unable to create
> > > 'C:/Users/IEUser/x/x/.git/refs/remotes/origin/pr/aux.lock': No such
> > > file or directory
> > >   ! [new branch]  pr/aux -> origin/pr/aux  (unable to update local
> > >   ref)
> > AUX is a reserved[1] filename on Windows. Quoting from that source:
> >
> >  Do not use the following reserved names for the name of a file:
> >  CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7,
> >  COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and
> >  LPT9. Also avoid these names followed immediately by an
> >  extension...
> >
> > The default Git "ref store" is filesystem-based, so a branch named
> > "aux" is problematic. Other ref store implementations would not be
> > subject to this limitation (though I'm not sure what the state of the
> > others is -- someone with more knowledge can probably answer that).
> >
> > [1]:
> > https://docs.microsoft.com/en-us/windows/desktop/FileIO/naming-a-file#naming-conventions
> This sounds interesting. I thought (but I'm not certain) that Git for Windows
> avoided creating files in the working tree with such problematic names, so
> that a clone of a repo that contained a file "AUX" (any case, any extension
> IIRC), would be bypassed, possibly with a warning message.
>
> However this looks to be a slightly different case where a _branch_ called
> "AUX" (lower cased) has been created within the clone, and it's a problem not
> trapped. Maybe worth creating a proper issue on the Git-for-Windows repo. Also
> cc'ing Dscho who may remember better than I.

He doesn't ;-)

Ciao,
Dscho



Re: windows: error cannot lock ref ... unable to create lock

2019-06-22 Thread Philip Oakley

On 18/06/2019 18:01, Eric Sunshine wrote:

On Tue, Jun 18, 2019 at 12:39 PM Anthony Sottile  wrote:

+ git fetch origin --tags
Unpacking objects: 100% (10/10), done.
 From https://github.com/asottile-archive/git-windows-branch-test
  * [new branch]  master -> origin/master
error: cannot lock ref 'refs/remotes/origin/pr/aux': Unable to create
'C:/Users/IEUser/x/x/.git/refs/remotes/origin/pr/aux.lock': No such
file or directory
  ! [new branch]  pr/aux -> origin/pr/aux  (unable to update local ref)

AUX is a reserved[1] filename on Windows. Quoting from that source:

 Do not use the following reserved names for the name of a file:
 CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7,
 COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and
 LPT9. Also avoid these names followed immediately by an
 extension...

The default Git "ref store" is filesystem-based, so a branch named
"aux" is problematic. Other ref store implementations would not be
subject to this limitation (though I'm not sure what the state of the
others is -- someone with more knowledge can probably answer that).

[1]: 
https://docs.microsoft.com/en-us/windows/desktop/FileIO/naming-a-file#naming-conventions
This sounds interesting. I thought (but I'm not certain) that Git for 
Windows avoided creating files in the working tree with such problematic 
names, so that a clone of a repo that contained a file "AUX" (any case, 
any extension IIRC), would be bypassed, possibly with a warning message.


However this looks to be a slightly different case where a _branch_ 
called "AUX" (lower cased) has been created within the clone, and it's a 
problem not trapped. Maybe worth creating a proper issue on the 
Git-for-Windows repo. Also cc'ing Dscho who may remember better than I.

--
Philip


Re: windows: error cannot lock ref ... unable to create lock

2019-06-18 Thread Anthony Sottile
On Tue, Jun 18, 2019 at 10:01 AM Eric Sunshine  wrote:
>
> On Tue, Jun 18, 2019 at 12:39 PM Anthony Sottile  wrote:
> > + git fetch origin --tags
> > Unpacking objects: 100% (10/10), done.
> > From https://github.com/asottile-archive/git-windows-branch-test
> >  * [new branch]  master -> origin/master
> > error: cannot lock ref 'refs/remotes/origin/pr/aux': Unable to create
> > 'C:/Users/IEUser/x/x/.git/refs/remotes/origin/pr/aux.lock': No such
> > file or directory
> >  ! [new branch]  pr/aux -> origin/pr/aux  (unable to update local 
> > ref)
>
> AUX is a reserved[1] filename on Windows. Quoting from that source:
>
> Do not use the following reserved names for the name of a file:
> CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7,
> COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and
> LPT9. Also avoid these names followed immediately by an
> extension...
>
> The default Git "ref store" is filesystem-based, so a branch named
> "aux" is problematic. Other ref store implementations would not be
> subject to this limitation (though I'm not sure what the state of the
> others is -- someone with more knowledge can probably answer that).
>
> [1]: 
> https://docs.microsoft.com/en-us/windows/desktop/FileIO/naming-a-file#naming-conventions

Oh of course, totally forgot about the special files on Windows.

Thanks for the quick response and helpful link!

Anthony


Re: windows: error cannot lock ref ... unable to create lock

2019-06-18 Thread Eric Sunshine
On Tue, Jun 18, 2019 at 12:39 PM Anthony Sottile  wrote:
> + git fetch origin --tags
> Unpacking objects: 100% (10/10), done.
> From https://github.com/asottile-archive/git-windows-branch-test
>  * [new branch]  master -> origin/master
> error: cannot lock ref 'refs/remotes/origin/pr/aux': Unable to create
> 'C:/Users/IEUser/x/x/.git/refs/remotes/origin/pr/aux.lock': No such
> file or directory
>  ! [new branch]  pr/aux -> origin/pr/aux  (unable to update local ref)

AUX is a reserved[1] filename on Windows. Quoting from that source:

Do not use the following reserved names for the name of a file:
CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7,
COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and
LPT9. Also avoid these names followed immediately by an
extension...

The default Git "ref store" is filesystem-based, so a branch named
"aux" is problematic. Other ref store implementations would not be
subject to this limitation (though I'm not sure what the state of the
others is -- someone with more knowledge can probably answer that).

[1]: 
https://docs.microsoft.com/en-us/windows/desktop/FileIO/naming-a-file#naming-conventions
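
To make the mapping concrete: with the loose-ref backend each ref is
kept as a plain file under .git/, and updating it first takes a lock
file next to it, so the branch name literally becomes a Windows file
name (illustration only):

  refs/remotes/origin/pr/aux
-> .git/refs/remotes/origin/pr/aux       (the loose ref itself)
-> .git/refs/remotes/origin/pr/aux.lock  (lock taken while updating)

That lock file is exactly the "Unable to create ... aux.lock" step at
which the fetch above fails.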


Re: windows: error cannot lock ref ... unable to create lock

2019-06-18 Thread Anthony Sottile
hit send too quickly, here's my version information:

$ git --version
git version 2.22.0.windows.1

On Tue, Jun 18, 2019 at 9:38 AM Anthony Sottile  wrote:
>
> I've set up a demo problematic repository on github:
> https://github.com/asottile-archive/git-windows-branch-test
>
> The minimal reproduction is:
>
> rm -rf x
> git init x
> cd x
> git remote add origin 
> https://github.com/asottile-archive/git-windows-branch-test
> git fetch origin --tags
>
> Here's the output:
>
> + git init x
> Initialized empty Git repository in C:/Users/IEUser/x/x/.git/
> + cd x
> + git remote add origin
> https://github.com/asottile-archive/git-windows-branch-test
> + git fetch origin --tags
> remote: Enumerating objects: 10, done.
> remote: Counting objects: 100% (10/10), done.
> remote: Compressing objects: 100% (4/4), done.
> remote: Total 10 (delta 0), reused 0 (delta 0), pack-reused 0
> Unpacking objects: 100% (10/10), done.
> From https://github.com/asottile-archive/git-windows-branch-test
>  * [new branch]  master -> origin/master
> error: cannot lock ref 'refs/remotes/origin/pr/aux': Unable to create
> 'C:/Users/IEUser/x/x/.git/refs/remotes/origin/pr/aux.lock': No such
> file or directory
>  ! [new branch]  pr/aux -> origin/pr/aux  (unable to update local ref)
>
>
> real-world issue: https://github.com/pre-commit/pre-commit/issues/1058
>
> Thanks
>
> Anthony

