Re: Fetching commit instead of ref

2017-12-19 Thread Carlsson, Magnus
Thanks Junio for your answer!

From my simple tests it seems that GitHub doesn't have this on by default,
so that seems a little dull.

Do you know if there is a way to actually find a ref that contains the SHA from 
a remote?

Finally, you say that it's a security feature, but from the log it feels
like you are actually downloading things first and only then determining
whether I am allowed to fetch them?

$ git fetch subrepo 50f730db793e0733b159326c5a3e78fd48cedfec:refs/remote/subrepo/foo-commit
remote: Counting objects: 2311, done.
remote: Total 2311 (delta 0), reused 0 (delta 0), pack-reused 2311
Receiving objects: 100% (2311/2311), 703.64 KiB | 0 bytes/s, done.
Resolving deltas: 100% (1174/1174), done.
error: Server does not allow request for unadvertised object 50f730db793e0733b159326c5a3e78fd48cedfec

-- Magnus


MAGNUS CARLSSON
Staff Software Engineer
ARRIS

o: +46 13 36 75 92
e: magnus.carls...@arris.com
w: www.arris.com



From: Junio C Hamano 
Sent: Tuesday, December 19, 2017 17:07
To: Carlsson, Magnus
Cc: git@vger.kernel.org
Subject: Re: Fetching commit instead of ref

"Carlsson, Magnus"  writes:

> I understand that you don't want to allow people fetching single
> commits all the time, but is there a reason that you don't allow
> SHA instead of references when you fetch an entire tree?

It is not that we do not want to allow a "single commit" fetch.  We do
not allow fetching histories from an arbitrary point in the graph,
unless we can prove that what gets fetched is what the serving end
intended to expose to the fetcher---you should view it as a security
issue.

The default way to determine that a fetch request is getting only
the things the serving end wanted to publish is to check that the
requested tips of the histories to be fetched are all pointed at by
refs.  This means that a client of a hosting site can rewind and
repoint a ref after accidentally pushing a wrong branch that
contained undesirable matter.  Moving the ref to make the
undesirable object unreachable is all that is needed to "hide" it
from the public view, even when the client does not have a way to
trigger "gc" immediately on the hosting site.
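
A minimal illustration of that rewind-and-repoint flow (the branch name is
illustrative):

    $ git reset --hard HEAD~1        # drop the commit with the undesirable matter
    $ git push --force origin topic  # repoint the published ref; the bad commit
                                     # is now unreachable from any advertised ref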

There are variants of this security measure.  If the hosting site is
willing to spend extra cycles, we could loosen the "is the request
fetching only what is published?" check to see if the requested tips
are reachable from the tips of refs, instead of exactly at refs.  It
preserves "a mistake can be hidden with a forced push" property the
same way as above, but it is costly and is prone to abuse.

If you are confident about your pushes all the time, you can take
a stance "anything in the repository is meant to be read".  That is
what the "allowAnySHA1InWant" does.  Not everybody however wants to
do this for obvious reasons, so it is not a default.
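
For reference, these behaviors map to upload-pack configuration on the
serving side (a sketch; these are settings for the hosting site's
repository, not something the fetching client can set):

    $ git config uploadpack.allowReachableSHA1InWant true  # tips reachable from some ref
    $ git config uploadpack.allowAnySHA1InWant true        # any object in the repository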




Re: [PATCH v4] Makefile: replace perl/Makefile.PL with simple make rules

2017-12-19 Thread Todd Zullinger
Hi Ævar,

Ævar Arnfjörð Bjarmason wrote:
> Here's a hopefully final version. The only difference with v3 is:
> 
> -local @_ = ($caller, @_);
> +unshift @_, $caller;
> 
> As it turns out localizing @_ isn't something that worked properly
> until
> https://github.com/Perl/perl5/commit/049bd5ffd62b73325d4b2e75e59ba04b3569137d
> 
> That commit isn't part of the 5.16.3 version that ships with CentOS 7,
> which explains why Michael J Gruber had issues with it. I've tested
> this on CentOS 7 myself, it passes all tests now.

Thanks for tracking this down!

FWIW, I applied this version to next and tested it with
CentOS 6 and 7.  The tests pass on both (though there are
some unrelated failures on CentOS 6 in t5700-protocol-v1,
which I haven't looked into further yet).

I also applied this patch to 2.15.1 and ran the tests in the
Fedora build system for all fedora and epel releases, which
also passed (though with some spurious git-svn failures on
x86_64 in fedora 28, AKA rawhide).

The .pmc extension seems to cause rpm to fail to parse the
files for rpm 'provides' as it normally would.  This causes
scripts like git-send-email, which generate a 'requires' on
Git::Error, to fail to find anything which provides it.
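
A hedged way to inspect what rpm's dependency generator extracted from a
built package (the package file name below is purely illustrative):

    $ rpm -qp --provides perl-Git-2.15.1-1.fc27.noarch.rpm | grep '^perl('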

I'm not familiar with the .pmc extension.  Searching the
fedora repositories, there is only one other package - and
one file within it - which has a .pmc extension.

(The package is perl-test, the file is
/usr/libexec/perl5-tests/perl-tests/t/run/flib/t2.pmc.)

Perhaps it's a bug in rpm's perl dependency generator, but
I'd like to think that git wouldn't be the first package to
find it.

Is the .pmc extension important to ensure these files are
loaded in the right order?  Since they're all in the Git
namespace, I don't imagine there should be anything else in
@INC which would be provided by the system or another
package.

Pardon my ignorance if I've missed the obvious (I haven't
fully read "perldoc -f require" which you referenced in the
commit message).

-- 
Todd
~~
Suppose I were a member of Congress, and suppose I were an idiot. But,
I repeat myself.
-- Mark Twain



Re: [PATCH v6 0/2] http-backend: respect CONTENT_LENGTH as specified by rfc3875

2017-12-19 Thread Max Kirillov
On Tue, Dec 19, 2017 at 02:13:33PM -0800, Junio C Hamano wrote:
> So... is there going to be an update (or has there been one and I
> missed it)?

Yes, there is!  I wanted to add tests for the cases Jeff
mentioned.  It is almost done; I just need to check that I did not
miss some note.


Re: [PATCH] http: support CURLPROXY_HTTPS

2017-12-19 Thread Wei Shuyu

On 2017-12-20 10:22, Wei Shuyu wrote:

CURLPROXY_HTTPS is intended for run-time detection. I don't think it's a
good idea to use it with #ifdef.


s/CURLPROXY_HTTPS/CURL_VERSION_HTTPS_PROXY/


Re: [PATCH 5/5] submodule: submodule_move_head omits old argument in forced case

2017-12-19 Thread Jonathan Nieder
Stefan Beller wrote:
> On Tue, Dec 19, 2017 at 2:44 PM, Jonathan Nieder  wrote:

>>   checkout -f
>> I think I would expect this not to touch a submodule that
>> hasn't changed, since that would be consistent with its
>> behavior on files that haven't changed.
[...]
> I may have a different understanding of git commands than you do,
> but a plain "checkout -f" with no further arguments is the same as
> a hard reset IMHO, and could be used interchangeably?

A kind person pointed out privately that you were talking about
"git checkout -f" with no further arguments, not "git checkout -f
".  In that context, the competing meanings I mentioned in
https://crbug.com/git/5 don't exist: either a given entry in the
worktree matches the index or it doesn't.

So plain "git checkout -f" is similar to plain "git reset --hard"
in that it means "make the worktree (and index, in the reset case)
look just like this".  checkout -f makes the worktree look like the
index; reset --hard makes the worktree and index look like HEAD.
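
Put tersely (just restating the above as commands, not new behavior):

    $ git checkout -f      # make the worktree match the index
    $ git reset --hard     # make the index and worktree match HEAD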

In that context, more aggressively making the submodule in the
worktree get into the defined state sounds to me like a good change.

Hopefully my confusion helps illustrate what a commit message focusing
on the end user experience might want to mention.

Thanks again,
Jonathan


[PATCH v4] Makefile: replace perl/Makefile.PL with simple make rules

2017-12-19 Thread Ævar Arnfjörð Bjarmason
Replace the perl/Makefile.PL and the fallback perl/Makefile used under
NO_PERL_MAKEMAKER=NoThanks with a much simpler implementation heavily
inspired by how the i18n infrastructure's build process works[1].

The reason for having the Makefile.PL in the first place is that it
was initially[2] building a perl C binding to interface with libgit;
that functionality was removed[3] before Git.pm ever made it to
the master branch.

We've since started maintaining a fallback perl/Makefile, as
MakeMaker wouldn't work on some platforms[4]. That's just the tip of
the iceberg. We have the PM.stamp hack in the top-level Makefile[5] to
detect whether we need to regenerate the perl/perl.mak, which I fixed
just recently to deal with issues like the perl version changing from
under us[6].

There is absolutely no reason why this needs to be so complex
anymore. All we were getting out of this elaborate Rube Goldberg
machine was copying perl/* to perl/blib/*, doing a string replacement
on the *.pm files to hardcode @@LOCALEDIR@@ in the source, and
pod2man-ing Git.pm & friends.

So replace the whole thing with something that's pretty much a copy of
how we generate po/build/**.mo from po/*.po, just with a small sed(1)
command instead of msgfmt. As that's being done, rename the files
from *.pm to *.pmc just to indicate that they're generated (see
"perldoc -f require").

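Roughly, the per-module generation step amounts to something like this (an
illustrative sketch, not the literal Makefile rule; the path layout and the
$localedir variable are made up for the example):

    $ mkdir -p perl/build/lib/Git &&
      sed -e 's|@@LOCALEDIR@@|'"$localedir"'|g' \
          <perl/Git/I18N.pm >perl/build/lib/Git/I18N.pmc
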
While I'm at it, change the fallback for Error.pm from being something
where we'll ship our own Error.pm if one doesn't exist at build time
to one where we just use a Git::Error wrapper that'll always prefer
the system-wide Error.pm, only falling back to our own copy if it
really doesn't exist at runtime. It's now shipped as
Git::FromCPAN::Error, making it easy to add other modules to
Git::FromCPAN::* in the future if that's needed.

Functional changes:

 * This will not always install into perl's idea of its global
   "installsitelib". This only potentially matters for packagers that
   need to expose Git.pm for non-git use, and as explained in the
   INSTALL file there's a trivial workaround.

 * The scripts themselves will 'use lib' the target directory, but if
   INSTLIBDIR is set it overrides it. It doesn't have to be this way,
   it could be set in addition to INSTLIBDIR, but my reading of [7] is
   that this is the desired behavior.

 * We don't build man pages for all of the perl modules as we used to,
   only Git(3pm). As discussed on-list[8], building installed
   manpages for purely internal APIs like Git::I18N or
   private-Error.pm was always a bug anyway, and all the Git::SVN::*
   ones say they're internal APIs.

   There are apparently external users of Git.pm, but I don't expect
   there to be any of the others.

   As a side-effect of these general changes the perl documentation
   is now only installed by install-{doc,man}, not by a mere "install" as
   before.

1. 5e9637c629 ("i18n: add infrastructure for translating Git with
   gettext", 2011-11-18)

2. b1edc53d06 ("Introduce Git.pm (v4)", 2006-06-24)

3. 18b0fc1ce1 ("Git.pm: Kill Git.xs for now", 2006-09-23)

4. f848718a69 ("Make perl/ build procedure ActiveState friendly.",
   2006-12-04)

5. ee9be06770 ("perl: detect new files in MakeMaker builds",
   2012-07-27)

6. c59c4939c2 ("perl: regenerate perl.mak if perl -V changes",
   2017-03-29)

7. 0386dd37b1 ("Makefile: add PERLLIB_EXTRA variable that adds to
   default perl path", 2013-11-15)

8. 87bmjjv1pu@evledraar.booking.com ("Re: [PATCH] Makefile:
   replace perl/Makefile.PL with simple make rules")

Signed-off-by: Ævar Arnfjörð Bjarmason 
---

Here's a hopefully final version. The only difference with v3 is:

-local @_ = ($caller, @_);
+unshift @_, $caller;

As it turns out localizing @_ isn't something that worked properly
until
https://github.com/Perl/perl5/commit/049bd5ffd62b73325d4b2e75e59ba04b3569137d

That commit isn't part of the 5.16.3 version that ships with CentOS 7,
which explains why Michael J Gruber had issues with it. I've tested
this on CentOS 7 myself, it passes all tests now.

 INSTALL  | 17 -
 Makefile | 67 ++
 contrib/examples/git-difftool.perl   |  2 +-
 git-send-email.perl  |  2 +-
 perl/.gitignore  |  9 +--
 perl/Git.pm  |  2 +-
 perl/Git/Error.pm| 46 
 perl/{private-Error.pm => Git/FromCPAN/Error.pm} |  0
 perl/Git/I18N.pm |  2 +-
 perl/Makefile| 90 
 perl/Makefile.PL | 62 
 t/perf/aggregate.perl|  2 +-
 t/test-lib.sh|  2 +-
 wrap-for-bin.sh  |  2 +-
 14 files changed, 

Re: [PATCH 5/5] submodule: submodule_move_head omits old argument in forced case

2017-12-19 Thread Jonathan Nieder
Hi,

Stefan Beller wrote:
> On Tue, Dec 19, 2017 at 2:44 PM, Jonathan Nieder  wrote:

>>   checkout -f
>> I think I would expect this not to touch a submodule that
>> hasn't changed, since that would be consistent with its
>> behavior on files that haven't changed.
[...]
> I may have a different understanding of git commands than you do,
> but a plain "checkout -f" with no further arguments is the same as
> a hard reset IMHO, and could be used interchangeably?

Ah, good catch.  Filed https://crbug.com/git/5 to clarify this in
documentation.  Thanks for clarifying.

Jonathan



Re: [PATCH 5/5] submodule: submodule_move_head omits old argument in forced case

2017-12-19 Thread Stefan Beller
On Tue, Dec 19, 2017 at 2:44 PM, Jonathan Nieder  wrote:
> Hi,
>
> I had trouble understanding what this fixes, so I'll try nitpicking a
> bit as a sideways way to address that.
>
> Stefan Beller wrote:
>
>> With the previous patch applied (fix of the same() function),
>
> This tripped me up a bit.  Usually commits assume that all previous
> patches have already been applied, since that allows the maintainer to
> apply the early part of a series on one day and the later part another
> day without losing too much context.
>
> I think this intends to say something like
>
> Now that we allow recursing into an unchanged submodule (see
> "unpack-trees: Fix same() for submodules", 2017-12-19), it is
> possible for ...


yup

>
>>   the
>> function `submodule_move_head` may be invoked with the same argument
>> for the `old` and `new` state of a submodule, for example when you
>> run `reset --hard --recurse-submodules` in the superproject that has no
>> change in the gitlink entry, but only worktree related change in the
>> submodule. The read-tree call in the submodule is not amused about
>> the duplicate argument.
>
> What is the symptom of read-tree being unamused?  Is that a sign that
> these patches are out of order (i.e. that we should prepare to handle an
> unchanged submodule first, and then adjust the caller to pass in
> unchanged submodules)?
>
>> It turns out that we can omit the duplicate old argument in all forced
>> cases anyway, so let's do that.
>
> What is the end-user visible effect of such a change?  E.g. what
> becomes possible to a user that wasn't possible before?
>
> Among the commands you mentioned before:
>
>   checkout -f
> I think I would expect this not to touch a submodule that
> hasn't changed, since that would be consistent with its
> behavior on files that haven't changed.
>
>   reset --hard
> Nice!  Yes, recursing into unchanged submodules is a big part
> of the psychological comfort of being able to say "you can
> always use reset --hard  to get back to a known
> state".

I may have a different understanding of git commands than you do,
but a plain "checkout -f" with no further arguments is the same as
a hard reset IMHO, and could be used interchangeably?

Rehashing the "What is a submodule?" discussion, I would claim
that when its worktree is changed, we'd want checkout to restore
the submodule's worktree, so I disagree with your assessment
of checkout -f.

>   read-tree -u --reset
> This is essentially the plumbing equivalent of reset --hard,
> so it makes sense for them to be consistent.  Ok.
>
> Based on the checkout -f case, if I've understood correctly then patch
> 4/5 goes too far on it (but I could easily be convinced otherwise).

Without 4/5 we cannot implement hard reset recursing into submodules,
as it is functionally the same as a forced checkout unless we
differentiate them at a higher layer?

>> Signed-off-by: Stefan Beller 
>> ---
>>  submodule.c   | 4 +++-
>>  t/lib-submodule-update.sh | 2 +-
>>  2 files changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/submodule.c b/submodule.c
>> index fa25888783..db0f7ac51e 100644
>> --- a/submodule.c
>> +++ b/submodule.c
>> @@ -1653,7 +1653,9 @@ int submodule_move_head(const char *path,
>>   else
>>   argv_array_push(&cp.args, "-m");
>>
>> - argv_array_push(&cp.args, old ? old : EMPTY_TREE_SHA1_HEX);
>> + if (!(flags & SUBMODULE_MOVE_HEAD_FORCE))
>> + argv_array_push(&cp.args, old ? old : EMPTY_TREE_SHA1_HEX);
>
> Interesting.  What is the effect when old != new?

when the force flag is set we mostly pass in old="HEAD", which is
technically correct, but not a sha1.  (I assume you want to know what
happens when two unequal sha1s are given; for that it will perform a
two-way merge instead of a complete reset.)
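
To illustrate the contrast (a sketch; the tree arguments are placeholders,
and the exact flags come from the surrounding submodule_move_head code):

    # non-forced case: two trees, a two-way merge
    $ git read-tree -u -m $old $new
    # forced case after this patch: a single tree, a complete reset
    $ git read-tree -u --reset $new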

Thanks,
Stefan


Re: [PATCH 5/5] submodule: submodule_move_head omits old argument in forced case

2017-12-19 Thread Jonathan Nieder
Hi,

I had trouble understanding what this fixes, so I'll try nitpicking a
bit as a sideways way to address that.

Stefan Beller wrote:

> With the previous patch applied (fix of the same() function),

This tripped me up a bit.  Usually commits assume that all previous
patches have already been applied, since that allows the maintainer to
apply the early part of a series on one day and the later part another
day without losing too much context.

I think this intends to say something like

Now that we allow recursing into an unchanged submodule (see
"unpack-trees: Fix same() for submodules", 2017-12-19), it is
possible for ...

>   the
> function `submodule_move_head` may be invoked with the same argument
> for the `old` and `new` state of a submodule, for example when you
> run `reset --hard --recurse-submodules` in the superproject that has no
> change in the gitlink entry, but only worktree related change in the
> submodule. The read-tree call in the submodule is not amused about
> the duplicate argument.

What is the symptom of read-tree being unamused?  Is that a sign that
these patches are out of order (i.e. that we should prepare to handle an
unchanged submodule first, and then adjust the caller to pass in
unchanged submodules)?

> It turns out that we can omit the duplicate old argument in all forced
> cases anyway, so let's do that.

What is the end-user visible effect of such a change?  E.g. what
becomes possible to a user that wasn't possible before?

Among the commands you mentioned before:

  checkout -f
I think I would expect this not to touch a submodule that
hasn't changed, since that would be consistent with its
behavior on files that haven't changed.

  reset --hard
Nice!  Yes, recursing into unchanged submodules is a big part
of the psychological comfort of being able to say "you can
always use reset --hard  to get back to a known
state".

  read-tree -u --reset
This is essentially the plumbing equivalent of reset --hard,
so it makes sense for them to be consistent.  Ok.

Based on the checkout -f case, if I've understood correctly then patch
4/5 goes too far on it (but I could easily be convinced otherwise).

> Signed-off-by: Stefan Beller 
> ---
>  submodule.c   | 4 +++-
>  t/lib-submodule-update.sh | 2 +-
>  2 files changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/submodule.c b/submodule.c
> index fa25888783..db0f7ac51e 100644
> --- a/submodule.c
> +++ b/submodule.c
> @@ -1653,7 +1653,9 @@ int submodule_move_head(const char *path,
>   else
>   argv_array_push(&cp.args, "-m");
>  
> - argv_array_push(&cp.args, old ? old : EMPTY_TREE_SHA1_HEX);
> + if (!(flags & SUBMODULE_MOVE_HEAD_FORCE))
> + argv_array_push(&cp.args, old ? old : EMPTY_TREE_SHA1_HEX);

Interesting.  What is the effect when old != new?

Thanks,
Jonathan


Re: [PATCH 3/5] t/lib-submodule-update.sh: add new test for submodule internal change

2017-12-19 Thread Stefan Beller
On Tue, Dec 19, 2017 at 2:31 PM, Jonathan Nieder  wrote:
> Hi,
>
> Stefan Beller wrote:
>
>> The test is marked as a failure as the fix comes in a later patch.
>>
>> Signed-off-by: Stefan Beller 
>> ---
>>  t/lib-submodule-update.sh | 11 +++
>>  1 file changed, 11 insertions(+)
>
> I think I'd find this easier to understand if it were squashed with the
> patch that fixes it.

ok, let's squash this into the last patch then.

> This is part of test_submodule_forced_switch --- does that mean it
> affects both checkout -f and reset --hard, or only the latter?

All of
  checkout -f
  reset --hard (and not reset --keep ;)
  read-tree -u --reset

Thanks,
Stefan


Re: [PATCH] Re: Bug with "git submodule update" + subrepo with differing path/name?

2017-12-19 Thread Stefan Beller
On Tue, Dec 19, 2017 at 2:19 PM, Junio C Hamano  wrote:
> Stefan Beller  writes:
>
>> I tried reproducing the issue (based on the `next` branch, not 2.15,
>> but I do not recall any changes in the submodule area lately), and
>> could not come up with a reproduction recipe,...
>
> I do not offhand recall anything; the closest I can think of is the
> three-patch series <20171016135623.ga12...@book.hvoigt.net> that
> taught the on-demand recursive fetch to pay attention to the location
> in the superproject tree a submodule is bound to.

I tried the same test on 2.15 and cannot reproduce there either.

>
> 4b4acedd61 submodule: simplify decision tree whether to or not to fetch
> c68f837576 implement fetching of moved submodules
> 01ce12252c fetch: add test to make sure we stay backwards compatible
>
> But IIRC, "submodule update" uses a separate codepath?

Yes, any portion of git-submodule.sh that calls out to C is going
through the submodule--helper. I want to revive the port of that
shell script to C again.

The "submodule update" uses the submodule helper to obtain
the list of submodules and then does a "git -C $sub fetch" in there.
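
A simplified sketch of that flow for one submodule at path $sub
(illustrative only; the real code goes through submodule--helper and also
handles the merge/rebase update modes):

    $ sha1=$(git rev-parse ":$sub") &&     # gitlink recorded in the index
      git -C "$sub" fetch &&
      git -C "$sub" checkout "$sha1"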

Stefan


Re: [PATCH 3/5] t/lib-submodule-update.sh: add new test for submodule internal change

2017-12-19 Thread Jonathan Nieder
Hi,

Stefan Beller wrote:

> The test is marked as a failure as the fix comes in a later patch.
>
> Signed-off-by: Stefan Beller 
> ---
>  t/lib-submodule-update.sh | 11 +++
>  1 file changed, 11 insertions(+)

I think I'd find this easier to understand if it were squashed with the
patch that fixes it.

This is part of test_submodule_forced_switch --- does that mean it
affects both checkout -f and reset --hard, or only the latter?

Thanks,
Jonathan


[PATCH 5/5] submodule: submodule_move_head omits old argument in forced case

2017-12-19 Thread Stefan Beller
With the previous patch applied (fix of the same() function), the
function `submodule_move_head` may be invoked with the same argument
for the `old` and `new` state of a submodule, for example when you
run `reset --hard --recurse-submodules` in the superproject that has no
change in the gitlink entry, but only worktree related change in the
submodule. The read-tree call in the submodule is not amused about
the duplicate argument.

It turns out that we can omit the duplicate old argument in all forced
cases anyway, so let's do that.

Signed-off-by: Stefan Beller 
---
 submodule.c   | 4 +++-
 t/lib-submodule-update.sh | 2 +-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/submodule.c b/submodule.c
index fa25888783..db0f7ac51e 100644
--- a/submodule.c
+++ b/submodule.c
@@ -1653,7 +1653,9 @@ int submodule_move_head(const char *path,
else
 		argv_array_push(&cp.args, "-m");
 
-	argv_array_push(&cp.args, old ? old : EMPTY_TREE_SHA1_HEX);
+	if (!(flags & SUBMODULE_MOVE_HEAD_FORCE))
+		argv_array_push(&cp.args, old ? old : EMPTY_TREE_SHA1_HEX);
+
 	argv_array_push(&cp.args, new ? new : EMPTY_TREE_SHA1_HEX);
 
 	if (run_command(&cp)) {
diff --git a/t/lib-submodule-update.sh b/t/lib-submodule-update.sh
index 15cf3e0b8b..7b6661cc84 100755
--- a/t/lib-submodule-update.sh
+++ b/t/lib-submodule-update.sh
@@ -1016,7 +1016,7 @@ test_submodule_forced_switch_recursing_with_args () {
)
'
 
-   test_expect_failure "$command: changed submodule worktree is reset" '
+   test_expect_success "$command: changed submodule worktree is reset" '
prolog &&
reset_work_tree_to_interested add_sub1 &&
(
-- 
2.15.1.620.gb9897f4670-goog



[PATCH 2/5] t/lib-submodule-update.sh: Fix test ignoring ignored files in submodules

2017-12-19 Thread Stefan Beller
It turns out that this buggy test passed due to the buggy implementation,
which will soon be corrected.  Let's fix the test first.

Signed-off-by: Stefan Beller 
---
 t/lib-submodule-update.sh | 1 +
 1 file changed, 1 insertion(+)

diff --git a/t/lib-submodule-update.sh b/t/lib-submodule-update.sh
index d7699046f6..fb0173ea87 100755
--- a/t/lib-submodule-update.sh
+++ b/t/lib-submodule-update.sh
@@ -885,6 +885,7 @@ test_submodule_switch_recursing_with_args () {
(
cd submodule_update &&
git branch -t replace_sub1_with_file origin/replace_sub1_with_file &&
+   echo ignored >.git/modules/sub1/info/exclude &&
: >sub1/ignored &&
$command replace_sub1_with_file &&
test_superproject_content origin/replace_sub1_with_file &&
-- 
2.15.1.620.gb9897f4670-goog



[PATCH 4/5] unpack-trees: Fix same() for submodules

2017-12-19 Thread Stefan Beller
The function same(a, b) is used to check if two entries a and b are the
same.  As the index contains the staged files, the same() function can be
used to check if files between a given revision and the index are the same.

In the case of submodules, the gitlink entry shows up as not modified
despite potential changes inside the submodule.

Fix the function to examine submodules by looking inside the submodule.
This patch alone doesn't affect any correctness guarantees, but in
combination with the next patch it fixes the new test introduced
earlier in this series.

This may be a slight performance regression, as we now have to
inspect every submodule thoroughly.

Signed-off-by: Stefan Beller 
---
 unpack-trees.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/unpack-trees.c b/unpack-trees.c
index bf8b602901..4d839e8edb 100644
--- a/unpack-trees.c
+++ b/unpack-trees.c
@@ -1427,6 +1427,8 @@ static int same(const struct cache_entry *a, const struct cache_entry *b)
 		return 1;
 	if ((a->ce_flags | b->ce_flags) & CE_CONFLICTED)
 		return 0;
+	if (S_ISGITLINK(b->ce_mode) && should_update_submodules())
+		return !oidcmp(&a->oid, &b->oid) &&
+		       !is_submodule_modified(b->name, 0);
 	return a->ce_mode == b->ce_mode &&
 	       !oidcmp(&a->oid, &b->oid);
 }
-- 
2.15.1.620.gb9897f4670-goog



[PATCH 1/5] t/lib-submodule-update.sh: clarify test

2017-12-19 Thread Stefan Beller
Keep the local branch name as the upstream branch name to avoid confusion.

Signed-off-by: Stefan Beller 
---
 t/lib-submodule-update.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/t/lib-submodule-update.sh b/t/lib-submodule-update.sh
index 38dadd2c29..d7699046f6 100755
--- a/t/lib-submodule-update.sh
+++ b/t/lib-submodule-update.sh
@@ -664,8 +664,8 @@ test_submodule_recursing_with_args_common() {
cd submodule_update &&
git -C sub1 checkout -b keep_branch &&
git -C sub1 rev-parse HEAD >expect &&
-   git branch -t check-keep origin/modify_sub1 &&
-   $command check-keep &&
+   git branch -t modify_sub1 origin/modify_sub1 &&
+   $command modify_sub1 &&
test_superproject_content origin/modify_sub1 &&
test_submodule_content sub1 origin/modify_sub1 &&
git -C sub1 rev-parse keep_branch >actual &&
-- 
2.15.1.620.gb9897f4670-goog



[PATCH 0/5] Fix --recurse-submodules for submodule worktree changes

2017-12-19 Thread Stefan Beller
The fix is in the last patch; the first patches just massage the code
base to make the fix easy.

The second patch fixes a bug in the test, which was ineffective at testing.
The third patch shows the problem this series addresses.
The fourth patch is a little refactoring, which I want to keep separate
as I would expect it to be a performance regression[1].
The first patch is unrelated, but improves the readability of submodule test
cases, which we'd want to improve further.

Thanks,
Stefan

[1] The performance should improve once we don't use so many processes
any more, but that repository series is stalled for now.

Stefan Beller (5):
  t/lib-submodule-update.sh: clarify test
  t/lib-submodule-update.sh: Fix test ignoring ignored files in
submodules
  t/lib-submodule-update.sh: add new test for submodule internal change
  unpack-trees: Fix same() for submodules
  submodule: submodule_move_head omits old argument in forced case

 submodule.c   |  4 +++-
 t/lib-submodule-update.sh | 16 ++--
 unpack-trees.c|  2 ++
 3 files changed, 19 insertions(+), 3 deletions(-)

-- 
2.15.1.620.gb9897f4670-goog



[PATCH 3/5] t/lib-submodule-update.sh: add new test for submodule internal change

2017-12-19 Thread Stefan Beller
The test is marked as a failure as the fix comes in a later patch.

Signed-off-by: Stefan Beller 
---
 t/lib-submodule-update.sh | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/t/lib-submodule-update.sh b/t/lib-submodule-update.sh
index fb0173ea87..15cf3e0b8b 100755
--- a/t/lib-submodule-update.sh
+++ b/t/lib-submodule-update.sh
@@ -1015,4 +1015,15 @@ test_submodule_forced_switch_recursing_with_args () {
test_submodule_content sub1 origin/modify_sub1
)
'
+
+   test_expect_failure "$command: changed submodule worktree is reset" '
+   prolog &&
+   reset_work_tree_to_interested add_sub1 &&
+   (
+   cd submodule_update &&
+   rm sub1/file1 &&
+   $command HEAD &&
+   test_path_is_file sub1/file1
+   )
+   '
 }
-- 
2.15.1.620.gb9897f4670-goog



Re: [PATCH] Re: Bug with "git submodule update" + subrepo with differing path/name?

2017-12-19 Thread Junio C Hamano
Stefan Beller  writes:

> I tried reproducing the issue (based on the `next` branch, not 2.15,
> but I do not recall any changes in the submodule area lately), and
> could not come up with a reproduction recipe,...

I do not offhand recall anything; the closest I can think of is the
three-patch series <20171016135623.ga12...@book.hvoigt.net> that
taught the on-demand recursive fetch to pay attention to the location
in the superproject tree a submodule is bound to.

4b4acedd61 submodule: simplify decision tree whether to or not to fetch
c68f837576 implement fetching of moved submodules
01ce12252c fetch: add test to make sure we stay backwards compatible

But IIRC, "submodule update" uses a separate codepath?


Re: [PATCH v6 0/2] http-backend: respect CONTENT_LENGTH as specified by rfc3875

2017-12-19 Thread Junio C Hamano
Max Kirillov  writes:

> v6:
>
> Do not implement generic git_env_ssize_t(), instead export 
> git_parse_ssize_t() from config.c
> and hardcode the rest.
>
> Florian Manschwetus (1):
>   http-backend: respect CONTENT_LENGTH as specified by rfc3875
>
> Max Kirillov (1):
>   t5560-http-backend-noserver.sh: add CONTENT_LENGTH cases
>
>  Makefile |  1 +
>  config.c |  2 +-
>  config.h |  1 +
>  http-backend.c   | 50 
> +++-
>  t/helper/test-print-values.c | 10 
>  t/t5560-http-backend-noserver.sh | 30 
>  6 files changed, 92 insertions(+), 2 deletions(-)
>  create mode 100644 t/helper/test-print-values.c

So... is there going to be an update (or has there been one and I
missed it)?

Thanks.


What's cooking in git.git (Dec 2017, #04; Tue, 19)

2017-12-19 Thread Junio C Hamano
Here are the topics that have been cooking.  Commits prefixed with
'-' are only in 'pu' (proposed updates) while commits prefixed with
'+' are in 'next'.  The ones marked with '.' do not appear in any of
the integration branches, but I am still holding onto them.

You can find the changes described here in the integration branches
of the repositories listed at

http://git-blame.blogspot.com/p/git-public-repositories.html

--
[Graduated to "master"]

* ar/unconfuse-three-dots (2017-12-06) 8 commits
  (merged to 'next' on 2017-12-13 at 33bd0b67c0)
 + t2020: test variations that matter
 + t4013: test new output from diff --abbrev --raw
 + diff: diff_aligned_abbrev: remove ellipsis after abbreviated SHA-1 value
 + t4013: prepare for upcoming "diff --raw --abbrev" output format change
 + checkout: describe_detached_head: remove ellipsis after committish
 + print_sha1_ellipsis: introduce helper
 + Documentation: user-manual: limit usage of ellipsis
 + Documentation: revisions: fix typo: "three dot" ---> "three-dot" (in line 
with "two-dot").

 An ancient part of the codebase still shows dots after an abbreviated
 object name just to show that it is not a full object name, but
 these ellipses are confusing to people who newly discovered Git,
 who are used to seeing abbreviated object names and mistake the
 dots for the range syntax.


* bw/pathspec-match-submodule-boundary (2017-12-05) 1 commit
  (merged to 'next' on 2017-12-13 at e256d292a4)
 + pathspec: only match across submodule boundaries when requested

 A v2.12-era regression in pathspec match logic, which made it look
 into submodule tree even when it is not desired, has been fixed.


* bw/submodule-config-cleanup (2017-12-06) 1 commit
  (merged to 'next' on 2017-12-13 at c952bf1b84)
 + diff-tree: read the index so attribute checks work in bare repositories

 A recent update to the submodule configuration code broke "diff-tree"
 by accidentally making it stop reading from the index upfront.


* en/merge-recursive-icase-removal (2017-11-27) 1 commit
  (merged to 'next' on 2017-12-13 at 85c6538a2a)
 + merge-recursive: ignore_case shouldn't reject intentional removals

 The code internal to the recursive merge strategy was not fully
 prepared to see a path that is renamed to try overwriting another
 path that is only different in case on case insensitive systems.
 This does not matter in the current code, but will start to matter
 once the rename detection logic starts taking hints from nearby
 paths moving to some directory and moves a new path along with them.


* en/rename-progress (2017-12-02) 5 commits
  (merged to 'next' on 2017-12-04 at 49b39d2297)
 + diffcore-rename: make diff-tree -l0 mean -l
  (merged to 'next' on 2017-11-20 at 77a2e0ddd9)
 + sequencer: show rename progress during cherry picks
 + diff: remove silent clamp of renameLimit
 + progress: fix progress meters when dealing with lots of work
 + sequencer: warn when internal merge may be suboptimal due to renameLimit

 Historically, the diff machinery for rename detection had a
 hardcoded limit of 32k paths; this is being lifted to let users
 trade cycles for a (possibly) easier-to-read result.


* gk/tracing-optimization (2017-12-06) 2 commits
  (merged to 'next' on 2017-12-13 at d6bfac03ad)
 + trace: improve performance while category is disabled
 + trace: remove trace key normalization

 The tracing infrastructure has been optimized for cases where no
 tracing is requested.


* jt/diff-anchored-patience (2017-11-28) 1 commit
  (merged to 'next' on 2017-12-13 at 5f4843d7a0)
 + diff: support anchoring line(s)

 "git diff" learned a variant of the "--patience" algorithm, to
 which the user can specify which 'unique' line to be used as
 anchoring points.
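
 Usage is along these lines (a sketch; the anchor text is illustrative):

     $ git diff --anchored='static int same(' HEAD~1 HEAD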


* ls/editor-waiting-message (2017-12-07) 2 commits
  (merged to 'next' on 2017-12-13 at 494b5b41e3)
 + launch_editor(): indicate that Git waits for user input
 + refactor "dumb" terminal determination

 Git shows a message to tell the user that it is waiting for the
 user to finish editing when spawning an editor, in case the editor
 opens to a hidden window or somewhere obscure and the user gets
 lost.


* ls/git-gui-no-double-utf8-author-name (2017-12-05) 2 commits
  (merged to 'next' on 2017-12-13 at be577d6e1b)
 + Merge branch 'ls/no-double-utf8-author-name' of ../git-gui into 
ls/git-gui-no-double-utf8-author-name
 + git-gui: prevent double UTF-8 conversion

 Amending commits in git-gui broke non-ASCII author names due to
 incorrect encoding conversion.


* sb/clone-recursive-submodule-doc (2017-12-05) 1 commit
  (merged to 'next' on 2017-12-13 at abfed699db)
 + Documentation/git-clone: improve description for submodule recursing

 Doc update.


* sg/setup-doc-update (2017-12-07) 1 commit
  (merged to 'next' on 2017-12-13 at 4355c6e0ef)
 + setup.c: fix comment about order of .git directory discovery

 Comment update.


* tg/worktree-create-tracking (2017-12-06) 6 commits
  

Re: [PATCH] http: support CURLPROXY_HTTPS

2017-12-19 Thread Jonathan Nieder
Junio C Hamano wrote:
> Jonathan Nieder  writes:
>> Wei Shuyu wrote:

>>> diff --git a/http.c b/http.c
>>> index 215bebef1..32d33261c 100644
>>> --- a/http.c
>>> +++ b/http.c
>>> @@ -865,6 +865,11 @@ static CURL *get_curl_handle(void)
>>> else if (starts_with(curl_http_proxy, "socks"))
>>> curl_easy_setopt(result,
>>> CURLOPT_PROXYTYPE, CURLPROXY_SOCKS4);
>>> +#endif
>>> +#if LIBCURL_VERSION_NUM >= 0x073400
>>
>> Can this use #ifdef CURLPROXY_HTTPS instead?  That way, if someone's
>> copy of curl has backported support then they get the benefit of this
>> change as well.
>
> It sounds like a worthwhile thing to do (assuming that these are
> always implemented as preprocessor macros).

Oh, good point!  It's an enumerator, not a preprocessor macro.  But
there is a preprocessor macro CURL_VERSION_HTTPS_PROXY.

Anyway, using LIBCURL_VERSION_NUM is consistent with the surrounding
code.

Thanks,
Jonathan


Re: [PATCH] http: support CURLPROXY_HTTPS

2017-12-19 Thread Junio C Hamano
Jonathan Nieder  writes:

> Hi,
>
> Wei Shuyu wrote:
>
>> HTTP proxy over SSL is supported by curl since 7.52.0.
>> This is very useful for networks with protocol whitelist.
>>
>> Signed-off-by: Wei Shuyu 
>> ---
>>  http.c | 5 +
>>  1 file changed, 5 insertions(+)
>
> Thanks for writing this.  Can you give an example of how I'd use it
> (ideally in the form of a test in t/ so we avoid this functionality
> regressing, but if that's not straightforward then an example for the
> commit message is fine as well)?

Just FYI, here is an entry I added to the What's cooking report
(which will be used as the log message for a merge commit that pulls
this topic in, and will become an entry in the release notes if this
topic ever becomes a part of a release).

 Git has been taught to support an https:// URL used for http.proxy
 when using recent versions of libcurl.

There are multiple ways other than the http.proxy configuration variable
that a user can use to tell Git to use a proxy; I do not think the
log message of this change is the place to enumerate all of them, but
showing one of them to the readers would be good to remind them what
we are talking about, I would guess.
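
For example, one of those ways, exercising the new support (the proxy URL
is illustrative):

    $ git config --global http.proxy https://proxy.example.com:8443
    $ git ls-remote https://github.com/git/git   # now goes through the HTTPS proxy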

>> diff --git a/http.c b/http.c
>> index 215bebef1..32d33261c 100644
>> --- a/http.c
>> +++ b/http.c
>> @@ -865,6 +865,11 @@ static CURL *get_curl_handle(void)
>>  else if (starts_with(curl_http_proxy, "socks"))
>>  curl_easy_setopt(result,
>>  CURLOPT_PROXYTYPE, CURLPROXY_SOCKS4);
>> +#endif
>> +#if LIBCURL_VERSION_NUM >= 0x073400
>
> Can this use #ifdef CURLPROXY_HTTPS instead?  That way, if someone's
> copy of curl has backported support then they get the benefit of this
> change as well.

It sounds like a worthwhile thing to do (assuming that these are
always implemented as preprocessor macros).

>> +else if (starts_with(curl_http_proxy, "https"))
>> +curl_easy_setopt(result,
>> +CURLOPT_PROXYTYPE, CURLPROXY_HTTPS);
>>  #endif
>>  if (strstr(curl_http_proxy, "://"))
>>  credential_from_url(&proxy_auth, curl_http_proxy);
>
> Thanks and hope that helps,
> Jonathan


[PATCH v2 3/2] t5573, t7612: clean up after unexpected success of 'pull' and 'merge'

2017-12-19 Thread Junio C Hamano
The previous steps added test_when_finished to tests that run 'git
pull' or 'git merge' with expectation of success, so that the test
after them can start from a known state even when their 'git pull'
invocation unexpectedly fails.  However, tests that run 'git pull'
or 'git merge' expecting it not to succeed forgot to protect later
tests the same way---if they unexpectedly succeed, the test after
them would start from an unexpected state.

Reset and checkout the initial commit after all these tests, whether
they expect their invocations to succeed or fail.

Signed-off-by: Junio C Hamano 
---

 * Let's do this and move the whole thing forward.

 t/t5573-pull-verify-signatures.sh  |  9 ++---
 t/t7612-merge-verify-signatures.sh | 16 +++-
 2 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/t/t5573-pull-verify-signatures.sh 
b/t/t5573-pull-verify-signatures.sh
index 8ae331f40e..9594e891f4 100755
--- a/t/t5573-pull-verify-signatures.sh
+++ b/t/t5573-pull-verify-signatures.sh
@@ -43,33 +43,36 @@ test_expect_success GPG 'create repositories with signed 
commits' '
 '
 
 test_expect_success GPG 'pull unsigned commit with --verify-signatures' '
+   test_when_finished "git reset --hard && git checkout initial" &&
test_must_fail git pull --ff-only --verify-signatures unsigned 
2>pullerror &&
test_i18ngrep "does not have a GPG signature" pullerror
 '
 
 test_expect_success GPG 'pull commit with bad signature with 
--verify-signatures' '
+   test_when_finished "git reset --hard && git checkout initial" &&
test_must_fail git pull --ff-only --verify-signatures bad 2>pullerror &&
test_i18ngrep "has a bad GPG signature" pullerror
 '
 
 test_expect_success GPG 'pull commit with untrusted signature with 
--verify-signatures' '
+   test_when_finished "git reset --hard && git checkout initial" &&
test_must_fail git pull --ff-only --verify-signatures untrusted 
2>pullerror &&
test_i18ngrep "has an untrusted GPG signature" pullerror
 '
 
 test_expect_success GPG 'pull signed commit with --verify-signatures' '
-   test_when_finished "git checkout initial" &&
+   test_when_finished "git reset --hard && git checkout initial" &&
git pull --verify-signatures signed >pulloutput &&
test_i18ngrep "has a good GPG signature" pulloutput
 '
 
 test_expect_success GPG 'pull commit with bad signature without verification' '
-   test_when_finished "git checkout initial" &&
+   test_when_finished "git reset --hard && git checkout initial" &&
git pull --ff-only bad 2>pullerror
 '
 
 test_expect_success GPG 'pull commit with bad signature with 
--no-verify-signatures' '
-   test_when_finished "git checkout initial" &&
+   test_when_finished "git reset --hard && git checkout initial" &&
test_config merge.verifySignatures true &&
test_config pull.verifySignatures true &&
git pull --ff-only --no-verify-signatures bad 2>pullerror
diff --git a/t/t7612-merge-verify-signatures.sh 
b/t/t7612-merge-verify-signatures.sh
index 2344995a11..e797c74112 100755
--- a/t/t7612-merge-verify-signatures.sh
+++ b/t/t7612-merge-verify-signatures.sh
@@ -35,64 +35,70 @@ test_expect_success GPG 'create signed commits' '
 '
 
 test_expect_success GPG 'merge unsigned commit with verification' '
+   test_when_finished "git reset --hard && git checkout initial" &&
test_must_fail git merge --ff-only --verify-signatures side-unsigned 
2>mergeerror &&
test_i18ngrep "does not have a GPG signature" mergeerror
 '
 
 test_expect_success GPG 'merge unsigned commit with 
merge.verifySignatures=true' '
+   test_when_finished "git reset --hard && git checkout initial" &&
test_config merge.verifySignatures true &&
test_must_fail git merge --ff-only side-unsigned 2>mergeerror &&
test_i18ngrep "does not have a GPG signature" mergeerror
 '
 
 test_expect_success GPG 'merge commit with bad signature with verification' '
+   test_when_finished "git reset --hard && git checkout initial" &&
test_must_fail git merge --ff-only --verify-signatures $(cat 
forged.commit) 2>mergeerror &&
test_i18ngrep "has a bad GPG signature" mergeerror
 '
 
 test_expect_success GPG 'merge commit with bad signature with 
merge.verifySignatures=true' '
+   test_when_finished "git reset --hard && git checkout initial" &&
test_config merge.verifySignatures true &&
test_must_fail git merge --ff-only $(cat forged.commit) 2>mergeerror &&
test_i18ngrep "has a bad GPG signature" mergeerror
 '
 
 test_expect_success GPG 'merge commit with untrusted signature with 
verification' '
+   test_when_finished "git reset --hard && git checkout initial" &&
test_must_fail git merge --ff-only --verify-signatures side-untrusted 
2>mergeerror &&
test_i18ngrep "has an untrusted GPG signature" mergeerror
 '
 
 test_expect_success GPG 'merge commit 

Re: [PATCH] http: support CURLPROXY_HTTPS

2017-12-19 Thread Jonathan Nieder
Hi,

Wei Shuyu wrote:

> HTTP proxy over SSL is supported by curl since 7.52.0.
> This is very useful for networks with protocol whitelist.
>
> Signed-off-by: Wei Shuyu 
> ---
>  http.c | 5 +
>  1 file changed, 5 insertions(+)

Thanks for writing this.  Can you give an example of how I'd use it
(ideally in the form of a test in t/ so we avoid this functionality
regressing, but if that's not straightforward then an example for the
commit message is fine as well)?

> diff --git a/http.c b/http.c
> index 215bebef1..32d33261c 100644
> --- a/http.c
> +++ b/http.c
> @@ -865,6 +865,11 @@ static CURL *get_curl_handle(void)
>   else if (starts_with(curl_http_proxy, "socks"))
>   curl_easy_setopt(result,
>   CURLOPT_PROXYTYPE, CURLPROXY_SOCKS4);
> +#endif
> +#if LIBCURL_VERSION_NUM >= 0x073400

Can this use #ifdef CURLPROXY_HTTPS instead?  That way, if someone's
copy of curl has backported support then they get the benefit of this
change as well.

> + else if (starts_with(curl_http_proxy, "https"))
> + curl_easy_setopt(result,
> + CURLOPT_PROXYTYPE, CURLPROXY_HTTPS);
>  #endif
>   if (strstr(curl_http_proxy, "://"))
>   credential_from_url(&proxy_auth, curl_http_proxy);

Thanks and hope that helps,
Jonathan


Re: [PATCH 4/6] fsmonitor: Add a trailing newline to test-dump-fsmonitor

2017-12-19 Thread Junio C Hamano
Alex Vandiver  writes:

> Subject: Re: [PATCH 4/6] fsmonitor: Add a trailing newline to 
> test-dump-fsmonitor

"Subject: fsmonitor: complete the last line of test-dump-fsmonitor output"

perhaps.

> This makes it more readable when used for debugging from the
> commandline.
>
> Signed-off-by: Alex Vandiver 
> ---
>  t/helper/test-dump-fsmonitor.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/t/helper/test-dump-fsmonitor.c b/t/helper/test-dump-fsmonitor.c
> index 53b19b39b..4e56929f7 100644
> --- a/t/helper/test-dump-fsmonitor.c
> +++ b/t/helper/test-dump-fsmonitor.c
> @@ -19,5 +19,6 @@ int cmd_main(int ac, const char **av)
>   for (i = 0; i < istate->cache_nr; i++)
>   printf((istate->cache[i]->ce_flags & CE_FSMONITOR_VALID) ? "+" 
> : "-");
>  
> + printf("\n");

That (and existing) uses of printf() all feel a bit overkill ;-)
Perhaps putchar() would suffice.

I am not sure if the above wants to become something like

for (i = 0; i < istate->cache_nr; i++) {
	putchar(istate->cache[i]->ce_flags & CE_FSMONITOR_VALID ? '+' : '-');
quote_c_style(istate->cache[i]->name, NULL, stdout, 0);
putchar('\n');
}

instead of "a single long incomplete line" in the first place.  Your
"fix" merely turns it into "a single long complete line", which does
not quite feel big enough an improvement, at least to me.

>   return 0;
>  }


Re: [PATCH 5/6] fsmonitor: Remove debugging lines from t/t7519-status-fsmonitor.sh

2017-12-19 Thread Junio C Hamano
Alex Vandiver  writes:

> These were mistakenly left in when the test was introduced, in
> 1487372d3 ("fsmonitor: store fsmonitor bitmap before splitting index",
> 2017-11-09)
>
> Signed-off-by: Alex Vandiver 
> ---
>  t/t7519-status-fsmonitor.sh | 2 --
>  1 file changed, 2 deletions(-)
>
> diff --git a/t/t7519-status-fsmonitor.sh b/t/t7519-status-fsmonitor.sh
> index eb2d13bbc..19b2a0a0f 100755
> --- a/t/t7519-status-fsmonitor.sh
> +++ b/t/t7519-status-fsmonitor.sh
> @@ -307,9 +307,7 @@ test_expect_success 'splitting the index results in the 
> same state' '
>   dirty_repo &&
>   git update-index --fsmonitor  &&
>   git ls-files -f >expect &&
> - test-dump-fsmonitor >&2 && echo &&
>   git update-index --fsmonitor --split-index &&
> - test-dump-fsmonitor >&2 && echo &&
>   git ls-files -f >actual &&
>   test_cmp expect actual
>  '

Hmph, by default the standard output and standard error streams are
not shown in the test output, and keeping these lines would help while
debugging test failures, so I am not sure if this is a good change.  With
the previous step [4/6], we can lose the "echo", of course, and I do not
think we need the >&2 redirection there, either.


Re: [PATCH 2/6] fsmonitor: Add dir.h include, for untracked_cache_invalidate_path

2017-12-19 Thread Junio C Hamano
Alex Vandiver  writes:

> Subject: Re: [PATCH 2/6] fsmonitor: Add dir.h include, for 
> untracked_cache_invalidate_path

Perhaps

"Subject: fsmonitor.h: include dir.h"

But I am not sure if this is the right direction to go in.  If a *.c
user of fsmonitor.h needs (or does not need) things from dir.h, that
file can (or does not need to) include dir.h itself.

I think this header does excessive "static inline" as premature
optimization, so a better "fix" to your perceived problem may be to
make them not "static inline".

> This missing include is currently hidden by dint of the fact that
> dir.h is already included by all things that currently include
> fsmonitor.h
>
> Signed-off-by: Alex Vandiver 
> ---


>  fsmonitor.h | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/fsmonitor.h b/fsmonitor.h
> index cd3cc0ccf..5f68ca4d2 100644
> --- a/fsmonitor.h
> +++ b/fsmonitor.h
> @@ -1,5 +1,6 @@
>  #ifndef FSMONITOR_H
>  #define FSMONITOR_H
> +#include "dir.h"
>  
>  extern struct trace_key trace_fsmonitor;


Re: [PATCHv5 7/7] builtin/describe.c: describe a blob

2017-12-19 Thread Stefan Beller
On Tue, Dec 19, 2017 at 11:22 AM, Junio C Hamano  wrote:
> I had to squash in the following to make 'pu' pass under
> gettext-poison build.  Is this ready for 'next' otherwise?

I saw that in pu, thanks for squashing. I should have spoken
up earlier confirming it.

> With the "log --find-object" thing, it may be that this no longer is
> needed, but then again we haven't done anything with the other
> Jonathan's idea to unify the --find-object thing into the --pickaxe
> framework.

I'll look into that after I finish looking at a submodule related bug.

> It seems that we tend to open and then abandon new interests without
> seeing them through completion, leaving too many loose ends untied.
> This has to stop.

I'll try to find my balance again.

When working on too few topics at the same time, sometimes I get stalled
waiting for the mailing list (or internal reviewers) to respond, so I start many
topics, which then overwhelm me after a while once reviews catch up.

In this specific case it was not clear what the best way is, i.e. the
interest was rather broad: "Hey, I have this blob sha1, how do I get more
information?", which can be solved in many different ways, so I tried some
of them.  (Another alternative just now considered: I could have wrapped
that script from stackoverflow into a new command and called it a day,
though that has very poor benefits for the rest of the ecosystem; an
addition to an established, powerful command gives more benefits IMHO.)

Thanks,
Stefan


Re: [PATCHv5 7/7] builtin/describe.c: describe a blob

2017-12-19 Thread Junio C Hamano
I had to squash in the following to make 'pu' pass under
gettext-poison build.  Is this ready for 'next' otherwise?

With the "log --find-object" thing, it may be that this no longer is
needed, but then again we haven't done anything with the other
Jonathan's idea to unify the --find-object thing into the --pickaxe
framework.

It seems that we tend to open and then abandon new interests without
seeing them through completion, leaving too many loose ends untied.
This has to stop.

diff --git a/t/t6120-describe.sh b/t/t6120-describe.sh
index 4668f0058e..3e3fb462a0 100755
--- a/t/t6120-describe.sh
+++ b/t/t6120-describe.sh
@@ -341,7 +341,7 @@ test_expect_success 'describe directly tagged blob' '
 test_expect_success 'describe tag object' '
git tag test-blob-1 -a -m msg unique-file:file &&
test_must_fail git describe test-blob-1 2>actual &&
-   grep "fatal: test-blob-1 is neither a commit nor blob" actual
+   test_i18ngrep "fatal: test-blob-1 is neither a commit nor blob" actual
 '
 
 test_expect_failure ULIMIT_STACK_SIZE 'name-rev works in a deep repo' '


Re: [PATCH v2 2/5] Makefile: under "make dist", include the sha1collisiondetection submodule

2017-12-19 Thread Junio C Hamano
Ævar Arnfjörð Bjarmason  writes:

> I started by trying to come up with something generic which would handle
> future submodules, i.e.:
>
> git submodule foreach 'git ls-files'

I am not all that interested in that.  The hardcoded list I felt
disturbing was not "what are the submodules we want to include?",
but "what are the files from sha1dc submodule we use?".

IOW, I was more wondering about these selective copies:

...
+   @cp sha1collisiondetection/LICENSE.txt \
+   $(GIT_TARNAME)/sha1collisiondetection/
+   @cp sha1collisiondetection/lib/sha1.[ch] \
+   $(GIT_TARNAME)/sha1collisiondetection/lib/
+   @cp sha1collisiondetection/lib/ubc_check.[ch] \
+   $(GIT_TARNAME)/sha1collisiondetection/lib/
...

As we are assuming that DC_SHA1_SUBMODULE users are not doing the
make dist from a tarball extract, wouldn't something along the
lines of

git -C sha1collisiondetection archive HEAD | \
$(TAR) Cxf $(GIT_TARNAME)/sha1collisiondetection -

be a better way to go?


Re: [PATCH] revision: introduce prepare_revision_walk_extended()

2017-12-19 Thread Junio C Hamano
Jeff King  writes:

> On Mon, Dec 18, 2017 at 08:18:19PM +0100, René Scharfe wrote:
>
>> > The root of the matter is that the revision-walking code doesn't clean
>> > up after itself. In every case, the caller is just saving these to clean
>> > up commit marks, isn't it?
>> 
>> bundle also checks if the pending objects exists.
>
> Thanks, I missed that one. So just adding a feature to clean up commit
> marks wouldn't be sufficient to cover that case.

OK.

>> > That sidesteps all of the memory ownership issues by just creating a
>> > copy. That's less efficient, but I'd be surprised if it matters in
>> > practice (we tend to do one or two revisions per process, there don't
>> > tend to be a lot of pending tips, and we're really just talking about
>> > copying some pointers here).
>> [...]
>> I don't know if there can be real-world use cases with millions of
>> entries (when it would start to hurt).
>
> I've seen repos which have tens of thousands of tags. Something like
> "rev-list --all" would have tens of thousands of pending objects.
> I think in practice it's limited to the number of objects (though in
> practice more like the number of commits).
> ...

OK again; that is an argument against "let's copy the array".

>> Why does prepare_revision_walk() clear the list of pending objects at
>> all?  Assuming the list is append-only then perhaps remembering the
>> last handled index would suffice.

For that matter, why does it copy, instead of using revs->pending
directly?  I do not think I can answer this, as I think the design
decisions that led to this code predate me.

In any case, it seems that the discussion has veered into an
interesting tangent but at this point it seems to me that it is not
likely to produce an immediate improvement over the posted patch.

Should we take the patch posted as-is and move forward?


Re: [PATCH] fmt-merge-msg: avoid leaking strbuf in shortlog()

2017-12-19 Thread René Scharfe
Am 19.12.2017 um 12:38 schrieb Jeff King:
> On Mon, Dec 18, 2017 at 08:18:17PM +0100, René Scharfe wrote:
> 
>>> I'd actually argue the other way: the simplest interface is one where
>>> the string list owns all of its pointers. That keeps the
>>> ownership/lifetime issues clear, and it's one less step for the caller
>>> to have to remember to do at the end (they do have to clear() the list,
>>> but they must do that anyway to free the array of items).
>>>
>>> It does mean that some callers may have to remember to free a temporary
>>> buffer right after adding its contents to the list. But that's a lesser
>>> evil, I think, since the memory ownership issues are all clearly
>>> resolved at the time of add.
>>>
>>> The big cost is just extra copies/allocations.
>>
>> An interface requiring callers to allocate can be used to implement a
>> wrapper that does all allocations for them -- the other way around is
>> harder.  It can be used to avoid object duplication, but duplicates
>> functions.  No idea if that's worth it.
> 
> Sure, but would anybody actually want to _use_ the non-wrapped version?

Not sure, but cases that currently use STRING_LIST_INIT_NODUP probably
apply.  Apropos: apply.c::write_out_results() looks like it might, too.

Another question is how much it would cost to let them duplicate strings
as well in order to simplify the code.

> That's the same duality we have now with string_list.

Hmm, I thought we *were* discussing string_list?

>>> Having a "format into a string" wrapper doesn't cover _every_ string you
>>> might want to add to a list, but my experience with argv_array_pushf
>>> leads me to believe that it covers quite a lot of cases.
>>
>> It would fit in with the rest of the API -- we have string_list_append()
>> as a wrapper for string_list_append_nodup()+xstrdup() already.  We also
>> have similar functions for strbuf and argv_array.  I find it a bit sad
>> to reimplement xstrfmt() yet again instead of using it directly, though.
> 
> I dunno, I think it could provide some safety and some clarity. IOW, why
> _don't_ we like:
> 
>string_list_append_nodup(list, xstrfmt(fmt, ...));
> 
> ? I think because:
> 
>1. It's a bit long and ugly.
> 
>2. It requires a magic "nodup", because we're violating the memory
>   ownership boundary.
> 
>3. It doesn't provide any safety for the case where strdup_strings is
>   not set, making it easy to leak accidentally.

Right, and at least 2 and 3 would be solved by having distinct types for
the plain and the duplicating variants.  The plain one would always
"nodup" and would have no flags that need to be checked.

> Doing:
> 
>string_list_appendf(list, fmt, ...);
> 
> pushes the memory ownership semantics "under the hood" of the
> string_list API. And as opposed to being a simple wrapper, it could
> assert that strdup_strings is set (we already do some similar assertions
> in the split functions).

Yes, that check would guard against leaks.

Having few functions that can be combined looks like a cleaner interface
to me than having additional shortcuts for specific combinations -- less
duplication, smaller surface.

That said I'm not against adding string_list_appendf(); we already have
similar functions for other types.

René


[PATCH] Re: Bug with "git submodule update" + subrepo with differing path/name?

2017-12-19 Thread Stefan Beller
I tried reproducing the issue (based on the `next` branch, not 2.15,
but I do not recall any recent changes in the submodule area), and
could not come up with a reproduction recipe. Here is what I got so
far; maybe you can take it from here (i.e. either make the test case
more like your environment so that it fails, or bisect git to tell us
the offending commit)?

Signed-off-by: Stefan Beller 
---
 t/t7406-submodule-update.sh | 16 
 1 file changed, 16 insertions(+)

diff --git a/t/t7406-submodule-update.sh b/t/t7406-submodule-update.sh
index 6f083c4d68..d957305c38 100755
--- a/t/t7406-submodule-update.sh
+++ b/t/t7406-submodule-update.sh
@@ -978,4 +978,20 @@ test_expect_success 'git clone passes the parallel jobs config on to submodules'
rm -rf super4
 '
 
+test_expect_success 'git submodule update works with submodules with name/path difference' '
+   test_create_repo super6 &&
+   (
+   cd super6 &&
+   git submodule add ../submodule sub1 &&
+   git submodule add --name testingname ../submodule sub2 &&
+   git commit -m initial &&
+   git -C sub1 checkout HEAD^ &&
+   git -C sub2 checkout HEAD^ &&
+
+   git submodule update >actual &&
+   grep sub1 actual &&
+   grep sub2 actual
+   )
+'
+
 test_done
-- 
2.15.1.620.gb9897f4670-goog
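
For illustration only (not part of the patch): one way to drive a
bisection with this test, assuming it has been adjusted so that it
actually fails on the broken version, and with v2.14.0 standing in
for any known-good version:

   git bisect start HEAD v2.14.0
   git bisect run sh -c 'make && cd t && ./t7406-submodule-update.sh'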





Re: difftool uses hardcoded perl shebang

2017-12-19 Thread Junio C Hamano
Jeff King  writes:

> On Tue, Dec 19, 2017 at 09:08:44AM -0800, Junio C Hamano wrote:
>
>> Jeff King  writes:
>> 
>> > In the meantime, pointing to the actual build-time perl is a workaround
>> > (but obviously not if it's being done by a third-party packager who has
>> > no idea where your perl is).
>> 
>> Is such a binary packaging scheme actually in use that deliberately
>> leaves it up to the end user where/if a perl is installed and if it
>> is an appropriately recent version?  It sounds rather irresponsible
>> to me.
>
> No, I mean that the user can do:
>
>   make PERL_PATH=/path/to/perl/in/my/PATH
>
> but if they are not building Git themselves, that is not an option for
> them. And a binary packager cannot help them there, because they do not
> know that path.

I think we are saying the same thing.  A third-party binary packager
cannot guess where your custom Perl is nor if it is recent enough.
I was just wondering if such an irresponsible packaging scheme is in
use that lets you install Git without somehow making sure that the
box also has a version of Perl that can be used with that version of
Git.  Then the presence of /path/to/perl/in/my/PATH does not matter,
as it does not have to be used with Git.




[PATCH] http: support CURLPROXY_HTTPS

2017-12-19 Thread Wei Shuyu
HTTP proxy over SSL has been supported by curl since 7.52.0.
This is very useful for networks with a protocol whitelist.

Signed-off-by: Wei Shuyu 
---
 http.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/http.c b/http.c
index 215bebef1..32d33261c 100644
--- a/http.c
+++ b/http.c
@@ -865,6 +865,11 @@ static CURL *get_curl_handle(void)
else if (starts_with(curl_http_proxy, "socks"))
curl_easy_setopt(result,
CURLOPT_PROXYTYPE, CURLPROXY_SOCKS4);
+#endif
+#if LIBCURL_VERSION_NUM >= 0x073400
+   else if (starts_with(curl_http_proxy, "https"))
+   curl_easy_setopt(result,
+   CURLOPT_PROXYTYPE, CURLPROXY_HTTPS);
 #endif
if (strstr(curl_http_proxy, "://"))
credential_from_url(&proxy_auth, curl_http_proxy);
-- 
2.15.1
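
For illustration only (not part of the patch): with this change and a
curl >= 7.52.0, Git can be pointed at an HTTPS proxy like so, where
proxy.example.com:3128 is a placeholder for the real proxy:

   git config --global http.proxy https://proxy.example.com:3128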



Re: difftool uses hardcoded perl shebang

2017-12-19 Thread Jeff King
On Tue, Dec 19, 2017 at 09:08:44AM -0800, Junio C Hamano wrote:

> Jeff King  writes:
> 
> > In the meantime, pointing to the actual build-time perl is a workaround
> > (but obviously not if it's being done by a third-party packager who has
> > no idea where your perl is).
> 
> Is such a binary packaging scheme actually in use that deliberately
> leaves it up to the end user where/if a perl is installed and if it
> is an appropriately recent version?  It sounds rather irresponsible
> to me.

No, I mean that the user can do:

  make PERL_PATH=/path/to/perl/in/my/PATH

but if they are not building Git themselves, that is not an option for
them. And a binary packager cannot help them there, because they do not
know that path.

-Peff


Re: difftool uses hardcoded perl shebang

2017-12-19 Thread Junio C Hamano
Jeff King  writes:

> In the meantime, pointing to the actual build-time perl is a workaround
> (but obviously not if it's being done by a third-party packager who has
> no idea where your perl is).

Is such a binary packaging scheme actually in use that deliberately
leaves it up to the end user where/if a perl is installed and if it
is an appropriately recent version?  It sounds rather irresponsible
to me.


Re: difftool uses hardcoded perl shebang

2017-12-19 Thread Jeff King
On Tue, Dec 19, 2017 at 08:33:22AM -0800, Junio C Hamano wrote:

> Jeff King  writes:
> 
> > This is a build-time knob. When you build git, try:
> >
> >   make PERL_PATH='/usr/bin/env perl'
> >
> > (If you don't build your own git, then you might raise the issue with
> > whomever packages your binary).
> 
> I somehow thought ANYTHING_PATH was meant to point at the exact path
> (as opposed to ANYTHING_COMMAND which is a command line), so it is
> within our rights to do
> 
> test -x "$GIT_EXEC_PATH" || die "Your Git installation is broken"
> 
> but your suggestion above makes such a sanity check impossible.

Hmm, you're right. Though it looks like it is only the test scripts
which actually use this feature. It would be nice if we supported this.
Unfortunately it's hard to make both of these work:

  make PERL_PATH='/usr/bin/env perl'

  make PERL_PATH='/path with spaces/perl'

We must protect the spaces in the latter case and treat it as a single
unit, but would not want to in the former.

In the meantime, pointing to the actual build-time perl is a workaround
(but obviously not if it's being done by a third-party packager who has
no idea where your perl is).

-Peff


Re: difftool uses hardcoded perl shebang

2017-12-19 Thread Junio C Hamano
Jeff King  writes:

> This is a build-time knob. When you build git, try:
>
>   make PERL_PATH='/usr/bin/env perl'
>
> (If you don't build your own git, then you might raise the issue with
> whomever packages your binary).

I somehow thought ANYTHING_PATH was meant to point at the exact path
(as opposed to ANYTHING_COMMAND which is a command line), so it is
within our rights to do

test -x "$GIT_EXEC_PATH" || die "Your Git installation is broken"

but your suggestion above makes such a sanity check impossible.

I'd understand if it were

make PERL_PATH=$(type --path perl)

of course, though.
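
For illustration only, a portable way to spell that idea in a POSIX
shell, resolving the perl found in PATH once at build time:

   make PERL_PATH="$(command -v perl)"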

> As an aside, git-difftool is now a C builtin these days, so the problem
> might also go away if you upgrade. ;)

Yup, true, true.


Re: difftool uses hardcoded perl shebang

2017-12-19 Thread Jeff King
On Tue, Dec 19, 2017 at 01:28:09PM +, Jakub Zaverka wrote:

> When running git difftool:
> 
> >git difftool
> Perl lib version (5.10.0) doesn't match executable version (v5.16.3)
> Compilation failed in require at /git-difftool line 2.
> 
> First line in my git-difftool is:
> #!/usr/bin/perl
> 
> I am using a specific perl that I cannot change. Similarly, I cannot change 
> the git-difftool file. I would like the difftool to use the perl from my 
> PATH. 
> 
> Maybe it would be better to use #!/usr/bin/env perl?

This is a build-time knob. When you build git, try:

  make PERL_PATH='/usr/bin/env perl'

(If you don't build your own git, then you might raise the issue with
whomever packages your binary).

As an aside, git-difftool is now a C builtin these days, so the problem
might also go away if you upgrade. ;)

-Peff


RE: difftool uses hardcoded perl shebang

2017-12-19 Thread Jakub Zaverka
Please disregard that line, it is just a mailing client attachment. It is 
completely irrelevant to the matter. 

-Original Message-
From: Junio C Hamano [mailto:gits...@pobox.com] 
Sent: 19 December 2017 17:13
To: Jakub Zaverka 
Cc: git@vger.kernel.org
Subject: Re: difftool uses hardcoded perl shebang

Jakub Zaverka  writes:

> The below email is classified: Internal

Internal to what?



Re: difftool uses hardcoded perl shebang

2017-12-19 Thread Junio C Hamano
Jakub Zaverka  writes:

> The below email is classified: Internal

Internal to what?


Re: Fetching commit instead of ref

2017-12-19 Thread Junio C Hamano
"Carlsson, Magnus"  writes:

> I understand that you don't want to allow people fetching single
> commits all the time, but is there a reason that you don't allow
> SHA instead of references when you fetch an entire tree?

I do not think we don't want to allow "single commit" fetch.  We do
not allow fetching histories from an arbitrary point in the graph,
unless we can prove that what gets fetched is what the serving end
intended to expose to the fetcher---you should view it as a security
issue.

The default way to determine that a fetch request is getting only
the things the serving end wanted to publish is to see the requested
tips of the histories to be fetched are all pointed by refs.  Which
means that a client of a hosting site can rewind and repoint a ref
after pushing a wrong branch that contained undesirable matter by
accident.  Moving the ref to make the undesirable object unreachable
is all that is needed to "hide" it from the public view, even when
the client does not have a way to trigger "gc" immediately on the
hosting site.

There are variants of this security measure.  If the hosting site is
willing to spend extra cycles, we could loosen the "is the request
fetching only what is published?" check to see if the requested tips
are reachable from the tips of refs, instead of exactly at refs.  It
preserves "a mistake can be hidden with a forced push" property the
same way as above, but it is costly and is prone to abuse.

If you are confident about your pushes all the time, you can take
a stance "anything in the repository is meant to be read".  That is
what the "allowAnySHA1InWant" does.  Not everybody however wants to
do this for obvious reasons, so it is not a default.




Re: [PATCH v3] Makefile: replace perl/Makefile.PL with simple make rules

2017-12-19 Thread Junio C Hamano
Ævar Arnfjörð Bjarmason  writes:

>> Is there anything I'm supposed to do differently now to make our test
>> suite run? If yes then a clear and short hint in the patch description
>> would be more than appropriate.
>
> This is a bug in my patch, I can reproduce it on CO7. Will figure out
> what's going on there...

Thanks.




Re: [FYI PATCH] t/helper/test-lazy-name-hash: fix compilation

2017-12-19 Thread Jeff Hostetler



On 12/18/2017 4:49 PM, Stefan Beller wrote:

I was compiling origin/master today with stricter compiler flags
and was greeted by

t/helper/test-lazy-init-name-hash.c: In function ‘cmd_main’:
t/helper/test-lazy-init-name-hash.c:172:5: error: ‘nr_threads_used’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
  printf("avg [size %8d] [single %f] %c [multi %f %d]\n",
  ^~~
  nr,
  ~~~
  (double)avg_single/10,
  ~~
  (avg_single < avg_multi ? '<' : '>'),
  ~
  (double)avg_multi/10,
  ~
  nr_threads_used);
  
t/helper/test-lazy-init-name-hash.c:115:6: note: ‘nr_threads_used’ was declared here
   int nr_threads_used;
   ^~~

I do not see how we can arrive at that line without having `nr_threads_used`
initialized, as we'd have `count > 1`  (which asserts that we ran the
loop above at least once, such that it *should* be initialized).

I do not have time to dive into further analysis.

Signed-off-by: Stefan Beller 
---
  t/helper/test-lazy-init-name-hash.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/t/helper/test-lazy-init-name-hash.c b/t/helper/test-lazy-init-name-hash.c
index 6368a89345..297fb01d61 100644
--- a/t/helper/test-lazy-init-name-hash.c
+++ b/t/helper/test-lazy-init-name-hash.c
@@ -112,7 +112,7 @@ static void analyze_run(void)
  {
uint64_t t1s, t1m, t2s, t2m;
int cache_nr_limit;
-   int nr_threads_used;
+   int nr_threads_used = 0;
int i;
int nr;


Agreed. It should not be a problem.  Explicitly initializing it is fine.


Signed-off-by: Jeff Hostetler 




Allowing remote git repos to set config & hook configuration

2017-12-19 Thread Ævar Arnfjörð Bjarmason

On Sat, Dec 16 2017, Jeff King jotted:

> On Fri, Dec 15, 2017 at 12:48:07PM -0800, Satyakiran Duggina wrote:
>
>> To give the code pullers a chance to review, can we not have a
>> `trusted-hooks: default` and `trusted-SHA: ` field in .git/.
>> I'm assuming githooks/ are source tracked here.
>>
>> When the developer tries to execute `git commit`, git can ask the developer
>> to change the `trusted-hooks` field to true or false. Let's say the developer
>> sets it to true; git can record the SHA. If a later pull has changed the
>> hooks, git can revert `trusted-hooks` to default.
>>
>> This way there is not much hassle for developers to manually copy
>> hooks all the time. And at the same time, they are not running scripts
>> that they haven't reviewed.
>
> We've talked about doing something like this (though more for config
> than for hooks). But what the discussion always comes down to is that
> carrying a script like "import-hooks.sh" in the repository ends up being
> the exact same amount of work for the developer as any git-blessed "OK,
> trust these hooks" command.
>
> And it's a lot more flexible. The writer of that script can touch hooks,
> config, etc. They can base decisions about what values to use based on
> data that Git otherwise wouldn't care about (e.g., uname). And they only
> have to handle cases that the project cares about, whereas anything Git
> does has to work everywhere.

I brought this up at the dev meeting we had in NYC (and you can see how
much I care since it's not implemented): It would be really neat to have
a system supported in git which would work the same way Emacs treats
safe file variables:
https://www.gnu.org/software/emacs/manual/html_node/emacs/Safe-File-Variables.html#Safe-File-Variables

I.e. git would recognize a .gitconfig and .githooks/* shipped in any
repo but completely ignore those by default; you could then set
config (in .git/config or ~/.gitconfig etc.) which would whitelist
certain types of config brought in from the repo.

This is how Emacs does it and it strikes a really good balance between
security and convenience, e.g. I can say a file is allowed to tweak my
tab width settings, but can't alter my load path, or is allowed to
specify a syntax highlighting mode etc.

So you could for example say that any repo you clone is allowed to alter
the value of sendemail.to, or the value of tag.sort. It would work
really well with includeIf, e.g. I could clone all my work repos to a
"safe" area in ~/work which is allowed to set more options,
e.g. aliases.

Other values would either be ignored, or you could set them to "ask"; so,
for example, when we clone, a bunch of config would be accepted because
you'd said it was OK for any repo, and then the client would ask you if you
wanted to set up a hook the repo is suggesting.

This would be a much better user experience than every repo that needs
this suggesting in a README that you should run some local shellscript,
and would strike a good balance between security and convenience since
we'd ignore all of this by default, with the user needing to granularly
opt-in.
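
A purely hypothetical sketch of how such an opt-in could look; none of
these configuration keys exist in git today, and the names are made up
only to illustrate the idea:

   # hypothetical keys, illustration only
   git config --global safeRepoConfig.allow sendemail.to
   git config --global --add safeRepoConfig.allow tag.sort
   git config --global safeRepoConfig.ask hooks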


difftool uses hardcoded perl shebang

2017-12-19 Thread Jakub Zaverka
The below email is classified: Internal

When running git difftool:

>git difftool
Perl lib version (5.10.0) doesn't match executable version (v5.16.3)
Compilation failed in require at /git-difftool line 2.

First line in my git-difftool is:
#!/usr/bin/perl

I am using a specific perl that I cannot change. Similarly, I cannot change the 
git-difftool file. I would like the difftool to use the perl from my PATH. 

Maybe it would be better to use #!/usr/bin/env perl?



Re: [PATCH v2 6/8] travis-ci: don't install 'language-pack-is' package

2017-12-19 Thread SZEDER Gábor
On Mon, Dec 18, 2017 at 11:04 PM, SZEDER Gábor  wrote:

>   $ sudo apt-get install language-pack-is
>   [...]
>   $ ./t0204-gettext-reencode-sanity.sh
>   # lib-gettext: Found 'is_IS.utf8' as an is_IS UTF-8 locale
>   # lib-gettext: No is_IS ISO-8859-1 locale available
>   ok 1 - gettext: Emitting UTF-8 from our UTF-8 *.mo files / Icelandic
>   ok 2 - gettext: Emitting UTF-8 from our UTF-8 *.mo files / Runes
>   ok 3 # skip gettext: Emitting ISO-8859-1 from our UTF-8 *.mo files / Icelandic (missing GETTEXT_ISO_LOCALE)
>   ok 4 # skip gettext: impossible ISO-8859-1 output (missing GETTEXT_ISO_LOCALE)
>   ok 5 - gettext: Fetching a UTF-8 msgid -> UTF-8
>   ok 6 # skip gettext: Fetching a UTF-8 msgid -> ISO-8859-1 (missing GETTEXT_ISO_LOCALE)
>   ok 7 - gettext.c: git init UTF-8 -> UTF-8
>   ok 8 # skip gettext.c: git init UTF-8 -> ISO-8859-1 (missing GETTEXT_ISO_LOCALE)
>   # passed all 8 test(s)
>   1..8
>
> I'd expect something like this in the Travis CI build jobs as well, but
> prove hides the detailed output.
>
> It seems we would lose coverage with this patch, so it should be
> dropped.

Not so fast!

Notice how there are still a few tests skipped above, because of missing
GETTEXT_ISO_LOCALE.

  # grep is_IS /etc/locale.gen
  # is_IS ISO-8859-1
  # is_IS.UTF-8 UTF-8
  # sed -i -e 's/^# is_IS/is_IS/' /etc/locale.gen
  # locale-gen
  Generating locales (this might take a while)...
  [...]
  is_IS.ISO-8859-1... done
  is_IS.UTF-8... done
  Generation complete.

Both UTF-8 and ISO Icelandic locales are generated, good.

  $ ./t0204-gettext-reencode-sanity.sh
  # lib-gettext: Found 'is_IS.utf8' as an is_IS UTF-8 locale
  # lib-gettext: Found 'is_IS.iso88591' as an is_IS ISO-8859-1 locale
  ok 1 - gettext: Emitting UTF-8 from our UTF-8 *.mo files / Icelandic
  ok 2 - gettext: Emitting UTF-8 from our UTF-8 *.mo files / Runes
  ok 3 - gettext: Emitting ISO-8859-1 from our UTF-8 *.mo files / Icelandic
  ok 4 - gettext: impossible ISO-8859-1 output
  ok 5 - gettext: Fetching a UTF-8 msgid -> UTF-8
  ok 6 - gettext: Fetching a UTF-8 msgid -> ISO-8859-1
  ok 7 - gettext.c: git init UTF-8 -> UTF-8
  ok 8 - gettext.c: git init UTF-8 -> ISO-8859-1
  # passed all 8 test(s)

And now all those tests are run, great!
But look what happens without the language pack:

  # dpkg -P language-pack-is{,-base}
  [...]
  # grep is_IS /etc/locale.gen
  is_IS ISO-8859-1
  is_IS.UTF-8 UTF-8
  # locale-gen
  Generating locales (this might take a while)...
  [...]
  is_IS.ISO-8859-1... done
  is_IS.UTF-8... done
  Generation complete.

Still both Icelandic locales are generated.

  $ ./t0204-gettext-reencode-sanity.sh
  # lib-gettext: Found 'is_IS.utf8' as an is_IS UTF-8 locale
  # lib-gettext: Found 'is_IS.iso88591' as an is_IS ISO-8859-1 locale
  ok 1 - gettext: Emitting UTF-8 from our UTF-8 *.mo files / Icelandic
  ok 2 - gettext: Emitting UTF-8 from our UTF-8 *.mo files / Runes
  ok 3 - gettext: Emitting ISO-8859-1 from our UTF-8 *.mo files / Icelandic
  ok 4 - gettext: impossible ISO-8859-1 output
  ok 5 - gettext: Fetching a UTF-8 msgid -> UTF-8
  ok 6 - gettext: Fetching a UTF-8 msgid -> ISO-8859-1
  ok 7 - gettext.c: git init UTF-8 -> UTF-8
  ok 8 - gettext.c: git init UTF-8 -> ISO-8859-1
  # passed all 8 test(s)

And still all those tests are run!

I did run all test scripts sourcing 'lib-gettext.sh' with both locales
generated and didn't see any errors, regardless of whether the
language pack was installed or not.  I still have a few skipped tests
(because of no GETTEXT_POISON and something PCRE-related), but those,
too, are independent of the language pack.

So it seems that we don't need 'language-pack-is' after all; what we do
need are the ISO and UTF-8 Icelandic locales.  Not sure we can modify
/etc/locale.gen without sudo in the Travis CI build job, though, and
I've run out of time to try.
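
For illustration, a sketch of what the build job could run instead of
installing the language pack, assuming sudo is in fact available there;
it simply mirrors the manual steps shown above:

   sudo sed -i -e 's/^# is_IS/is_IS/' /etc/locale.gen
   sudo locale-gen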


Gábor


Re: [PATCH] revision: introduce prepare_revision_walk_extended()

2017-12-19 Thread Jeff King
On Mon, Dec 18, 2017 at 08:18:19PM +0100, René Scharfe wrote:

> > The root of the matter is that the revision-walking code doesn't clean
> > up after itself. In every case, the caller is just saving these to clean
> > up commit marks, isn't it?
> 
> bundle also checks if the pending objects exist.

Thanks, I missed that one. So just adding a feature to clean up commit
marks wouldn't be sufficient to cover that case.

> > That sidesteps all of the memory ownership issues by just creating a
> > copy. That's less efficient, but I'd be surprised if it matters in
> > practice (we tend to do one or two revisions per process, there don't
> > tend to be a lot of pending tips, and we're really just talking about
> > copying some pointers here).
> [...]
> I don't know if there can be real-world use cases with millions of
> entries (when it would start to hurt).

I've seen repos which have tens of thousands of tags. Something like
"rev-list --all" would have tens of thousands of pending objects.
I think in practice it's limited to the number of objects (though in
practice more like the number of commits).

I'd note also that for most uses we don't need a full object_array. You
really just need a pointer to the "struct object" to wipe its flags.

So there we might waste 8 bytes per object in the worst case. But bear
in mind that the process is wasting a lot more than that per "struct
commit" that we're holding. And versus the existing scheme, it's only
for the moment until prepare_revision_walk() frees the old pending list.

> Why does prepare_revision_walk() clear the list of pending objects at
> all?  Assuming the list is append-only then perhaps remembering the
> last handled index would suffice.

I assume it was mostly to clean up after itself, since there's no
explicit "I'm done with the traversal" function. But as I said earlier,
I'd be surprised if a revision walk doesn't leave some allocated cruft
in rev_info these days (e.g., pathspec cruft). In practice it doesn't
matter much because we don't do arbitrary numbers of traversals in a
single process.

-Peff


Re: [PATCH] fmt-merge-msg: avoid leaking strbuf in shortlog()

2017-12-19 Thread Jeff King
On Mon, Dec 18, 2017 at 08:18:17PM +0100, René Scharfe wrote:

> > I'd actually argue the other way: the simplest interface is one where
> > the string list owns all of its pointers. That keeps the
> > ownership/lifetime issues clear, and it's one less step for the caller
> > to have to remember to do at the end (they do have to clear() the list,
> > but they must do that anyway to free the array of items).
> > 
> > It does mean that some callers may have to remember to free a temporary
> > buffer right after adding its contents to the list. But that's a lesser
> > evil, I think, since the memory ownership issues are all clearly
> > resolved at the time of add.
> > 
> > The big cost is just extra copies/allocations.
> 
> An interface requiring callers to allocate can be used to implement a
> wrapper that does all allocations for them -- the other way around is
> harder.  It can be used to avoid object duplication, but duplicates
> functions.  No idea if that's worth it.

Sure, but would anybody actually want to _use_ the non-wrapped version?
That's the same duality we have now with string_list.

> > Having a "format into a string" wrapper doesn't cover _every_ string you
> > might want to add to a list, but my experience with argv_array_pushf
> > leads me to believe that it covers quite a lot of cases.
> 
> It would fit in with the rest of the API -- we have string_list_append()
> as a wrapper for string_list_append_nodup()+xstrdup() already.  We also
> have similar functions for strbuf and argv_array.  I find it a bit sad
> to reimplement xstrfmt() yet again instead of using it directly, though.

I dunno, I think it could provide some safety and some clarity. IOW, why
_don't_ we like:

  string_list_append_nodup(list, xstrfmt(fmt, ...));

? I think because:

  1. It's a bit long and ugly.

  2. It requires a magic "nodup", because we're violating the memory
 ownership boundary.

  3. It doesn't provide any safety for the case where strdup_strings is
 not set, making it easy to leak accidentally.

Doing:

  string_list_appendf(list, fmt, ...);

pushes the memory ownership semantics "under the hood" of the
string_list API. And as opposed to being a simple wrapper, it could
assert that strdup_strings is set (we already do some similar assertions
in the split functions).

-Peff



Re: Fetching commit instead of ref

2017-12-19 Thread Carlsson, Magnus
First: Thanks everyone for your answers.

I understand that there is a fetch pack setting, but the problem is that I 
can't force every git server provider to turn it on... Tested with GitHub and 
they don't seem to have it on by default.

I understand that you don't want to allow people fetching single commits all 
the time, but is there a reason that you don't allow SHA instead of references 
when you fetch an entire tree?

Is there a workaround? Can I somehow ask a remote for a valid reference that 
includes a SHA, so that I can later fetch that reference? In my case I would 
like to avoid fetching more than necessary, as it pollutes the main repository.

-- Magnus



From: Junio C Hamano 
Sent: Monday, December 18, 2017 19:44
To: Carlsson, Magnus
Cc: git@vger.kernel.org
Subject: Re: Fetching commit instead of ref

"Carlsson, Magnus"  writes:

> So far so good, but then an error message appears:
> error: Server does not allow request for unadvertised object 
> 50f730db793e0733b159326c5a3e78fd48cedfec
> And nothing seems to be fetched.

Yes, that is what the error message is telling you.

You'd need to coordinate with the server operator so that the server
allows such a request; uploadpack.allowAnySHA1InWant may help.
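
For illustration only (not from the original message), the server
operator would enable that in the served repository with:

   git config uploadpack.allowAnySHA1InWant true

after which a fetch by raw SHA-1 should be accepted.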