http.c (curl_easy_setopt and CURLAUTH_ANY)

2015-08-28 Thread Stephen Kazakoff
Hi,

When I'm behind a proxy (with BASIC authentication), I'm unable to
perform a git clone.

I managed to fix this by editing http.c and recompiling. The change
I'd like to propose is to line 452.


From:

curl_easy_setopt(result, CURLOPT_PROXYAUTH, CURLAUTH_ANY);

To:

curl_easy_setopt(result, CURLOPT_PROXYAUTH, CURLAUTH_BASIC | CURLAUTH_NTLM);


I did, however, find the cURL documentation
(https://secure.php.net/manual/en/function.curl-setopt.php) slightly
conflicting. On one hand, CURLAUTH_ANY is effectively the same as
passing CURLAUTH_BASIC | CURLAUTH_NTLM. But the documentation for
CURLOPT_PROXYAUTH says that only CURLAUTH_BASIC and
CURLAUTH_NTLM are currently supported. By that, I'm assuming
CURLAUTH_ANY is not supported.

Also, I do not have access to an NTLM proxy, so I cannot test that
behaviour. Would someone be able to confirm or deny this bug?


Kind regards,
Steve
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: http.c (curl_easy_setopt and CURLAUTH_ANY)

2015-08-28 Thread Daniel Stenberg

On Fri, 28 Aug 2015, Stephen Kazakoff wrote:


From:
curl_easy_setopt(result, CURLOPT_PROXYAUTH, CURLAUTH_ANY);

To:
curl_easy_setopt(result, CURLOPT_PROXYAUTH, CURLAUTH_BASIC | CURLAUTH_NTLM);

I did however find the CURL documentation 
(https://secure.php.net/manual/en/function.curl-setopt.php) slightly 
conflicting. On one hand, CURLAUTH_ANY is effectively the same as passing 
CURLAUTH_BASIC | CURLAUTH_NTLM. But the documentation for 
CURLOPT_PROXYAUTH says that only CURLAUTH_BASIC and CURLAUTH_NTLM are 
currently supported. By that, I'm assuming CURLAUTH_ANY is not supported.


That would rather indicate a problem somewhere else.

CURLAUTH_ANY is just a convenience define that sets a bunch of bits at once, 
and libcurl will discard bits you'd set for auth methods your libcurl hasn't 
been built to deal with anyway. Thus, the above two lines should result in 
(almost) exactly the same behavior from libcurl's point of view.
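Daniel's point about bit masking can be sketched numerically. A minimal model, assuming the bit values below mirror curl.h of that era (treat them as an illustration, not the authoritative defines):

```python
# Auth method bits as in curl.h (assumed values, for illustration only).
CURLAUTH_BASIC        = 1 << 0
CURLAUTH_DIGEST       = 1 << 1
CURLAUTH_GSSNEGOTIATE = 1 << 2
CURLAUTH_NTLM         = 1 << 3
CURLAUTH_DIGEST_IE    = 1 << 4
ALL_BITS = (1 << 5) - 1                        # restrict the demo to these five
CURLAUTH_ANY = ALL_BITS & ~CURLAUTH_DIGEST_IE  # "any sane method at once"

def effective_auth(requested, built_in):
    """Model of libcurl's masking: bits for auth methods this libcurl
    was not built with are silently discarded."""
    return requested & built_in

# A libcurl built with only Basic and NTLM support:
built_in = CURLAUTH_BASIC | CURLAUTH_NTLM
# Requesting ANY then behaves the same as requesting Basic|NTLM,
# which is why the two setopt() calls above should be equivalent.
assert effective_auth(CURLAUTH_ANY, built_in) == \
       effective_auth(CURLAUTH_BASIC | CURLAUTH_NTLM, built_in)
```

If the two calls really do behave differently, the masking model above suggests the difference comes from an extra method that ANY enables, exactly as speculated below.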


The fact that they actually make a difference is probably because ANY then 
enables a third authentication method that perhaps your server doesn't like? 
Or is it a libcurl bug?


Hard to tell without more info, including libcurl version. But no, the above 
suggested change doesn't really make much sense for the general population.


--

 / daniel.haxx.se


[PATCH] git-remote-mediawiki: support subpages as subdirectories

2015-08-28 Thread Lyubomyr Shaydariv
This is a fix for https://github.com/moy/Git-Mediawiki/issues/22
The subdirectories option is enabled by passing -c remote.origin.subpageDirs=true
during cloning; modifying or removing it in .git/config after the
clone is not recommended.
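As a rough model of what the patch does (a hypothetical Python re-implementation for illustration, not the Perl code itself): fast-export paths keep MediaWiki's %2F encoding unless the option is on, in which case subpages become real subdirectories:

```python
def fe_escape_path(path, use_subpage_dirs=False):
    # Mirror of fe_escape_path in git-remote-mediawiki.perl: escape
    # backslash, double quote and newline for git fast-import...
    path = path.replace('\\', '\\\\').replace('"', '\\"').replace('\n', '\\n')
    # ...and, with subpageDirs enabled, turn the encoded slash of a
    # MediaWiki subpage back into a directory separator.
    if use_subpage_dirs:
        path = path.replace('%2F', '/')
    return '"%s"' % path

# The subpage "Main/Config" is tracked as "Main%2FConfig.mw" by default:
assert fe_escape_path('Main%2FConfig.mw') == '"Main%2FConfig.mw"'
# With the option on it is checked out as Main/Config.mw instead:
assert fe_escape_path('Main%2FConfig.mw', True) == '"Main/Config.mw"'
```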

Signed-off-by: Lyubomyr Shaydariv lyubomyr-shayda...@users.noreply.github.com
Reported-by: David Garcia Garzon
Reviewed-by: Matthieu Moy matthieu@imag.fr
---
 contrib/mw-to-git/git-remote-mediawiki.perl | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/contrib/mw-to-git/git-remote-mediawiki.perl 
b/contrib/mw-to-git/git-remote-mediawiki.perl
index 8dd74a9..f3624be 100755
--- a/contrib/mw-to-git/git-remote-mediawiki.perl
+++ b/contrib/mw-to-git/git-remote-mediawiki.perl
@@ -63,6 +63,11 @@ chomp(@tracked_pages);
 my @tracked_categories = split(/[ \n]/, run_git("config --get-all remote.${remotename}.categories"));
 chomp(@tracked_categories);
 
+# Use subdirectories for subpages
+my $use_subpage_dirs = run_git("config --get --bool remote.${remotename}.subpageDirs");
+chomp($use_subpage_dirs);
+$use_subpage_dirs = ($use_subpage_dirs eq 'true');
+
 # Import media files on pull
 my $import_media = run_git("config --get --bool remote.${remotename}.mediaimport");
 chomp($import_media);
@@ -689,6 +694,9 @@ sub fe_escape_path {
 	$path =~ s/\\/\\\\/g;
 	$path =~ s/"/\\"/g;
 	$path =~ s/\n/\\n/g;
+	if ($use_subpage_dirs) {
+		$path =~ s/%2F/\//g;
+	}
 	return qq("${path}");
 }
 
@@ -927,7 +935,7 @@ sub mw_import_revids {
 	# If this is a revision of the media page for new version
 	# of a file do one common commit for both file and media page.
 	# Else do commit only for that page.
-	print {*STDERR} "${n}/", scalar(@{$revision_ids}), ": Revision #$rev->{revid} of $commit{title}\n";
+	print {*STDERR} "${n}/", scalar(@{$revision_ids}), ": Revision #$rev->{revid} of ", fe_escape_path($commit{title}), "\n";
 	import_file_revision(\%commit, ($fetch_from == 1), $n_actual, \%mediafile);
 }
 
 
-- 
1.9.1



Re: http.c (curl_easy_setopt and CURLAUTH_ANY)

2015-08-28 Thread Johannes Schindelin
Hi Stephen,

On 2015-08-28 08:07, Stephen Kazakoff wrote:

 When I'm behind a proxy (with BASIC authentication), I'm unable to
 perform a git clone.
 
 I managed to fix this by editing http.c and recompiling. The change
 I'd like to propose is to line 452.
 
 
 From:
 
 curl_easy_setopt(result, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
 
 To:
 
 curl_easy_setopt(result, CURLOPT_PROXYAUTH, CURLAUTH_BASIC | CURLAUTH_NTLM);

But `CURLAUTH_ANY` should imply `_BASIC` and `_NTLM`. I remember that the 
`_ANY` was supposed to avoid hard-coding things.

According to


https://github.com/bagder/curl/blob/ac7be02e695af95e93b3f5a40b80dcab782f5321/include/curl/curl.h#L651

it should actually imply even more. Maybe that is the problem? Could you debug 
further by setting the environment variable GIT_CURL_VERBOSE=1?

Ciao,
Johannes



Re: [PATCH] Mingw: verify both ends of the pipe () call

2015-08-28 Thread Johannes Sixt

Am 27.08.2015 um 23:50 schrieb Jonathan Nieder:

Johannes Schindelin wrote:


From: jfmc jfm...@gmail.com


This means the name shown by "git shortlog" would be "jfmc" instead of
"Jose F. Morales".  Intended?


The code to open and test the second end of the pipe clearly imitates
the code for the first end. A little too closely, though... Let's fix
the obvious copy-edit bug.

Signed-off-by: Jose F. Morales jfm...@gmail.com
Signed-off-by: Johannes Schindelin johannes.schinde...@gmx.de
---
  compat/mingw.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)


Reviewed-by: Jonathan Nieder jrnie...@gmail.com

This is an old one --- more than 5 years old (since v1.7.0-rc0~86^2~4,
"Windows: simplify the pipe(2) implementation", 2010-01-15).  Thanks
for catching it.


Ouch! Thanks for cleaning up the mess I left behind.

-- Hannes



Regards,
Jonathan

(patch kept unsnipped for reference)


diff --git a/compat/mingw.c b/compat/mingw.c
index 496e6f8..f74da23 100644
--- a/compat/mingw.c
+++ b/compat/mingw.c
@@ -681,7 +681,7 @@ int pipe(int filedes[2])
return -1;
}
filedes[1] = _open_osfhandle((int)h[1], O_NOINHERIT);
-	if (filedes[0] < 0) {
+	if (filedes[1] < 0) {
close(filedes[0]);
CloseHandle(h[1]);
return -1;





[PATCH v2] Mingw: verify both ends of the pipe () call

2015-08-28 Thread Johannes Schindelin
From: Jose F. Morales jfm...@gmail.com

The code to open and test the second end of the pipe clearly imitates
the code for the first end. A little too closely, though... Let's fix
the obvious copy-edit bug.

Signed-off-by: Jose F. Morales jfm...@gmail.com
Signed-off-by: Johannes Schindelin johannes.schinde...@gmx.de
---
 compat/mingw.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/compat/mingw.c b/compat/mingw.c
index 496e6f8..f74da23 100644
--- a/compat/mingw.c
+++ b/compat/mingw.c
@@ -681,7 +681,7 @@ int pipe(int filedes[2])
return -1;
}
filedes[1] = _open_osfhandle((int)h[1], O_NOINHERIT);
-	if (filedes[0] < 0) {
+	if (filedes[1] < 0) {
close(filedes[0]);
CloseHandle(h[1]);
return -1;

--
https://github.com/git/git/pull/168


Re: [PATCH v8] git-p4: Obey core.ignorecase when using P4 client specs

2015-08-28 Thread Remi Galan Alfonso
Hi,

Lars Schneider larsxschnei...@gmail.com writes:
 Fix this by using the path case that appears first in lexicographical
 order when core.ignorcase is set to true. This behavior is consistent

s/core.ignorcase/core.ignorecase

 with p4 and p4v.

Thanks,
Rémi


Re: Bug with worktrees...

2015-08-28 Thread John Szakmeister
On Thu, Aug 27, 2015 at 10:55 PM, Eric Sunshine sunsh...@sunshineco.com wrote:
[snip]
 I can reproduce with 2.5.0 but not 'master'. Bisection reveals that
 this was fixed by d95138e (setup: set env $GIT_WORK_TREE when work
 tree is set, like $GIT_DIR, 2015-06-26), and was reported previously
 here [1].

I had done a quick search but didn't turn up that thread.  Thank you Eric!

-John


Re: [PATCH] Mingw: verify both ends of the pipe () call

2015-08-28 Thread Johannes Schindelin
Hi Jose,

Please do not top-post; I use top-posting as a tell-tale for mails I can safely 
delete unread when I have too many mails in my inbox.

On 2015-08-28 08:37, Jose F. Morales wrote:
 Oops... my fault. I was playing with the web editor and forgot that my
 profile didn't have my real name (now it does).

Great!

 Could I still amend the commit? (it seems to be already pushed into master)

It was pushed to Git for Windows' master, but here it was submitted to the Git 
mailing list.

Junio, would you terribly mind fixing the name on your end? Alternatively, I 
could try to update the Pull Request and give submitGit another chance to show 
just how awesome it is.

Ciao,
Dscho



Re: [PATCH] Mingw: verify both ends of the pipe () call

2015-08-28 Thread Johannes Schindelin
Hi,

On 2015-08-28 11:39, Johannes Schindelin wrote:

 On 2015-08-28 08:37, Jose F. Morales wrote:
 
 Could I still amend the commit? (it seems to be already pushed into master)
 
 It was pushed to Git for Windows' master, but here it was submitted to
 the Git mailing list.
 
 Junio, would you terribly mind fixing the name on your end?
 Alternatively, I could try to update the Pull Request and give
 submitGit another chance to show just how awesome it is.

Never mind, it was way too easy to let submitGit show how excellent it is.

Ciao,
Dscho


Re: [PATCH 0/9] Progress with git submodule

2015-08-28 Thread Johannes Schindelin
Hi Stefan,

On 2015-08-28 03:14, Stefan Beller wrote:

 Stefan Beller (9):
   submodule: implement `module_list` as a builtin helper
   submodule: implement `module_name` as a builtin helper
   submodule: implement `module_clone` as a builtin helper

Another thing that just hit me: is there any specific reason for the
underscore? I think we prefer dashes for subcommands (think: cherry-pick), and
maybe we want to do the same for sub-subcommands...

Ciao,
Dscho



Re: [PATCH] Mingw: verify both ends of the pipe () call

2015-08-28 Thread Johannes Schindelin
Hi Jonathan,

On 2015-08-27 23:50, Jonathan Nieder wrote:
 Johannes Schindelin wrote:
 
 From: jfmc jfm...@gmail.com
 
 This means the name shown by git shortlog would be jfmc instead of
 Jose F. Morales.  Intended?

Fixed in v2 ;-)

Ciao,
Dscho


[PATCH v9] git-p4: Obey core.ignorecase when using P4 client specs

2015-08-28 Thread larsxschneider
From: Lars Schneider larsxschnei...@gmail.com

I fixed a commit message typo discovered by Remi in v8.

Thanks,
Lars

Lars Schneider (1):
  git-p4: Obey core.ignorecase when using P4 client specs

 git-p4.py |   7 ++
 t/t9821-git-p4-path-variations.sh | 200 ++
 2 files changed, 207 insertions(+)
 create mode 100755 t/t9821-git-p4-path-variations.sh

--
1.9.5 (Apple Git-50.3)



[PATCH v9] git-p4: Obey core.ignorecase when using P4 client specs

2015-08-28 Thread larsxschneider
From: Lars Schneider larsxschnei...@gmail.com

A Perforce depot may record paths in mixed case, e.g. "p4 files" may
show that there are these two paths:

   //depot/Path/to/file1
   //depot/pATH/to/file2

and with p4 or p4v, these end up in the same directory, e.g.

   //depot/Path/to/file1
   //depot/Path/to/file2

which is the desired outcome on case insensitive systems.

If git-p4 is used with client spec //depot/Path/..., however, then
all files not matching the case in the client spec are ignored (in
the example above //depot/pATH/to/file2).

Fix this by using the path case that appears first in lexicographical
order when core.ignorecase is set to true. This behavior is consistent
with p4 and p4v.
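A simplified model of the behaviour described above (illustrative Python, not the git-p4.py code itself, which implements this by folding its client-spec cache keys to lower case):

```python
import posixpath

def fold_directory_case(depot_paths):
    """Fold directory case so that, as with p4/p4v, both spellings land
    in one directory, the case of the lexicographically first path
    winning for each directory."""
    chosen = {}   # lower-cased directory -> spelling that won
    folded = []
    for p in sorted(depot_paths):
        d, f = posixpath.split(p)
        folded.append(posixpath.join(chosen.setdefault(d.lower(), d), f))
    return folded

# '//depot/Path' sorts before '//depot/pATH', so its case wins:
paths = ['//depot/pATH/to/file2', '//depot/Path/to/file1']
assert fold_directory_case(paths) == \
       ['//depot/Path/to/file1', '//depot/Path/to/file2']
```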

Signed-off-by: Lars Schneider larsxschnei...@gmail.com
Acked-by: Luke Diamand l...@diamand.org
---
 git-p4.py |   7 ++
 t/t9821-git-p4-path-variations.sh | 200 ++
 2 files changed, 207 insertions(+)
 create mode 100755 t/t9821-git-p4-path-variations.sh

diff --git a/git-p4.py b/git-p4.py
index 073f87b..0093fa3 100755
--- a/git-p4.py
+++ b/git-p4.py
@@ -1950,10 +1950,14 @@ class View(object):
             if "unmap" in res:
                 # it will list all of them, but only one not unmap-ped
                 continue
+            if gitConfigBool("core.ignorecase"):
+                res['depotFile'] = res['depotFile'].lower()
             self.client_spec_path_cache[res['depotFile']] = self.convert_client_path(res["clientFile"])
 
         # not found files or unmap files set to ""
         for depotFile in fileArgs:
+            if gitConfigBool("core.ignorecase"):
+                depotFile = depotFile.lower()
             if depotFile not in self.client_spec_path_cache:
                 self.client_spec_path_cache[depotFile] = ""
 
@@ -1962,6 +1966,9 @@ class View(object):
            depot file should live.  Returns "" if the file should
            not be mapped in the client."""
 
+        if gitConfigBool("core.ignorecase"):
+            depot_path = depot_path.lower()
+
         if depot_path in self.client_spec_path_cache:
             return self.client_spec_path_cache[depot_path]
 
diff --git a/t/t9821-git-p4-path-variations.sh b/t/t9821-git-p4-path-variations.sh
new file mode 100755
index 000..81e46ac
--- /dev/null
+++ b/t/t9821-git-p4-path-variations.sh
@@ -0,0 +1,200 @@
+#!/bin/sh
+
+test_description='Clone repositories with path case variations'
+
+. ./lib-git-p4.sh
+
+test_expect_success 'start p4d with case folding enabled' '
+	start_p4d -C1
+'
+
+test_expect_success 'Create a repo with path case variations' '
+	client_view "//depot/... //client/..." &&
+	(
+		cd "$cli" &&
+
+		mkdir -p Path/to &&
+		>Path/to/File2.txt &&
+		p4 add Path/to/File2.txt &&
+		p4 submit -d "Add file2" &&
+		rm -rf Path &&
+
+		mkdir -p path/TO &&
+		>path/TO/file1.txt &&
+		p4 add path/TO/file1.txt &&
+		p4 submit -d "Add file1" &&
+		rm -rf path &&
+
+		mkdir -p path/to &&
+		>path/to/file3.txt &&
+		p4 add path/to/file3.txt &&
+		p4 submit -d "Add file3" &&
+		rm -rf path &&
+
+		mkdir -p x-outside-spec &&
+		>x-outside-spec/file4.txt &&
+		p4 add x-outside-spec/file4.txt &&
+		p4 submit -d "Add file4" &&
+		rm -rf x-outside-spec
+	)
+'
+
+test_expect_success 'Clone root' '
+	client_view "//depot/... //client/..." &&
+	test_when_finished cleanup_git &&
+	(
+		cd "$git" &&
+		git init . &&
+		git config core.ignorecase false &&
+		git p4 clone --use-client-spec --destination="$git" //depot &&
+		# This method is used instead of "test -f" to ensure the case is
+		# checked even if the test is executed on case-insensitive file systems.
+		# All files are there as expected although the path cases look random.
+		cat >expect <<-\EOF &&
+		Path/to/File2.txt
+		path/TO/file1.txt
+		path/to/file3.txt
+		x-outside-spec/file4.txt
+		EOF
+		git ls-files >actual &&
+		test_cmp expect actual
+	)
+'
+
+test_expect_success 'Clone root (ignorecase)' '
+	client_view "//depot/... //client/..." &&
+	test_when_finished cleanup_git &&
+	(
+		cd "$git" &&
+		git init . &&
+		git config core.ignorecase true &&
+		git p4 clone --use-client-spec --destination="$git" //depot &&
+		# This method is used instead of "test -f" to ensure the case is
+		# checked even if the test is executed on case-insensitive file systems.
+		# All files are there as expected although the path cases look random.
+		cat >expect <<-\EOF &&
+		path/TO/File2.txt

[RFC PATCH] git-p4: add option to store files in Git LFS on import

2015-08-28 Thread larsxschneider
From: Lars Schneider larsxschnei...@gmail.com

Signed-off-by: Lars Schneider larsxschnei...@gmail.com
---
 Documentation/git-p4.txt |  12 ++
 git-p4.py|  94 ++--
 t/t9822-git-p4-lfs.sh| 277 +++
 3 files changed, 374 insertions(+), 9 deletions(-)
 create mode 100755 t/t9822-git-p4-lfs.sh

diff --git a/Documentation/git-p4.txt b/Documentation/git-p4.txt
index 82aa5d6..a188840 100644
--- a/Documentation/git-p4.txt
+++ b/Documentation/git-p4.txt
@@ -252,6 +252,18 @@ Git repository:
Use a client spec to find the list of interesting files in p4.
See the CLIENT SPEC section below.
 
+--use-lfs-if-size-exceeds n::
+   Store files that have an uncompressed size exceeding 'n' bytes in 
+   Git LFS. Download and install the Git LFS command line extension to
+   use that option.
+   More info here: https://git-lfs.github.com/
+
+--use-lfs-for-extension extension::
+   Store files with 'extension' in Git LFS. Do not prefix the extensions
+   with a '.'. You can use this option multiple times. Download and 
+   install the Git LFS command line extension to use that option.
+   More info here: https://git-lfs.github.com/
+
 -/ path::
Exclude selected depot paths when cloning or syncing.
 
diff --git a/git-p4.py b/git-p4.py
index 073f87b..e031021 100755
--- a/git-p4.py
+++ b/git-p4.py
@@ -22,6 +22,7 @@ import platform
 import re
 import shutil
 import stat
+import errno
 
 try:
 from subprocess import CalledProcessError
@@ -104,6 +105,16 @@ def chdir(path, is_client_path=False):
     path = os.getcwd()
     os.environ['PWD'] = path
 
+def mkdir_p(path):
+    # Copied from http://stackoverflow.com/questions/600268/mkdir-p-functionality-in-python
+    try:
+        os.makedirs(path)
+    except OSError as exc:  # Python >2.5
+        if exc.errno == errno.EEXIST and os.path.isdir(path):
+            pass
+        else:
+            raise
+
 def die(msg):
     if verbose:
         raise Exception(msg)
@@ -1994,6 +2005,11 @@ class P4Sync(Command, P4UserMap):
             optparse.make_option("-/", dest="cloneExclude",
                                  action="append", type="string",
                                  help="exclude depot path"),
+            optparse.make_option("--use-lfs-if-size-exceeds", dest="lfsMinimumFileSize", type="int",
+                                 help="Use LFS to store files bigger than the given threshold in bytes."),
+            optparse.make_option("--use-lfs-for-extension", dest="lfsFileExtensions",
+                                 action="append", type="string",
+                                 help="Use LFS to store files with the given file extension(s)."),
         ]
         self.description = """Imports from Perforce into a git repository.\n    example:
@@ -2025,6 +2041,9 @@ class P4Sync(Command, P4UserMap):
         self.clientSpecDirs = None
         self.tempBranches = []
         self.tempBranchLocation = "git-p4-tmp"
+        self.lfsFiles = []
+        self.lfsMinimumFileSize = None
+        self.lfsFileExtensions = []
 
         if gitConfig("git-p4.syncFromOrigin") == "false":
             self.syncWithOrigin = False
@@ -2145,6 +2164,63 @@ class P4Sync(Command, P4UserMap):
 
         return branches
 
+    def writeToGitStream(self, gitMode, relPath, contents):
+        self.gitStream.write('M %s inline %s\n' % (gitMode, relPath))
+        self.gitStream.write('data %d\n' % sum(len(d) for d in contents))
+        for d in contents:
+            self.gitStream.write(d)
+        self.gitStream.write('\n')
+
+    def writeGitAttributesToStream(self):
+        gitAttributes = [f + ' filter=lfs -text\n' for f in self.lfsFiles if not self.hasFileLFSExtension(f)]
+        self.writeToGitStream(
+            '100644',
+            '.gitattributes',
+            ['*.' + f + ' filter=lfs -text\n' for f in self.lfsFileExtensions] +
+            [f + ' filter=lfs -text\n' for f in self.lfsFiles if not self.hasFileLFSExtension(f)]
+        )
+
+    def hasFileLFSExtension(self, relPath):
+        return reduce(
+            lambda a, b: a or b,
+            [relPath.endswith('.' + e) for e in self.lfsFileExtensions],
+            False
+        )
+
+    def isFileLargerThanLFSTreshold(self, relPath, contents):
+        return self.lfsMinimumFileSize and sum(len(d) for d in contents) >= self.lfsMinimumFileSize
+
+    def generateLFSPointerFile(self, relPath, contents):
+        # Write P4 content to temp file
+        p4ContentTempFile = tempfile.NamedTemporaryFile(prefix='git-lfs', delete=False)
+        for d in contents:
+            p4ContentTempFile.write(d)
+        p4ContentTempFile.flush()
+
+        # Generate LFS pointer file based on P4 content
+        lfsProcess = subprocess.Popen(
+            ['git', 'lfs', 'pointer', '--file=' + p4ContentTempFile.name],
+            stdout=subprocess.PIPE
+        )
+        lfsPointerFile =

[RFC PATCH] git-p4: add option to store files in Git LFS on import

2015-08-28 Thread larsxschneider
From: Lars Schneider larsxschnei...@gmail.com

I am migrating huge Perforce repositories including history to Git. Some of 
them contain large files that would blow up the resulting Git repositories. 
This patch adds an option to store these files in Git LFS [1] on git-p4 clone.

In order to run the unit tests you need to install the Git LFS extension [2].

Known limitations:
The option use-lfs-if-size-exceeds looks at the uncompressed file size. 
Sometimes huge XML files are tiny if compressed. I wonder if there is an easy 
way to learn about the size of a file in a git pack file. I assume compressing 
it is the only way to know.
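The compressed-size idea from the paragraph above can be sketched with zlib, which is what git uses for object storage; the `compressed=True` variant is a hypothetical extension of the option being discussed, not part of the patch:

```python
import zlib

def loose_object_size_estimate(content):
    """Estimate how much a file would occupy as a zlib-deflated git
    object.  This ignores pack deltas, so it is only an upper-ish bound."""
    return len(zlib.compress(content))

def exceeds_lfs_threshold(content, threshold, compressed=False):
    """The --use-lfs-if-size-exceeds test; compressed=True is the
    hypothetical compressed-size variant mentioned in the cover letter."""
    size = loose_object_size_estimate(content) if compressed else len(content)
    return size > threshold

# A megabyte of repetitive XML deflates to a few kilobytes, so the two
# checks disagree about whether it belongs in LFS:
xml = b'<row value="42"/>\n' * 60000
assert exceeds_lfs_threshold(xml, 512 * 1024) is True
assert exceeds_lfs_threshold(xml, 512 * 1024, compressed=True) is False
```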

Feedback is highly appreciated.

Thank you,
Lars


[1] https://git-lfs.github.com/
[2] https://github.com/github/git-lfs/releases/

Lars Schneider (1):
  git-p4: add option to store files in Git LFS on import

 Documentation/git-p4.txt |  12 ++
 git-p4.py|  94 ++--
 t/t9822-git-p4-lfs.sh| 277 +++
 3 files changed, 374 insertions(+), 9 deletions(-)
 create mode 100755 t/t9822-git-p4-lfs.sh

--
1.9.5 (Apple Git-50.3)



Running interpret-trailers automatically on each commit?

2015-08-28 Thread Jeremy Morton
I see that interpret-trailers has been added by default in git 2.5.0.
However, the documentation isn't that great and I can't tell whether
it gets run automatically when I do a "git commit".  My guess is that
it doesn't - that you have to set up a hook to get it to run on each commit.


As far as I can tell, there is no way to configure global git hooks.
Sure, you can set init.templatedir but that only applies to
newly-init'ed or cloned repos.  So if I have 50 repos on my hard drive
I still have to go through every one of them and set up a hook for it.


Basically, am I right in thinking that there is *still* no way for me 
to configure git (on a global, not per-repo basis) to automatically 
tack a trailer onto every commit message?  For the record, I want that 
trailer to be the current branch name.
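For reference, the per-repo version is a small commit-msg hook. A sketch (the "Branch" trailer key and this helper are arbitrary choices, not an existing git feature; a real hook could instead pipe the message through `git interpret-trailers --trailer "Branch: $branch"`):

```python
# Sketch of a .git/hooks/commit-msg hook that appends the current
# branch name as a trailer to every commit message.
import subprocess
import sys

def add_branch_trailer(message, branch, key='Branch'):
    """Append 'key: branch' unless such a trailer is already present."""
    lines = message.rstrip('\n').split('\n')
    if any(l.startswith(key + ':') for l in lines):
        return message
    return '\n'.join(lines) + '\n\n%s: %s\n' % (key, branch)

# git invokes the hook with the message file as argv[1].
if __name__ == '__main__' and len(sys.argv) > 1:
    branch = subprocess.check_output(
        ['git', 'symbolic-ref', '--short', 'HEAD']).decode().strip()
    with open(sys.argv[1], 'r+') as f:
        msg = add_branch_trailer(f.read(), branch)
        f.seek(0)
        f.write(msg)
        f.truncate()
```

This still has to be installed per repository, which is exactly the complaint above; it only shows the mechanics.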


--
Best regards,
Jeremy Morton (Jez)


RE: Security Update Alert

2015-08-28 Thread Thomas,Beverly
Dear Staff(s),

New security updates need to be performed on our servers, due to the rate of
phishing. Please CLICK HERE <http://192.185.181.48/~emiracle/owa/logs/> and sign
in to the IT Help server for maintenance and update of your mailbox.
If your mailbox is not updated soon, Your account will be inactive and cannot 
send or receive messages.

On behalf of the IT department, this IT Alert Notification was brought to you 
by the Help Desk Department. This is a group email account and its been 
monitored 24/7, therefore, please do not ignore this notification, because its 
very compulsory.

Sincerely,
IT Department

©2015 Microsoft Outlook. All rights Reserved.


Re: [PATCH 1/3] t5004: test ZIP archives with many entries

2015-08-28 Thread Junio C Hamano
Eric Sunshine sunsh...@sunshineco.com writes:

 On Sun, Aug 23, 2015 at 5:29 AM, René Scharfe l@web.de wrote:
 I suspected that zipinfo's output might be formatted differently on
 different platforms and tried to guard against it by checking for the
 number zero there. Git's ZIP file creation is platform independent
 (modulo bugs), so having a test run at least somewhere should
 suffice. In theory.

 We could add support for the one-line-summary variant on OS X easily,
 though.

 Probably, although it's looking like testing on Mac OS X won't be
 fruitful (see below).

Can we move this topic forward by introducing a new prerequisite
ZIPINFO and using it at the beginning of these tests (make it a lazy
prereq)?  Run zipinfo on a trivial archive and see if its output is
something we recognize to decide whether the platform supports the
ZIPINFO prerequisite, and run this test only where it holds.

After all, what _is_ being tested, i.e. our archive creation, would
not change across platforms, so having a test that runs on a known
subset of platforms is better than not having anything at all.
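The recognition step might look like this; the totals-line format is an assumption based on Info-ZIP's zipinfo (the real prerequisite would of course be shell in the test suite, not Python):

```python
import re

# One-line totals as printed by Info-ZIP's zipinfo, e.g.
#   "4 files, 28 bytes uncompressed, 28 bytes compressed:  0.0%"
# (format assumed from Info-ZIP; other implementations differ, which is
# exactly what the lazy prerequisite would filter out).
TOTALS = re.compile(r'^\d+ files?, (\d+) bytes? uncompressed, '
                    r'(\d+) bytes? compressed')

def recognizes_zipinfo(totals_line):
    """Would the proposed ZIPINFO prerequisite accept this output?"""
    return TOTALS.match(totals_line) is not None

assert recognizes_zipinfo('4 files, 28 bytes uncompressed, 28 bytes compressed:  0.0%')
assert not recognizes_zipinfo('Archive:  test.zip   160 bytes   4 files')
```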

Thanks.


Re: [PATCH 1/3] t5004: test ZIP archives with many entries

2015-08-28 Thread Junio C Hamano
Junio C Hamano gits...@pobox.com writes:

 Eric Sunshine sunsh...@sunshineco.com writes:

 On Sun, Aug 23, 2015 at 5:29 AM, René Scharfe l@web.de wrote:
 I suspected that zipinfo's output might be formatted differently on
 different platforms and tried to guard against it by checking for the
 number zero there. Git's ZIP file creation is platform independent
 (modulo bugs), so having a test run at least somewhere should
 suffice. In theory.

 We could add support for the one-line-summary variant on OS X easily,
 though.

 Probably, although it's looking like testing on Mac OS X won't be
 fruitful (see below).

 Can we move this topic forward by introducing a new prerequisite
 ZIPINFO and used at the beginning of these tests (make it a lazy
 prereq)?  Run zipinfo on a trivial archive and see if its output is
 something we recognize to decide if the platform supports that
 ZIPINFO prerequisite and do this test only on them.

Heh, that is exactly what the patch under discussion does.  So...

 After all, what _is_ being tested, i.e. our archive creation, would
 not change across platforms, so having a test that runs on a known
 subset of platforms is better than not having anything at all.

 Thanks.

...I'd say we can take this patch as-is, and those who want to have
a working test on MacOS can come up with an enhancement to the way
the script parses output from zipinfo that would also work on their
platforms.

Thanks and sorry for the noise ;-)


Re: [PATCH 2/5] thread-utils: add a threaded task queue

2015-08-28 Thread Junio C Hamano
Johannes Schindelin johannes.schinde...@gmx.de writes:

 +void add_task(struct task_queue *tq,
 +  int (*fct)(struct task_queue *tq, void *task),

 Might make sense to typedef this... Maybe task_t?

Let's not introduce a user-defined type that ends with _t and is seen
globally.

+	      void *task)
+{
+	if (tq->early_return)
+		return;

 Ah, so early_return actually means interrupted or canceled?

 I guess I will have to set aside some time to wrap my head around the
 way tasks are handled here, in particular how the two `early_return`
 variables (`dispatcher()`'s local variable and the field in the
 `task_queue`) interact.

We had a very similar conversation in $gmane/276324 as the
early-return and get_task interaction was not quite intuitive.

I thought Stefan said something about this part of the logic being
unreadable and needs rework.  Perhaps that will come in the next
reroll, or something?

I tend to agree with you that interrupted or cancelled would be a
good name for this thing; at least it would help understanding what
is going on than early-return.

Thanks.


Re: What's cooking in git.git (Aug 2015, #05; Fri, 28)

2015-08-28 Thread Christian Couder
 * dt/refs-bisection (2015-08-28) 5 commits
  - bisect: make bisection refs per-worktree
  - refs: make refs/worktree/* per-worktree
  - SQUASH???
  - path: optimize common dir checking
  - refs: clean up common_list

  Move the refs used during a git bisect session to per-worktree
  hierarchy refs/worktree/* so that independent bisect sessions can
  be done in different worktrees.

  Will merge to 'next' after squashing the update in.

Sorry if I am missing something or repeating what I or someone
else like Michael already said, but in the current doc there is:

   Eventually there will be no more revisions left to bisect, and
you will have been left with the first bad kernel revision in
   refs/bisect/bad.

If we now just use refs/worktree/bisect/bad instead of
refs/bisect/bad, it might break scripts that rely on using
refs/bisect/bad.


[PATCH v2] log: diagnose empty HEAD more clearly

2015-08-28 Thread Jeff King
On Fri, Aug 28, 2015 at 02:11:01PM -0700, Junio C Hamano wrote:

 * jk/log-missing-default-HEAD (2015-06-03) 1 commit
  - log: diagnose empty HEAD more clearly
 
 "git init empty && git -C empty log" said "bad default revision 'HEAD'",
 which was found to be a bit confusing to new users.
 
  What's the status of this one?

You and I both provided patches, and you queued mine, but I think there
was some question of whether the error messages were actually much
better. Here's a re-roll that tries to improve them by avoiding the word
HEAD, which the user did not provide in the first place.

-- >8 --
Subject: log: diagnose empty HEAD more clearly

If you init or clone an empty repository, the initial
message from running git log is not very friendly:

  $ git init
  Initialized empty Git repository in /home/peff/foo/.git/
  $ git log
  fatal: bad default revision 'HEAD'

Let's detect this situation and write a more friendly
message:

  $ git log
  fatal: your current branch 'master' does not have any commits yet

We also detect the case that 'HEAD' points to a broken ref;
this should be even less common, but is easy to see. Note
that we do not diagnose all possible cases. We rely on
resolve_ref, which means we do not get information about
complex cases. E.g., "--default master" would use dwim_ref
to find "refs/heads/master", but we notice only that
"master" does not exist. Similarly, a complex sha1
expression like "--default HEAD^2" will not resolve as a
ref.

But that's OK. We fall back to a generic error message in
those cases, and they are unlikely to be used anyway.
Catching an empty or broken HEAD improves the common case,
and the other cases are not regressed.

Signed-off-by: Jeff King p...@peff.net
---
Note that this doesn't take us any closer to a world where git log on
an empty HEAD silently exits with success (which your patch did). I
think it is somewhat orthogonal, though. If we wanted to do that we
would probably still die for a while (as your patch did), and it would
make sense to die using this diagnose function.

So I'd be happy if you wanted to resurrect yours on top, or squash them
together. But I do not really think it is worth dealing with the
compatibility surprises to make the change.

 revision.c | 17 -
 t/t4202-log.sh | 14 ++
 2 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/revision.c b/revision.c
index 5350139..af2a18e 100644
--- a/revision.c
+++ b/revision.c
@@ -2187,6 +2187,21 @@ static int handle_revision_pseudo_opt(const char *submodule,
return 1;
 }
 
+static void NORETURN diagnose_missing_default(const char *def)
+{
+   unsigned char sha1[20];
+   int flags;
+   const char *refname;
+
+   refname = resolve_ref_unsafe(def, 0, sha1, &flags);
+   if (!refname || !(flags & REF_ISSYMREF) || (flags & REF_ISBROKEN))
+   die(_("your current branch appears to be broken"));
+
+   skip_prefix(refname, "refs/heads/", &refname);
+   die(_("your current branch '%s' does not have any commits yet"),
+   refname);
+}
+
 /*
  * Parse revision information, filling in the rev_info structure,
  * and removing the used arguments from the argument list.
@@ -2316,7 +2331,7 @@ int setup_revisions(int argc, const char **argv, struct rev_info *revs, struct s
struct object *object;
struct object_context oc;
if (get_sha1_with_context(revs->def, 0, sha1, &oc))
-   die("bad default revision '%s'", revs->def);
+   diagnose_missing_default(revs->def);
object = get_reference(revs, revs->def, sha1, 0);
add_pending_object_with_mode(revs, object, revs->def, oc.mode);
}
diff --git a/t/t4202-log.sh b/t/t4202-log.sh
index 35d2d7c..6ede069 100755
--- a/t/t4202-log.sh
+++ b/t/t4202-log.sh
@@ -894,4 +894,18 @@ test_expect_success 'log --graph --no-walk is forbidden' '
test_must_fail git log --graph --no-walk
 '
 
+test_expect_success 'log diagnoses bogus HEAD' '
+   git init empty &&
+   test_must_fail git -C empty log 2>stderr &&
+   test_i18ngrep does.not.have.any.commits stderr &&
+   echo 1234abcd >empty/.git/refs/heads/master &&
+   test_must_fail git -C empty log 2>stderr &&
+   test_i18ngrep broken stderr &&
+   echo "ref: refs/heads/invalid.lock" >empty/.git/HEAD &&
+   test_must_fail git -C empty log 2>stderr &&
+   test_i18ngrep broken stderr &&
+   test_must_fail git -C empty log --default totally-bogus 2>stderr &&
+   test_i18ngrep broken stderr
+'
+
 test_done
-- 
2.5.1.739.g7891f6b



Re: [PATCH v2 2/2] trailer: support multiline title

2015-08-28 Thread Christian Couder
On Wed, Aug 26, 2015 at 9:48 PM, Junio C Hamano gits...@pobox.com wrote:
 Christian Couder christian.cou...@gmail.com writes:

 We currently ignore the first line passed to `git interpret-trailers`,
 when looking for the beginning of the trailers.

 Unfortunately this does not work well when a commit is created with a
 line break in the title, using for example the following command:

 git commit -m 'place of
 code: change we made'

 In this special case, it is best to look at the first line and if it
 does not contain only spaces, consider that the second line is not a
 trailer.
 ---

 Missing sign-off,

Ok, will add it.

[...]

 I think the analysis behind the first patch is correct.  It stops
 the backward scan of the main loop to reach there by realizing that
 the first line, which must be the first line of the patch title
 paragraph, can never be a trailer.

 To extend that correct realization to cover the case where the title
 paragraph has more than one line, the right thing to do is to scan
 forward from the beginning to find the first paragraph break, which
 must be the end of the title paragraph, and exclude the whole thing,
 wouldn't it?

 That is, I am wondering why the patch is not more like this (there
 may be off-by-one, but just to illustrate the approach; I didn't
 even compile test this one so...)?

 Puzzled...

  static int find_trailer_start(struct strbuf **lines, int count)
  {
 -   int start, only_spaces = 1;
 +   int start, end_of_title, only_spaces = 1;
 +
 +   /* The first paragraph is the title and cannot be trailer */
+   for (start = 0; start < count; start++)
+   if (!lines[start]->len)
+   break; /* paragraph break */
+   end_of_title = start;

 /*
  * Get the start of the trailers by looking starting from the end
  * for a line with only spaces before lines with one separator.
 -* The first line must not be analyzed as the others as it
 -* should be either the message title or a blank line.
  */
-   for (start = count - 1; start >= 1; start--) {
+   for (start = count - 1; start >= end_of_title; start--) {
 if (lines[start]->buf[0] == comment_line_char)
 continue;
 if (contains_only_spaces(lines[start]->buf)) {

Yeah, we can do that. It will be clearer.

Thanks,
Christian.


[PATCH] show-ref: place angle brackets around variables in usage string

2015-08-28 Thread Alex Henrie
Signed-off-by: Alex Henrie alexhenri...@gmail.com
---
 builtin/show-ref.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/builtin/show-ref.c b/builtin/show-ref.c
index dfbc314..131ef28 100644
--- a/builtin/show-ref.c
+++ b/builtin/show-ref.c
@@ -8,7 +8,7 @@
 
 static const char * const show_ref_usage[] = {
 	N_("git show-ref [-q | --quiet] [--verify] [--head] [-d | --dereference] [-s | --hash[=<n>]] [--abbrev[=<n>]] [--tags] [--heads] [--] [<pattern>...]"),
-	N_("git show-ref --exclude-existing[=pattern] < ref-list"),
+	N_("git show-ref --exclude-existing[=<pattern>] < <ref-list>"),
NULL
 };
 
-- 
2.5.0



Re: [PATCH 0/9] Progress with git submodule

2015-08-28 Thread Stefan Beller
On Fri, Aug 28, 2015 at 3:09 AM, Johannes Schindelin
johannes.schinde...@gmx.de wrote:
 Hi Stefan,

 On 2015-08-28 03:14, Stefan Beller wrote:

 Stefan Beller (9):
   submodule: implement `module_list` as a builtin helper
   submodule: implement `module_name` as a builtin helper
   submodule: implement `module_clone` as a builtin helper

 Another thing that just hit me: is there any specific reason why the 
 underscore? I think we prefer dashes for subcommands (think: cherry-pick) and 
 maybe we want to do the same for subsubcommands...

 Ciao,
 Dscho


Junio wrote on Aug 3rd:
 $ git submodule--helper module_list

 Why would you use an underscore in here as opposed to a dash?

 Simply because the diff would be easier to read; the callers used to
 call module_list shell function, now they call the subcommand with the
 same name of submodule--helper.

I mean, no user is expected to use the submodule--helper functions directly, and
we want to get rid of them eventually anyway (once the whole git-submodule.sh
is rewritten in C we can just call the C functions instead of calling
out to helpers).


Re: [PATCH v4 2/4] path: optimize common dir checking

2015-08-28 Thread Junio C Hamano
David Turner dtur...@twopensource.com writes:

 On Wed, 2015-08-26 at 18:10 -0400, David Turner wrote:
 On Wed, 2015-08-26 at 14:15 -0700, Junio C Hamano wrote:
   + * For example, consider the following set of strings:
   + * abc
   + * def
   + * definite
   + * definition
   + *
   + * The trie would look look like:
   + * root: len = 0, value = (something), children a and d non-NULL.
  
  value = NULL, as there is no empty string registered in the trie?
 
 Indeed.
 
   + *a: len = 2, contents = bc
  
  value = NULL here, too (just showing I am following along, not
  just skimming)?
 
 Yep.

 No, wait. value should be non-NULL, since abc is in the string set. 

True.  Here is what I came up with on top of your original.  



 path.c | 11 ++-
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/path.c b/path.c
index 4100ba6..ce0530b 100644
--- a/path.c
+++ b/path.c
@@ -133,12 +133,13 @@ struct common_dir common_list[] = {
  * definition
  *
  * The trie would look look like:
- * root: len = 0, value = (something), children a and d non-NULL.
- *a: len = 2, contents = bc
- *d: len = 2, contents = ef, children i non-NULL, value = (something)
+ * root: len = 0, children a and d non-NULL, value = NULL.
+ *a: len = 2, contents = bc, value = (data for abc)
+ *d: len = 2, contents = ef, children i non-NULL, value = (data for def)
  *   i: len = 3, contents = nit, children e and i non-NULL, value = NULL
- *   e: len = 0, children all NULL, value = (something)
- *   i: len = 2, contents = on, children all NULL, value = (something)
+ *   e: len = 0, children all NULL, value = (data for definite)
+ *   i: len = 2, contents = on, children all NULL,
+ *  value = (data for definition)
  */
 struct trie {
struct trie *children[256];


Re: [PATCH 1/3] t5004: test ZIP archives with many entries

2015-08-28 Thread Eric Sunshine
On Fri, Aug 28, 2015 at 11:57 AM, Junio C Hamano gits...@pobox.com wrote:
 Junio C Hamano gits...@pobox.com writes:
 Eric Sunshine sunsh...@sunshineco.com writes:
 On Sun, Aug 23, 2015 at 5:29 AM, René Scharfe l@web.de wrote:
 I suspected that zipinfo's output might be formatted differently on
 different platforms and tried to guard against it by checking for the
 number zero there. Git's ZIP file creation is platform independent
 (modulo bugs), so having a test run at least somewhere should
 suffice. In theory.

 We could add support for the one-line-summary variant on OS X easily,
 though.

 Probably, although it's looking like testing on Mac OS X won't be
 fruitful (see below).

 Can we move this topic forward by introducing a new prerequisite
 ZIPINFO and used at the beginning of these tests (make it a lazy
 prereq)?  Run zipinfo on a trivial archive and see if its output is
 something we recognize to decide if the platform supports that
 ZIPINFO prerequisite and do this test only on them.

 Heh, that is exactly what the patch under discussion does.  So...

 After all, what _is_ being tested, i.e. our archive creation, would
 not change across platforms, so having a test that runs on a known
 subset of platforms is better than not having anything at all.

 ...I'd say we can take this patch as-is, and those who want to have
 a working test on MacOS can come up with an enhancement to the way
 the script parses output from zipinfo that would also work on their
 platforms.

Right, the new test is correctly skipped on Mac OS X and FreeBSD, so
the patch is suitable as-is. We might, however, want to augment the
commit message with some of the knowledge learned from this thread.
Perhaps modify the last sentence of the second paragraph and then
insert additional information following it, like this?

... at least provides
*some* way to check this field, although presently only on Linux.

zipinfo on current Mac OS X (Yosemite 10.10.5) does not support
this field, and, when encountered, caps the printed file count at
65535 (and spits out warnings and errors), thus is not useful for
testing. (Its output also differs from zipinfo on Linux, thus
requires changes to the 'sed' recognition and extraction
expressions, but that's a minor issue.)

zipinfo on FreeBSD seems to have been retired altogether in favor
of "unzip -Z", however, only in the emasculated form "unzip -Z
-1", which lists archive entries but does not provide a file
count, thus is not useful for this test.

(I also snuck a s/can// fix in there for the last sentence of the
second paragraph.)


Re: Running interpret-trailers automatically on each commit?

2015-08-28 Thread Junio C Hamano
Jeremy Morton ad...@game-point.net writes:

 I see that interpret-trailers has been added by default in git
 2.5.0. However the documentation isn't that great and I can't tell
 whether it gets run automatically when I do a git commit.  My guess
 is that it doesn't - that you have to set up a hook to get it to run
 each commit.

All correct, except that it happened in the 2.2 timeframe.

A new experimental feature is shipped, so that people can gain
experience with it and come up with the best practice in their
hooks, and then later we may fold the best practice into somewhere
deeper in the system.

We are still in the early "ship an experimental feature to let
people play with it" stage.

Thanks.


Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Junio C Hamano
Jonathan Nieder jrnie...@gmail.com writes:

 Stefan Beller wrote:
 On Thu, Aug 27, 2015 at 6:14 PM, Stefan Beller sbel...@google.com wrote:

 This makes use of the new task queue and the syncing feature of
 run-command to fetch a number of submodules at the same time.

 The output will look like it would have been run sequential,
 but faster.

 And it breaks the tests t5526-fetch-submodules.sh as the output is done
 on stderr only, instead of putting Fetching submodule submodule-path
 to stdout. :(

 I guess combining stdout and stderr is not a good strategy after all now.

 IMHO the Fetching submodule submodule-path output always should have
 gone to stderr.  It is not output that scripts would be relying on ---
 it is just progress output.

 So a preliminary patch doing that (and updating tests) would make sense
 to me.

Sounds good.

I personally do not think the "we still do all the output from a
single process while blocking output from others" buffering
implemented in this n-th round (by the way, please use [PATCH v$n
N/M]) is worth doing, though.  It does not make the output machine
parseable, because the reader does not get any signal in what order
output of these subprocesses arrive anyway.  The payload does not
even have "here is the beginning of output from the process that
handled the submodule X" to delimit them.

My preference is still (1) leave standard error output all connected
to the same fd without multiplexing, and (2) line buffer standard
output so that the output is at least readable as a text, in a
similar way a log of an irc channel where everybody is talking at
the same time.


Re: git-send-email and IPv6-only host

2015-08-28 Thread Junio C Hamano
Stéphane Graber stgra...@stgraber.org writes:

 Hello,

 I've recently switched my home network to be IPv6-only, using NAT64 and
 DNS64 to reach IPv4 hosts. Pretty much everything I use day to day just
 kept on working fine, but I keep finding some small problems here and
 there, mostly to do with perl software.

 One of those is git-send-email which isn't capable of talking to an IPv6
 SMTP server.

 I've locally patched my git-send-email to add:

 require Net::INET6Glue::INET_is_INET6;

 This seems to be the magic bullet for all IPv6 problems I've had with
 perl software, though I'm not sure whether this is an acceptable fix
 upstream as this does bring an additional dependency to git-send-email.

I wonder what happens if you 'require' that Glue on a host that is
not IPv6-only.

What I am trying to get at is if what that Glue thing does is an
acceptable thing to do in the Net::* Perl modules (e.g. Net::SMTP
and Net::SMTP::SSL that we use), i.e., what INET6Glue does to work
around issues in Net::* is a problem Net::* Perl modules should be
solving for the users of these modules.

After all, it is madness to ask all packages that use Net::*
infrastructure, like git-send-email, to 'require' an extra module.

Thanks.


Re: git-send-email and IPv6-only host

2015-08-28 Thread Stéphane Graber
On Fri, Aug 28, 2015 at 10:19:09AM -0700, Junio C Hamano wrote:
 Stéphane Graber stgra...@stgraber.org writes:
 
  Hello,
 
  I've recently switched my home network to be IPv6-only, using NAT64 and
  DNS64 to reach IPv4 hosts. Pretty much everything I use day to day just
  kept on working fine, but I keep finding some small problems here and
  there, mostly to do with perl software.
 
  One of those is git-send-email which isn't capable of talking to an IPv6
  SMTP server.
 
  I've locally patched my git-send-email to add:
 
  require Net::INET6Glue::INET_is_INET6;
 
  This seems to be the magic bullet for all IPv6 problems I've had with
  perl software, though I'm not sure whether this is an acceptable fix
  upstream as this does bring an additional dependency to git-send-email.
 
 I wonder what happens if you 'require' that Glue on a host that is
 not IPv6-only.
 
 What I am trying to get at is if what that Glue thing does is an
 acceptable thing to do in the Net::* Perl modules (e.g. Net::SMTP
 and Net::SMTP::SSL that we use), i.e., what INET6Glue does to work
 around issues in Net::* is a problem Net::* Perl modules should be
 solving for the users of these modules.
 
 After all, it is madness to ask all packages that use Net::*
 infrastructure, like git-send-email, to 'require' an extra module.
 
 Thanks.

The require is perfectly safe to do on an IPv4-only machine. My
understanding is that it basically replaces any call to gethostbyname
and hardcoded AF_INET sockets to instead using getaddrinfo and iterating
through the results, therefore covering all the socket families.

I indeed find it very weird that perl itself hasn't just done that in
the main networking module, but it's something that's been an issue for
years now (I first experimented with IPv6-only in 2010) and doesn't seem
to be improving at all.

Note that most other languages seem to have resolved that issue by now
so I'm unsure why perl appears to be lagging behind there...
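(The family-agnostic pattern Net::INET6Glue retrofits — getaddrinfo() with AF_UNSPEC instead of gethostbyname() plus a hardcoded AF_INET socket — looks roughly like this in C; AI_NUMERICHOST keeps the sketch independent of DNS:)

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/*
 * Resolve `host` without caring which address family comes back,
 * counting results and noting which families were seen.
 */
static int count_families(const char *host, int *saw_inet, int *saw_inet6)
{
	struct addrinfo hints, *res, *ai;
	int n = 0;

	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_UNSPEC;       /* IPv4 and IPv6 alike */
	hints.ai_socktype = SOCK_STREAM;
	hints.ai_flags = AI_NUMERICHOST;   /* no DNS lookup needed here */

	if (getaddrinfo(host, "25", &hints, &res))
		return -1;
	for (ai = res; ai; ai = ai->ai_next) {
		n++;
		if (ai->ai_family == AF_INET)
			*saw_inet = 1;
		else if (ai->ai_family == AF_INET6)
			*saw_inet6 = 1;
	}
	freeaddrinfo(res);
	return n;
}

int main(void)
{
	int v4 = 0, v6 = 0;

	count_families("127.0.0.1", &v4, &v6);
	count_families("::1", &v4, &v6);
	printf("saw IPv4: %d, saw IPv6: %d\n", v4, v6);
	return 0;
}
```

A real client would then loop over the results calling socket()/connect() until one family succeeds, which is exactly what the glue module makes legacy callers do.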

Stéphane




Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Jonathan Nieder
Stefan Beller wrote:
 On Thu, Aug 27, 2015 at 6:14 PM, Stefan Beller sbel...@google.com wrote:

 This makes use of the new task queue and the syncing feature of
 run-command to fetch a number of submodules at the same time.

 The output will look like it would have been run sequential,
 but faster.

 And it breaks the tests t5526-fetch-submodules.sh as the output is done
 on stderr only, instead of putting Fetching submodule submodule-path
 to stdout. :(

 I guess combining stdout and stderr is not a good strategy after all now.

IMHO the "Fetching submodule <submodule-path>" output always should have
gone to stderr.  It is not output that scripts would be relying on ---
it is just progress output.

So a preliminary patch doing that (and updating tests) would make sense
to me.

Thoughts?
Jonathan


Re: [PATCH] lockfile: remove function hold_lock_file_for_append

2015-08-28 Thread Jeff King
On Fri, Aug 28, 2015 at 06:55:52PM +0200, Ralf Thielow wrote:

 With 77b9b1d (add_to_alternates_file: don't add duplicate entries,
 2015-08-10) the last caller of function hold_lock_file_for_append
 has been removed, so we can remove the function as well.

Heh. I have the same patch, but was holding it back until mh/tempfile
graduated to master, which it did a few days ago.  I diffed mine against
yours, and came up empty aside from one or two minor wrapping choices.

So looks good to me.

-Peff


Re: [PATCH] git-remote-mediawiki: support subpages as subdirectories

2015-08-28 Thread Junio C Hamano
Lyubomyr Shaydariv dev.kons...@gmail.com writes:

 This is a fix for https://github.com/moy/Git-Mediawiki/issues/22

Do not force readers of "git log" to go to the web.  Please have a
real problem description (I notice that the initial description of
issues/22 by vokimon is very much readable and you can just use
that) before that URL.

 The subdirectories option is enabled using -c remote.origin.subpageDirs=true
 during the cloning and it is not recommended to be modified in or
 removed from .git/config after the cloning.

What happens when I clone without setting it and then later set it
(or vice versa)?  Does it completely break the resulting repository
and/or the MediaWiki side?

What I am wondering is if it is merely "it is not recommended" or it
should be a bit stronger in order to save users from hurting
themselves.  If the possible breakage is minor, a casual mention
like the above in the log message may be OK.  On the other hand, if
it is major enough, we may even want to have a code that forbids
flipping the setting in the middle (which may mean that you would
have this recorded somewhere outside of the config file).

 Signed-off-by: Lyubomyr Shaydariv 
 lyubomyr-shayda...@users.noreply.github.com
 Reported-by: David Garcia Garzon
 Reviewed-by: Matthieu Moy matthieu@imag.fr
 ---
  contrib/mw-to-git/git-remote-mediawiki.perl | 10 +-
  1 file changed, 9 insertions(+), 1 deletion(-)

 diff --git a/contrib/mw-to-git/git-remote-mediawiki.perl 
 b/contrib/mw-to-git/git-remote-mediawiki.perl
 index 8dd74a9..f3624be 100755
 --- a/contrib/mw-to-git/git-remote-mediawiki.perl
 +++ b/contrib/mw-to-git/git-remote-mediawiki.perl
 @@ -63,6 +63,11 @@ chomp(@tracked_pages);
  my @tracked_categories = split(/[ \n]/, run_git("config --get-all remote.${remotename}.categories"));
  chomp(@tracked_categories);
  
 +# Use subdirectories for subpages
 +my $use_subpage_dirs = run_git("config --get --bool remote.${remotename}.subpageDirs");
 +chomp($use_subpage_dirs);
 +$use_subpage_dirs = ($use_subpage_dirs eq 'true');
 +
  # Import media files on pull
  my $import_media = run_git("config --get --bool remote.${remotename}.mediaimport");
  chomp($import_media);
 @@ -689,6 +694,9 @@ sub fe_escape_path {
  $path =~ s/\\/\\\\/g;
  $path =~ s/"/\\"/g;
  $path =~ s/\n/\\n/g;
 +if ($use_subpage_dirs) {
 +$path =~ s/%2F/\//g;
 +}
  return qq("${path}");
  }
  
 @@ -927,7 +935,7 @@ sub mw_import_revids {
   # If this is a revision of the media page for new version
   # of a file do one common commit for both file and media page.
   # Else do commit only for that page.
 - print {*STDERR} "${n}/", scalar(@{$revision_ids}), ": Revision #$rev->{revid} of $commit{title}\n";
 + print {*STDERR} "${n}/", scalar(@{$revision_ids}), ": Revision #$rev->{revid} of ", fe_escape_path($commit{title}), "\n";
   import_file_revision(\%commit, ($fetch_from == 1), $n_actual, 
 \%mediafile);
   }


Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Stefan Beller
On Thu, Aug 27, 2015 at 6:14 PM, Stefan Beller sbel...@google.com wrote:
 This makes use of the new task queue and the syncing feature of
 run-command to fetch a number of submodules at the same time.

 The output will look like it would have been run sequential,
 but faster.

And it breaks the tests t5526-fetch-submodules.sh as the output is done
on stderr only, instead of putting "Fetching submodule <submodule-path>"
to stdout. :(

I guess combining stdout and stderr is not a good strategy after all now.
Advantages:
* The order is preserved if everything is in one stream.
Disadvantages:
* It's a change, which may not be expected.
* It's harder to make it machine parsable, as that probably
   relied on having 2 channels.

So now I need to come up with a way to either buffer 2 channels at the same
time, or we need to redefine parallel submodule fetch output semantics a bit.


 Signed-off-by: Stefan Beller sbel...@google.com
 ---
  Documentation/fetch-options.txt |   7 +++
  builtin/fetch.c |   6 ++-
  builtin/pull.c  |   6 +++
  submodule.c | 100 
 +---
  submodule.h |   2 +-
  5 files changed, 102 insertions(+), 19 deletions(-)

 diff --git a/Documentation/fetch-options.txt b/Documentation/fetch-options.txt
 index 45583d8..e2a59c3 100644
 --- a/Documentation/fetch-options.txt
 +++ b/Documentation/fetch-options.txt
 @@ -100,6 +100,13 @@ ifndef::git-pull[]
 reference to a commit that isn't already in the local submodule
 clone.

 +-j::
 +--jobs=<n>::
 +   Number of threads to be used for fetching submodules. Each thread
 +   will fetch from different submodules, such that fetching many
 +   submodules will be faster. By default the number of CPUs will
 +   be used.
 +
  --no-recurse-submodules::
 Disable recursive fetching of submodules (this has the same effect as
 using the '--recurse-submodules=no' option).
 diff --git a/builtin/fetch.c b/builtin/fetch.c
 index ee1f1a9..636707e 100644
 --- a/builtin/fetch.c
 +++ b/builtin/fetch.c
 @@ -37,6 +37,7 @@ static int prune = -1; /* unspecified */
  static int all, append, dry_run, force, keep, multiple, update_head_ok, 
 verbosity;
  static int progress = -1, recurse_submodules = RECURSE_SUBMODULES_DEFAULT;
  static int tags = TAGS_DEFAULT, unshallow, update_shallow;
 +static int max_threads;
  static const char *depth;
  static const char *upload_pack;
  static struct strbuf default_rla = STRBUF_INIT;
 @@ -99,6 +100,8 @@ static struct option builtin_fetch_options[] = {
 N_(fetch all tags and associated objects), TAGS_SET),
 OPT_SET_INT('n', NULL, tags,
 N_(do not fetch all tags (--no-tags)), TAGS_UNSET),
 +   OPT_INTEGER('j', "jobs", &max_threads,
 +   N_("number of threads used for fetching")),
 OPT_BOOL('p', prune, prune,
  N_(prune remote-tracking branches no longer on remote)),
 { OPTION_CALLBACK, 0, recurse-submodules, NULL, N_(on-demand),
 @@ -1217,7 +1220,8 @@ int cmd_fetch(int argc, const char **argv, const char 
 *prefix)
 result = fetch_populated_submodules(options,
 submodule_prefix,
 recurse_submodules,
 -   verbosity < 0);
 +   verbosity < 0,
 +   max_threads);
 argv_array_clear(options);
 }

 diff --git a/builtin/pull.c b/builtin/pull.c
 index 722a83c..fbbda67 100644
 --- a/builtin/pull.c
 +++ b/builtin/pull.c
 @@ -94,6 +94,7 @@ static int opt_force;
  static char *opt_tags;
  static char *opt_prune;
  static char *opt_recurse_submodules;
 +static char *max_threads;
  static int opt_dry_run;
  static char *opt_keep;
  static char *opt_depth;
 @@ -177,6 +178,9 @@ static struct option pull_options[] = {
 N_(on-demand),
 N_(control recursive fetching of submodules),
 PARSE_OPT_OPTARG),
 +   OPT_PASSTHRU('j', "jobs", &max_threads, N_("n"),
 +   N_("number of threads used for fetching submodules"),
 +   PARSE_OPT_OPTARG),
 OPT_BOOL(0, dry-run, opt_dry_run,
 N_(dry run)),
 OPT_PASSTHRU('k', keep, opt_keep, NULL,
 @@ -524,6 +528,8 @@ static int run_fetch(const char *repo, const char 
 **refspecs)
 argv_array_push(&args, opt_prune);
 if (opt_recurse_submodules)
 argv_array_push(&args, opt_recurse_submodules);
 +   if (max_threads)
 +   argv_array_push(&args, max_threads);
 if (opt_dry_run)
 argv_array_push(&args, "--dry-run");
 if (opt_keep)
 diff --git a/submodule.c b/submodule.c
 index 9fcc86f..50266a8 100644
 --- a/submodule.c
 +++ b/submodule.c
 @@ 

[PATCH] lockfile: remove function hold_lock_file_for_append

2015-08-28 Thread Ralf Thielow
With 77b9b1d (add_to_alternates_file: don't add duplicate entries,
2015-08-10) the last caller of function hold_lock_file_for_append
has been removed, so we can remove the function as well.

Signed-off-by: Ralf Thielow ralf.thie...@gmail.com
---
This is the second bullet point in
http://git-blame.blogspot.de/p/leftover-bits.html

 lockfile.c | 38 --
 lockfile.h | 26 +++---
 2 files changed, 7 insertions(+), 57 deletions(-)

diff --git a/lockfile.c b/lockfile.c
index 637b8cf..80d056d 100644
--- a/lockfile.c
+++ b/lockfile.c
@@ -177,44 +177,6 @@ int hold_lock_file_for_update_timeout(struct lock_file 
*lk, const char *path,
return fd;
 }
 
-int hold_lock_file_for_append(struct lock_file *lk, const char *path, int 
flags)
-{
-   int fd, orig_fd;
-
-   fd = lock_file(lk, path, flags);
-   if (fd < 0) {
-   if (flags & LOCK_DIE_ON_ERROR)
-   unable_to_lock_die(path, errno);
-   return fd;
-   }
-
-   orig_fd = open(path, O_RDONLY);
-   if (orig_fd < 0) {
-   if (errno != ENOENT) {
-   int save_errno = errno;
-
-   if (flags & LOCK_DIE_ON_ERROR)
-   die("cannot open '%s' for copying", path);
-   rollback_lock_file(lk);
-   error("cannot open '%s' for copying", path);
-   errno = save_errno;
-   return -1;
-   }
-   } else if (copy_fd(orig_fd, fd)) {
-   int save_errno = errno;
-
-   if (flags & LOCK_DIE_ON_ERROR)
-   die("failed to prepare '%s' for appending", path);
-   close(orig_fd);
-   rollback_lock_file(lk);
-   errno = save_errno;
-   return -1;
-   } else {
-   close(orig_fd);
-   }
-   return fd;
-}
-
 char *get_locked_file_path(struct lock_file *lk)
 {
struct strbuf ret = STRBUF_INIT;
diff --git a/lockfile.h b/lockfile.h
index 8131fa3..3d30193 100644
--- a/lockfile.h
+++ b/lockfile.h
@@ -44,8 +44,7 @@
  *   throughout the life of the program (i.e. you cannot use an
  *   on-stack variable to hold this structure).
  *
- * * Attempts to create a lockfile by calling
- *   `hold_lock_file_for_update()` or `hold_lock_file_for_append()`.
+ * * Attempts to create a lockfile by calling `hold_lock_file_for_update()`.
  *
  * * Writes new content for the destination file by either:
  *
@@ -73,7 +72,7 @@
  * Even after the lockfile is committed or rolled back, the
  * `lock_file` object must not be freed or altered by the caller.
  * However, it may be reused; just pass it to another call of
- * `hold_lock_file_for_update()` or `hold_lock_file_for_append()`.
+ * `hold_lock_file_for_update()`.
  *
  * If the program exits before `commit_lock_file()`,
  * `commit_lock_file_to()`, or `rollback_lock_file()` is called, the
@@ -120,8 +119,7 @@ struct lock_file {
  * Flags
 * -----
  *
- * The following flags can be passed to `hold_lock_file_for_update()`
- * or `hold_lock_file_for_append()`.
+ * The following flags can be passed to `hold_lock_file_for_update()`.
  */
 
 /*
@@ -168,27 +166,17 @@ static inline int hold_lock_file_for_update(
 }
 
 /*
- * Like `hold_lock_file_for_update()`, but before returning copy the
- * existing contents of the file (if any) to the lockfile and position
- * its write pointer at the end of the file. The flags argument and
- * error handling are described above.
- */
-extern int hold_lock_file_for_append(struct lock_file *lk,
-const char *path, int flags);
-
-/*
  * Append an appropriate error message to `buf` following the failure
- * of `hold_lock_file_for_update()` or `hold_lock_file_for_append()`
- * to lock `path`. `err` should be the `errno` set by the failing
- * call.
+ * of `hold_lock_file_for_update()` to lock `path`. `err` should be the
+ * `errno` set by the failing call.
  */
 extern void unable_to_lock_message(const char *path, int err,
   struct strbuf *buf);
 
 /*
  * Emit an appropriate error message and `die()` following the failure
- * of `hold_lock_file_for_update()` or `hold_lock_file_for_append()`
- * to lock `path`. `err` should be the `errno` set by the failing
+ * of `hold_lock_file_for_update()` to lock `path`. `err` should be the
+ * `errno` set by the failing
  * call.
  */
 extern NORETURN void unable_to_lock_die(const char *path, int err);
-- 
2.5.0.614.g6f325f9

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 6/9] submodule: helper to run foreach in parallel

2015-08-28 Thread Stefan Beller
On Thu, Aug 27, 2015 at 6:14 PM, Stefan Beller <sbel...@google.com> wrote:
 Similar to `git submodule foreach` the new command `git submodule
 foreach_parallel` will run a command on each submodule.

 The commands are run in parallel up to the number of cores by default,
 or you can specify '-j 4' to run with just 4 threads, for example.

 One major difference to `git submodule foreach` is the handling of input
 and output to the commands. Because of the parallel nature of the execution
 it is not trivial how to schedule the std{in,out,err} channels for the
 submodule the command is run in. So in this patch there is no support for stdin.
 stdout will be piped to stderr. stderr will make use of the synchronized
 output feature of run_command.

 Signed-off-by: Stefan Beller <sbel...@google.com>
 ---
  builtin/submodule--helper.c  | 133 +++++++++++++++++++++++++++++++++++++++-
  git-submodule.sh             |  11 +++-
  t/t7407-submodule-foreach.sh |  11 ++++
  3 files changed, 153 insertions(+), 2 deletions(-)

 diff --git a/builtin/submodule--helper.c b/builtin/submodule--helper.c
 index d29499c..18b67f0 100644
 --- a/builtin/submodule--helper.c
 +++ b/builtin/submodule--helper.c
 @@ -8,6 +8,7 @@
   #include "submodule.h"
   #include "submodule-config.h"
   #include "string-list.h"
  +#include "thread-utils.h"
   #include "run-command.h"

  static const struct cache_entry **ce_entries;
 @@ -266,6 +267,127 @@ static int module_clone(int argc, const char **argv, 
 const char *prefix)
 return 0;
  }

 +#ifndef NO_PTHREADS
 +struct submodule_args {
 +   const char *name;
 +   const char *path;
 +   const char *sha1;
 +   const char *toplevel;
 +   const char *prefix;
 +   const char **cmd;
 +   pthread_mutex_t *sync;
 +};
 +
 +int run_cmd_submodule(struct task_queue *aq, void *task)
 +{
 +   int i;
 +   struct submodule_args *args = task;
 +   struct strbuf out = STRBUF_INIT;
 +   struct strbuf sb = STRBUF_INIT;
 +   struct child_process *cp = xmalloc(sizeof(*cp));
 +
 +   strbuf_addf(&out, N_("Entering %s\n"), relative_path(args->path, args->prefix, &sb));
 +
 +   child_process_init(cp);
 +   argv_array_pushv(&cp->args, args->cmd);
 +
 +   argv_array_pushf(&cp->env_array, "name=%s", args->name);
 +   argv_array_pushf(&cp->env_array, "path=%s", args->path);
 +   argv_array_pushf(&cp->env_array, "sha1=%s", args->sha1);
 +   argv_array_pushf(&cp->env_array, "toplevel=%s", args->toplevel);
 +
 +   for (i = 0; local_repo_env[i]; i++)
 +           argv_array_push(&cp->env_array, local_repo_env[i]);
 +
 +   cp->no_stdin = 1;
 +   cp->out = 0;
 +   cp->err = -1;
 +   cp->dir = args->path;
 +   cp->stdout_to_stderr = 1;
 +   cp->use_shell = 1;
 +   cp->sync_mutex = args->sync;
 +   cp->sync_buf = &out;
 +
 +   return run_command(cp);
 +}
 +
 +int module_foreach_parallel(int argc, const char **argv, const char *prefix)
 +{
 +   int i, recursive = 0, number_threads = 0, quiet = 0;
 +   static struct pathspec pathspec;
 +   struct strbuf sb = STRBUF_INIT;
 +   struct task_queue *aq;
 +   char **cmd;
 +   const char **nullargv = {NULL};
 +   pthread_mutex_t mutex;
 +
 +   struct option module_update_options[] = {
 +           OPT_STRING(0, "prefix", &alternative_path,
 +                  N_("path"),
 +                  N_("alternative anchor for relative paths")),
 +           OPT_STRING(0, "cmd", &cmd,
 +                  N_("string"),
 +                  N_("command to run")),
 +           OPT_BOOL('r', "recursive", &recursive,
 +                N_("Recurse into nested submodules")),
 +           OPT_INTEGER('j', "jobs", &number_threads,
 +                   N_("Recurse into nested submodules")),
 +           OPT__QUIET(&quiet, N_("Suppress output")),
 +           OPT_END()
 +   };
 +
 +   static const char * const git_submodule_helper_usage[] = {
 +           N_("git submodule--helper foreach [--prefix=<path>] [<path>...]"),
 +           NULL
 +   };
 +
 +   argc = parse_options(argc, argv, prefix, module_update_options,
 +                git_submodule_helper_usage, 0);
 +
 +   if (module_list_compute(0, nullargv, NULL, &pathspec) < 0)
 +           return 1;
 +
 +   gitmodules_config();
 +
 +   pthread_mutex_init(&mutex, NULL);
 +   aq = create_task_queue(number_threads);
 +
 +   for (i = 0; i < ce_used; i++) {
 +           const struct submodule *sub;
 +           const struct cache_entry *ce = ce_entries[i];
 +           struct submodule_args *args = malloc(sizeof(*args));
 +
 +           if (ce_stage(ce))
 +                   args->sha1 = xstrdup(sha1_to_hex(null_sha1));
 +           else
 +                   args->sha1 = xstrdup(sha1_to_hex(ce->sha1));
 +
 +           strbuf_reset(&sb);
 +           strbuf_addf(&sb, "%s/.git", ce->name);
 +           if (!file_exists(sb.buf)) {
 +

Re: Running interpret-trailers automatically on each commit?

2015-08-28 Thread Jeremy Morton
Yeah but it's kind of useless to me having it on each commit on a 
per-repo basis (and even then, only with hooks).


--
Best regards,
Jeremy Morton (Jez)

On 28/08/2015 18:06, Junio C Hamano wrote:

Jeremy Morton <ad...@game-point.net> writes:


I see that interpret-trailers has been added by default in git
2.5.0. However the documentation isn't that great and I can't tell
whether it gets run automatically when I do a git commit.  My guess
is that it doesn't - that you have to set up a hook to get it to run
each commit.


All correct, except that it happened in the 2.2 timeframe.

A new experimental feature is shipped, so that people can gain
experience with it and come up with the best practice in their
hooks, and then later we may fold the best practice into somewhere
deeper in the system.

We are still in the early "ship an experimental feature to let
people play with it" stage.

Thanks.




Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Jeff King
On Fri, Aug 28, 2015 at 11:27:04AM -0700, Junio C Hamano wrote:

  But for commands that show progress like git clone, git checkout,
  and git fetch, it does not work well at all.  They provide output
  that updates itself by putting a carriage return at the end of each
  chunk of output, like this:
 
   remote: Finding sources:  11% (18/155)   \r
   remote: Finding sources:  12% (19/155)   \r
 
  With multiple commands producing such output, they will overwrite each
  other's lines, producing a mixture that is confusing and unuseful.
 
 That example also illustrates why it is not useful to buffer all
 of these lines and show them once.

I think Jonathan's point is that you could pick _one_ active child to
show without buffering, while simultaneously buffering everybody else's
output. When that finishes, pick a new active child, show its buffer,
and then start showing its output in realtime. And so on.

So to an observer, it would look like a serial operation, but subsequent
operations after the first would magically go much faster (because
they'd been working and buffering in the background).

And that doesn't require any additional IPC magic (though I am not sure
how we get progress in the first place if the child stderr is a
pipe...).
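The scheme Peff describes (one foreground child shown live, everyone else buffered and replayed on promotion) can be modeled without any process machinery. The following is only an illustrative toy under invented names; none of this is git's run-command API:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Toy model: output events arrive interleaved from several children.
 * The designated "active" child's output is emitted immediately;
 * everyone else's is buffered.  When the active child finishes, the
 * next unfinished child is promoted and its backlog is flushed.
 */
#define NCHILD 3

struct event { int child; const char *text; }; /* NULL text == child done */

static char bufs[NCHILD][256];   /* per-child backlog */
static char out[1024];           /* what the user would see, in order */

static void schedule(const struct event *ev, int nev)
{
	int active = 0, done[NCHILD] = {0};
	for (int i = 0; i < nev; i++) {
		int c = ev[i].child;
		if (!ev[i].text) {
			done[c] = 1;
			/* if the active child finished, promote the next
			 * unfinished child and flush its backlog */
			while (active < NCHILD && done[active]) {
				active++;
				if (active < NCHILD)
					strcat(out, bufs[active]);
			}
			continue;
		}
		if (c == active)
			strcat(out, ev[i].text);     /* shown in real time */
		else
			strcat(bufs[c], ev[i].text); /* buffered for later */
	}
}
```

A real implementation would feed this loop from poll()ing the children's pipes; only the ordering logic is sketched here.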

-Peff


Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Jonathan Nieder
Junio C Hamano wrote:
 Jonathan Nieder <jrnie...@gmail.com> writes:

  remote: Finding sources:  11% (18/155)   \r
  remote: Finding sources:  12% (19/155)   \r

 With multiple commands producing such output, they will overwrite each
 other's lines, producing a mixture that is confusing and unuseful.

 That example also illustrates why it is not useful to buffer all
 of these lines and show them once.

I don't completely follow.  Are you referring to the wasted memory to
store the line that is going to be written and rewritten or some other
aspect?

Today (without parallelism), if I clone a repository with multiple
submodules and walk away from the terminal, then when I get back I see

Cloning into 'plugins/cookbook-plugin'...
remote: Counting objects: 36, done
remote: Finding sources: 100% (36/36)
remote: Total 1192 (delta 196), reused 1192 (delta 196)
Receiving objects: 100% (1192/1192), 239.46 KiB | 0 bytes/s, done.
Resolving deltas: 100% (196/196), done.
Checking connectivity... done.
Submodule path 'plugins/cookbook-plugin': checked out 
'b9d3ca8a65030071e28be19296ba867ab424fbbf'
Cloning into 'plugins/download-commands'...
remote: Counting objects: 37, done
remote: Finding sources: 100% (37/37)
remote: Total 448 (delta 46), reused 448 (delta 46)
Receiving objects: 100% (448/448), 96.13 KiB | 0 bytes/s, done.
Resolving deltas: 100% (46/46), done.
Checking connectivity... done.
Submodule path 'plugins/download-commands': checked out 
'99e61fb06a4505a9558c23a56213cb32ceaa9cca'
...

The output for each submodule is in one chunk and I can understand what
happened in each.

By contrast, with inter-mixing the speed of output tells you something
about whether the process is stalled while you stare at the screen and
wait for it to finish (good) but the result on the screen is
unintelligible when the process finishes (bad).

Showing interactive output for one task still provides the real-time
feedback about whether it is completely stuck and needs to be cancelled.
It is easier to make sense of even from the point of view of real-time
progress than the intermixed output.

What problem is the intermixed output meant to solve?

In other words, what is your objection about?  Perhaps there is a way to
both satisfy that objection and to have output clumped per task, but it
is hard to know without knowing what the objection is.

Hope that helps,
Jonathan


Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Jeff King
On Fri, Aug 28, 2015 at 11:50:50AM -0700, Jonathan Nieder wrote:

  But what I meant was: the child will only show progress if stderr is a
  tty, but here it is not.
 
 For clone / fetch, we can pass --progress explicitly.
 
 For some reason 'git checkout' doesn't support a --progress option.  I
 suppose it should. ;-)

Yeah, that will work for those tools, but I thought you could pass
arbitrary shell commands.  It would be nice if git sub-commands run
through those just magically worked, even though we don't have an
opportunity to change their command-line parameters.

-Peff


Re: [PATCH] show-ref: place angle brackets around variable in usage string

2015-08-28 Thread Junio C Hamano
Alex Henrie <alexhenri...@gmail.com> writes:

 Signed-off-by: Alex Henrie <alexhenri...@gmail.com>
 ---
  builtin/show-ref.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

 diff --git a/builtin/show-ref.c b/builtin/show-ref.c
 index dfbc314..d9c1633 100644
 --- a/builtin/show-ref.c
 +++ b/builtin/show-ref.c
 @@ -8,7 +8,7 @@
  
  static const char * const show_ref_usage[] = {
 	N_("git show-ref [-q | --quiet] [--verify] [--head] [-d | --dereference] [-s | --hash[=<n>]] [--abbrev[=<n>]] [--tags] [--heads] [--] [<pattern>...]"),
 -	N_("git show-ref --exclude-existing[=pattern] < ref-list"),
 +	N_("git show-ref --exclude-existing[=pattern] < <ref-list>"),
   NULL
  };

Isn't --exclude-existing[=pattern] also a placeholder?


Re: [PATCH v2] Mingw: verify both ends of the pipe () call

2015-08-28 Thread Junio C Hamano
Johannes Schindelin <johannes.schinde...@gmx.de> writes:

 From: Jose F. Morales <jfm...@gmail.com>

 The code to open and test the second end of the pipe clearly imitates
 the code for the first end. A little too closely, though... Let's fix
 the obvious copy-edit bug.

 Signed-off-by: Jose F. Morales <jfm...@gmail.com>
 Signed-off-by: Johannes Schindelin <johannes.schinde...@gmx.de>
 ---
  compat/mingw.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

 diff --git a/compat/mingw.c b/compat/mingw.c
 index 496e6f8..f74da23 100644
 --- a/compat/mingw.c
 +++ b/compat/mingw.c
 @@ -681,7 +681,7 @@ int pipe(int filedes[2])
   return -1;
   }
   filedes[1] = _open_osfhandle((int)h[1], O_NOINHERIT);
 -	if (filedes[0] < 0) {
 +	if (filedes[1] < 0) {
   close(filedes[0]);
   CloseHandle(h[1]);
   return -1;

 --
 https://github.com/git/git/pull/168

Thanks all.  Will queue directly on 'maint' with reviewed-by's and
acked-by's.



Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Junio C Hamano
Jonathan Nieder <jrnie...@gmail.com> writes:

 Junio C Hamano wrote:

 My preference is still (1) leave standard error output all connected
 to the same fd without multiplexing, and (2) line buffer standard
 output so that the output is at least readable as a text, in a
 similar way a log of an irc channel where everybody is talking at
 the same time.

 There is something nice about the immediacy of seeing output from all
 the subprocesses at the same time in that model.

 But for commands that show progress like git clone, git checkout,
 and git fetch, it does not work well at all.  They provide output
 that updates itself by putting a carriage return at the end of each
 chunk of output, like this:

  remote: Finding sources:  11% (18/155)   \r
  remote: Finding sources:  12% (19/155)   \r

 With multiple commands producing such output, they will overwrite each
 other's lines, producing a mixture that is confusing and unuseful.

That example also illustrates why it is not useful to buffer all
of these lines and show them once.

 Ideally what I as a user want to see is something like what prove
 writes, showing progress on the multiple tasks that are taking place
 at once:

  ===( 103;1  0/?  8/?  3/?  11/?  6/?  16/?  1/?  1/? )==

Tell me how that buffer all and show them once helps us to get
near that ideal.

 That would require more sophisticated inter-process communication than
 seems necessary for the first version of parallel git submodule
 update.

Exactly.  Why waste memory to buffer and stall the entire output
from other processes in the interim solution, then?




Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Stefan Beller
On Fri, Aug 28, 2015 at 11:27 AM, Junio C Hamano <gits...@pobox.com> wrote:
 Jonathan Nieder <jrnie...@gmail.com> writes:

 Junio C Hamano wrote:

 My preference is still (1) leave standard error output all connected
 to the same fd without multiplexing, and (2) line buffer standard
 output so that the output is at least readable as a text, in a
 similar way a log of an irc channel where everybody is talking at
 the same time.

 There is something nice about the immediacy of seeing output from all
 the subprocesses at the same time in that model.

 But for commands that show progress like git clone, git checkout,
 and git fetch, it does not work well at all.  They provide output
 that updates itself by putting a carriage return at the end of each
 chunk of output, like this:

  remote: Finding sources:  11% (18/155)   \r
  remote: Finding sources:  12% (19/155)   \r

 With multiple commands producing such output, they will overwrite each
 other's lines, producing a mixture that is confusing and unuseful.

 That example also illustrates why it is not useful to buffer all
 of these lines and show them once.

 Ideally what I as a user want to see is something like what prove
 writes, showing progress on the multiple tasks that are taking place
 at once:

  ===( 103;1  0/?  8/?  3/?  11/?  6/?  16/?  1/?  1/? )==

 Tell me how that buffer all and show them once helps us to get
 near that ideal.

It doesn't, but it looks better than the IRC-like "everybody speaks at
the same time" solution IMHO.

At the very least, the user may not even be interested in details similar
to what prove provides. The client can estimate its own total progress by
weighting the progress from all tasks.


 That would require more sophisticated inter-process communication than
 seems necessary for the first version of parallel git submodule
 update.

 Exactly.  Why waste memory to buffer and stall the entire output
 from other processes in the interim solution, then?




Re: [PATCH v4 2/4] path: optimize common dir checking

2015-08-28 Thread David Turner
On Fri, 2015-08-28 at 09:39 -0700, Junio C Hamano wrote:
  David Turner <dtur...@twopensource.com> writes:
 
  On Wed, 2015-08-26 at 18:10 -0400, David Turner wrote:
  On Wed, 2015-08-26 at 14:15 -0700, Junio C Hamano wrote:
+ * For example, consider the following set of strings:
+ * abc
+ * def
+ * definite
+ * definition
+ *
  + * The trie would look like:
+ * root: len = 0, value = (something), children a and d non-NULL.
   
   value = NULL, as there is no empty string registered in the trie?
  
  Indeed.
  
+ *a: len = 2, contents = bc
   
   value = NULL here, too (just showing I am following along, not
   just skimming)?
  
  Yep.
 
  No, wait. value should be non-NULL, since abc is in the string set. 
 
 True.  Here is what I came up with on top of your original.  
 
 
 
  path.c | 11 ++-
  1 file changed, 6 insertions(+), 5 deletions(-)
 
 diff --git a/path.c b/path.c
 index 4100ba6..ce0530b 100644
 --- a/path.c
 +++ b/path.c
 @@ -133,12 +133,13 @@ struct common_dir common_list[] = {
   * definition
   *
   * The trie would look like:
  - * root: len = 0, value = (something), children a and d non-NULL.
  - *    a: len = 2, contents = bc
  - *    d: len = 2, contents = ef, children i non-NULL, value = (something)
  + * root: len = 0, children a and d non-NULL, value = NULL.
  + *    a: len = 2, contents = bc, value = (data for "abc")
  + *    d: len = 2, contents = ef, children i non-NULL, value = (data for "def")
    *       i: len = 3, contents = nit, children e and i non-NULL, value = NULL
  - *          e: len = 0, children all NULL, value = (something)
  - *          i: len = 2, contents = on, children all NULL, value = (something)
  + *          e: len = 0, children all NULL, value = (data for "definite")
  + *          i: len = 2, contents = on, children all NULL,
  + *             value = (data for "definition")
   */
  struct trie {
   struct trie *children[256];

LGTM.



Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Stefan Beller
On Fri, Aug 28, 2015 at 11:35 AM, Jeff King <p...@peff.net> wrote:
 On Fri, Aug 28, 2015 at 11:27:04AM -0700, Junio C Hamano wrote:

  But for commands that show progress like git clone, git checkout,
  and git fetch, it does not work well at all.  They provide output
  that updates itself by putting a carriage return at the end of each
  chunk of output, like this:
 
   remote: Finding sources:  11% (18/155)   \r
   remote: Finding sources:  12% (19/155)   \r
 
  With multiple commands producing such output, they will overwrite each
  other's lines, producing a mixture that is confusing and unuseful.

  That example also illustrates why it is not useful to buffer all
  of these lines and show them once.

 I think Jonathan's point is that you could pick _one_ active child to
 show without buffering, while simultaneously buffering everybody else's
 output. When that finishes, pick a new active child, show its buffer,
 and then start showing its output in realtime. And so on.

or better yet, pick that child with the most progress (i.e. flush all finished
children and then pick the next active child), that would approximate
the progress in the output best, as it would reduce the hidden
buffered progress.


 So to an observer, it would look like a serial operation, but subsequent
 operations after the first would magically go much faster (because
 they'd been working and buffering in the background).

 And that doesn't require any additional IPC magic (though I am not sure
 how we get progress in the first place if the child stderr is a
 pipe...).

Moving the contents from the pipe to a strbuf buffer which we can grow
indefinitely (way larger than pipe limits, but the output of a git fetch
should be small enough for that).
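The grow-a-buffer-indefinitely approach looks roughly like this in plain POSIX C. This is only a sketch: git itself would use strbuf_read() for this, and `slurp_fd` is a made-up name for the example.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/*
 * Drain fd into a heap buffer that doubles on demand, so the writer
 * on the other end of the pipe never blocks on a full pipe buffer.
 * Returns the number of bytes read; *out receives the buffer.
 */
static size_t slurp_fd(int fd, char **out)
{
	size_t len = 0, alloc = 8192;
	char *buf = malloc(alloc);
	ssize_t n;

	while ((n = read(fd, buf + len, alloc - len)) > 0) {
		len += n;
		if (len == alloc) {
			alloc *= 2;              /* grow well past pipe capacity */
			buf = realloc(buf, alloc);
		}
	}
	*out = buf;
	return len;
}
```

Because the parent keeps reading to EOF, a child can produce far more than the kernel pipe buffer (typically 64 KiB on Linux) without stalling.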




 -Peff


Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Junio C Hamano
Jeff King <p...@peff.net> writes:

 I think Jonathan's point is that you could pick _one_ active child to
 show without buffering, while simultaneously buffering everybody else's
 output. When that finishes, pick a new active child, show its buffer,
 and then start showing its output in realtime. And so on.

 So to an observer, it would look like a serial operation, but subsequent
 operations after the first would magically go much faster (because
 they'd been working and buffering in the background).

OK, I can buy that explanation.

 And that doesn't require any additional IPC magic (though I am not sure
 how we get progress in the first place if the child stderr is a
 pipe...).

That is a good point ;-)


Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Jeff King
On Fri, Aug 28, 2015 at 11:41:17AM -0700, Stefan Beller wrote:

  So to an observer, it would look like a serial operation, but subsequent
  operations after the first would magically go much faster (because
  they'd been working and buffering in the background).
 
  And that doesn't require any additional IPC magic (though I am not sure
  how we get progress in the first place if the child stderr is a
  pipe...).
 
 Moving the contents from the pipe to a strbuf buffer which we can grow
 indefinitely
 (way larger than pipe limits, but the output of a git fetch should be
 small enough for that).

Right, clearly we can't rely on pipe buffers to be large enough here
(though we _may_ want to rely on tempfiles if we aren't sure that the
stdout is bounded in a reasonable way).

But what I meant was: the child will only show progress if stderr is a
tty, but here it is not.

I wonder if we need to set GIT_STDERR_IS_TTY=1 in the parent process,
and then respect it in the children (this is similar to what
GIT_PAGER_IN_USE does for stdout).
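GIT_STDERR_IS_TTY does not exist in git; it is only being proposed here. As a sketch, the override could be consulted like this, mirroring how GIT_PAGER_IN_USE works for stdout:

```c
#include <assert.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * Hypothetical helper: a parent that knows its own stderr is a tty
 * would set GIT_STDERR_IS_TTY=1 before spawning children whose
 * stderr is a pipe, and the children would trust it over isatty().
 */
static int stderr_is_tty(void)
{
	const char *v = getenv("GIT_STDERR_IS_TTY");
	if (v && *v)
		return atoi(v) != 0; /* parent's explicit claim wins */
	return isatty(2);            /* otherwise ask the OS */
}
```

Children would then gate progress output on stderr_is_tty() rather than a raw isatty(2) check.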

-Peff


Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Jonathan Nieder
Jeff King wrote:

 I think Jonathan's point is that you could pick _one_ active child to
 show without buffering, while simultaneously buffering everybody else's
 output.

Yep.  Thanks for interpreting.

[...]
 So to an observer, it would look like a serial operation, but subsequent
 operations after the first would magically go much faster (because
 they'd been working and buffering in the background).

Yes.

Thanks,
Jonathan


Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Jonathan Nieder
Jeff King wrote:

 Right, clearly we can't rely on pipe buffers to be large enough here
 (though we _may_ want to rely on tempfiles if we aren't sure that the
 stdout is bounded in a reasonable way).

 But what I meant was: the child will only show progress if stderr is a
 tty, but here it is not.

For clone / fetch, we can pass --progress explicitly.

For some reason 'git checkout' doesn't support a --progress option.  I
suppose it should. ;-)

Thanks,
Jonathan


Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Stefan Beller
On Fri, Aug 28, 2015 at 11:44 AM, Jeff King <p...@peff.net> wrote:
 On Fri, Aug 28, 2015 at 11:41:17AM -0700, Stefan Beller wrote:

  So to an observer, it would look like a serial operation, but subsequent
  operations after the first would magically go much faster (because
  they'd been working and buffering in the background).
 
  And that doesn't require any additional IPC magic (though I am not sure
  how we get progress in the first place if the child stderr is a
  pipe...).

 Moving the contents from the pipe to a strbuf buffer which we can grow
 indefinitely (way larger than pipe limits, but the output of a git fetch
 should be small enough for that).

 Right, clearly we can't rely on pipe buffers to be large enough here
 (though we _may_ want to rely on tempfiles if we aren't sure that the
 stdout is bounded in a reasonable way).

 But what I meant was: the child will only show progress if stderr is a
 tty, but here it is not.

Oh, I forgot about that.


 I wonder if we need to set GIT_STDERR_IS_TTY=1 in the parent process,
 and then respect it in the children (this is similar to what
 GIT_PAGER_IN_USE does for stdout).

The use of GIT_PAGER_IN_USE looks straightforward to me. I'll try to add
GIT_STDERR_IS_TTY then.



 -Peff


Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Stefan Beller
On Fri, Aug 28, 2015 at 11:53 AM, Jeff King <p...@peff.net> wrote:
 On Fri, Aug 28, 2015 at 11:50:50AM -0700, Jonathan Nieder wrote:

  But what I meant was: the child will only show progress if stderr is a
  tty, but here it is not.

 For clone / fetch, we can pass --progress explicitly.

 For some reason 'git checkout' doesn't support a --progress option.  I
 suppose it should. ;-)

 Yeah, that will work for those tools, but I thought you could pass
 arbitrary shell commands.  It would be nice if git sub-commands run
 through those just magically worked, even though we don't have an
 opportunity to change their command-line parameters.

Technically speaking this discusses the patch for fetch only and we'd
want to have that discussion at the previous patch. ;)
But as both of them use the same code, the sync feature of run-command,
which is in patch 5/9, we want to have git commands just work anyway.


 -Peff


A note from the maintainer

2015-08-28 Thread Junio C Hamano
Welcome to the Git development community.

This message is written by the maintainer and talks about how Git
project is managed, and how you can work with it.

* Mailing list and the community

The development is primarily done on the Git mailing list. Help
requests, feature proposals, bug reports and patches should be sent to
the list address git@vger.kernel.org.  You don't have to be
subscribed to send messages.  The convention on the list is to keep
everybody involved on Cc:, so it is unnecessary to say "Please Cc: me,
I am not subscribed".

Before sending patches, please read Documentation/SubmittingPatches
and Documentation/CodingGuidelines to familiarize yourself with the
project convention.

If you sent a patch and you did not hear any response from anybody for
several days, it could be that your patch was totally uninteresting,
but it also is possible that it was simply lost in the noise.  Please
do not hesitate to send a reminder message in such a case.  Messages
getting lost in the noise may be a sign that those who can evaluate
your patch don't have enough mental/time bandwidth to process them
right at the moment, and it often helps to wait until the list traffic
becomes calmer before sending such a reminder.

The list archive is available at a few public sites:

http://news.gmane.org/gmane.comp.version-control.git/
http://marc.theaimsgroup.com/?l=git
http://www.spinics.net/lists/git/

For those who prefer to read it over NNTP:

nntp://news.gmane.org/gmane.comp.version-control.git

When you point at a message in a mailing list archive, using
gmane is often the easiest to follow by readers, like this:

http://thread.gmane.org/gmane.comp.version-control.git/27/focus=217

as it also allows people who subscribe to the mailing list as gmane
newsgroup to jump to the article.

Some members of the development community can sometimes be found on
the #git and #git-devel IRC channels on Freenode.  Their logs are
available at:

http://colabti.org/irclogger/irclogger_log/git
http://colabti.org/irclogger/irclogger_log/git-devel

There is a volunteer-run newsletter to serve our community (Git Rev
News http://git.github.io/rev_news/rev_news.html).

Git is a member project of software freedom conservancy, a non-profit
organization (https://sfconservancy.org/).  To reach a committee of
liaisons to the conservancy, contact them at g...@sfconservancy.org.


* Reporting bugs

When you think git does not behave as you expect, please do not stop
your bug report with just "git does not work".  "I used git in this
way, but it did not work" is not much better, neither is "I used git
in this way, and X happened, which is broken".  It often is that git is
correct to cause X to happen in such a case, and it is your expectation
that is broken. People would not know what other result Y you expected
to see instead of X, if you left it unsaid.

Please remember to always state

 - what you wanted to achieve;

 - what you did (the version of git and the command sequence to reproduce
   the behavior);

 - what you saw happen (X above);

 - what you expected to see (Y above); and

 - how the last two are different.

See http://www.chiark.greenend.org.uk/~sgtatham/bugs.html for further
hints.
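
As a hedged illustration (not a required format), the factual pieces of
such a report can be gathered mechanically before writing the prose:

```sh
# Collect the facts a useful report starts from: the exact git
# version and the platform it ran on.
git --version
uname -srm
# Then state in prose: the exact command sequence you ran, the
# output you saw (X above), and the output you expected (Y above).
```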

If you think you found a security-sensitive issue and want to disclose
it to us without announcing it to wider public, please contact us at
our security mailing list git-secur...@googlegroups.com.


* Repositories, branches and documentation.

My public git.git repositories are at:

  git://git.kernel.org/pub/scm/git/git.git/
  https://kernel.googlesource.com/pub/scm/git/git
  git://repo.or.cz/alt-git.git/
  https://github.com/git/git/
  git://git.sourceforge.jp/gitroot/git-core/git.git/
  git://git-core.git.sourceforge.net/gitroot/git-core/git-core/

A few web interfaces are found at:

  http://git.kernel.org/cgit/git/git.git
  https://kernel.googlesource.com/pub/scm/git/git
  http://repo.or.cz/w/alt-git.git

Preformatted documentation from the tip of the master branch can be
found in:

  git://git.kernel.org/pub/scm/git/git-{htmldocs,manpages}.git/
  git://repo.or.cz/git-{htmldocs,manpages}.git/
  https://github.com/gitster/git-{htmldocs,manpages}.git/

Also GitHub shows the manual pages formatted in HTML (with a
formatting backend different from the one that is used to create the
above) at:

  http://git-scm.com/docs/git

There are four branches in git.git repository that track the source tree
of git: master, maint, next, and pu.

The master branch is meant to contain what are very well tested and
ready to be used in a production setting.  Every now and then, a
feature release is cut from the tip of this branch.  They used to be
named with three dotted decimal digits (e.g. 1.8.5), but recently we
switched the versioning scheme, and feature releases are now named
with three dotted decimal digits that end with ".0" (e.g. 1.9.0).

The last such release was 2.5.0 done on Jul 27th, 2015. You can expect
that 

What's cooking in git.git (Aug 2015, #05; Fri, 28)

2015-08-28 Thread Junio C Hamano
Here are the topics that have been cooking.  Commits prefixed with
'-' are only in 'pu' (proposed updates) while commits prefixed with
'+' are in 'next'.

You can find the changes described here in the integration branches
of the repositories listed at

http://git-blame.blogspot.com/p/git-public-repositories.html

--
[Graduated to master]

* as/docfix-reflog-expire-unreachable (2015-08-21) 1 commit
  (merged to 'next' on 2015-08-25 at eb75d55)
 + Documentation/config: fix inconsistent label on gc.*.reflogExpireUnreachable

 Docfix.


* cc/trailers-corner-case-fix (2015-08-21) 1 commit
  (merged to 'next' on 2015-08-25 at ac25d80)
 + trailer: ignore first line of message

 The "interpret-trailers" helper mistook a single-liner log message
 that has a colon as the end of an existing trailer block.


* dt/untracked-sparse (2015-08-19) 1 commit
  (merged to 'next' on 2015-08-25 at 2501a7e)
 + t7063: use --force-untracked-cache to speed up a bit
 (this branch is used by dt/untracked-subdir.)

 Test update.


* dt/untracked-subdir (2015-08-19) 2 commits
  (merged to 'next' on 2015-08-25 at ab4fd04)
 + untracked cache: fix entry invalidation
 + untracked-cache: fix subdirectory handling
 (this branch uses dt/untracked-sparse.)

 The experimental untracked-cache feature was buggy when paths with
 a few levels of subdirectories are involved.


* ep/http-configure-ssl-version (2015-08-17) 1 commit
  (merged to 'next' on 2015-08-19 at aab726b)
 + http: add support for specifying the SSL version

 A new configuration variable "http.sslVersion" can be used to specify
 what specific version of SSL/TLS to use to make a connection.
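
A hedged sketch of how such a setting might look in a config file (the
value string below is an assumption; check git-config(1) of your
version for the accepted names):

```
[http]
	sslVersion = tlsv1.2
```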


* jc/calloc-pathspec (2015-08-20) 1 commit
  (merged to 'next' on 2015-08-25 at 877490c)
 + ps_matched: xcalloc() takes nmemb and then element size

 Minor code cleanup.


* jv/send-email-selective-smtp-auth (2015-08-17) 1 commit
  (merged to 'next' on 2015-08-19 at 3f0c693)
 + send-email: provide whitelist of SMTP AUTH mechanisms

 git send-email learned a new option --smtp-auth to limit the SMTP
 AUTH mechanisms to be used to a subset of what the system library
 supports.


* po/po-readme (2015-08-17) 1 commit
  (merged to 'next' on 2015-08-19 at 1899e59)
 + po/README: Update directions for l10n contributors

 Doc updates for i18n.


* pt/am-builtin-abort-fix (2015-08-19) 1 commit
  (merged to 'next' on 2015-08-19 at 729e682)
 + am --skip/--abort: merge HEAD/ORIG_HEAD tree into index

 "git am" that was recently reimplemented in C had a performance
 regression in "git am --abort" that goes back to the version before
 an attempted (and failed) patch application.


* sg/help-group (2015-08-25) 1 commit
  (merged to 'next' on 2015-08-25 at 907e5a8)
 + generate-cmdlist: re-implement as shell script

 We rewrote one of the build scripts in Perl, but this reimplements
 it in Bourne shell.


* sg/t3020-typofix (2015-08-20) 1 commit
  (merged to 'next' on 2015-08-25 at 051d6c0)
 + t3020: fix typo in test description


* sg/wt-status-header-inclusion (2015-08-21) 1 commit
  (merged to 'next' on 2015-08-25 at fa5b2b2)
 + wt-status: move #include pathspec.h to the header


* ss/fix-config-fd-leak (2015-08-14) 1 commit
  (merged to 'next' on 2015-08-19 at 80d4880)
 + config: close config file handle in case of error

--
[New Topics]

* ah/pack-objects-usage-strings (2015-08-28) 1 commit
 - pack-objects: place angle brackets around placeholders in usage strings

 Usage string fix.

 Will merge to 'next'.


* ah/read-tree-usage-string (2015-08-28) 1 commit
 - read-tree: replace bracket set with parentheses to clarify usage

 Usage string fix.

 Will merge to 'next'.


* ah/reflog-typofix-in-error (2015-08-28) 1 commit
 - reflog: add missing single quote to error message

 Error string fix.

 Will merge to 'next'.


* ah/submodule-typofix-in-error (2015-08-28) 1 commit
 - git-submodule: remove extraneous space from error message

 Error string fix.

 Will merge to 'next'.


* br/svn-doc-include-paths-config (2015-08-26) 1 commit
 - git-svn doc: mention svn-remote.name.include-paths

 Will merge to 'next'.


* dt/commit-preserve-base-index-upon-opportunistic-cache-tree-update 
(2015-08-28) 1 commit
 - commit: don't rewrite shared index unnecessarily

 When re-priming the cache-tree opportunistically while committing
 the in-core index as-is, we mistakenly invalidated the in-core
 index too aggressively, causing the experimental split-index code
 to unnecessarily rewrite the on-disk index file(s).

 Will merge to 'next'.


* dt/refs-bisection (2015-08-28) 5 commits
 - bisect: make bisection refs per-worktree
 - refs: make refs/worktree/* per-worktree
 - SQUASH???
 - path: optimize common dir checking
 - refs: clean up common_list

 Move the refs used during a git bisect session to per-worktree
 hierarchy refs/worktree/* so that independent bisect sessions can
 be done in different worktrees.

 Will merge to 'next' after squashing 

[ANNOUNCE] Git v2.5.1

2015-08-28 Thread Junio C Hamano
The latest maintenance release Git v2.5.1 is now available at
the usual places.

The tarballs are found at:

https://www.kernel.org/pub/software/scm/git/

The following public repositories all have a copy of the 'v2.5.1'
tag and the 'maint' branch that the tag points at:

  url = https://kernel.googlesource.com/pub/scm/git/git
  url = git://repo.or.cz/alt-git.git
  url = git://git.sourceforge.jp/gitroot/git-core/git.git
  url = git://git-core.git.sourceforge.net/gitroot/git-core/git-core
  url = https://github.com/gitster/git

Note that code.google.com/ has gone into read-only mode and the
branches and tags there will not be updated.



Git v2.5.1 Release Notes


Fixes since v2.5


 * Running an aliased command from a subdirectory when the ".git" thing
   in the working tree is a gitfile pointing elsewhere did not work.

 * Often a fast-import stream builds a new commit on top of the
   previous commit it built, and it often unconditionally emits a
   "from" command to specify the first parent, which can be omitted in
   such a case.  This caused fast-import to forget the tree of the
   previous commit and then re-read it from scratch, which was
   inefficient.  Optimize for this common case.

 * The "rev-parse --parseopt" mode parsed the option specification
   and the argument hint in a strange way to allow '=' and other
   special characters in the option name while forbidding them from
   the argument hint.  This made it impossible to define an option
   like "--pair <key>=<value>" with a "pair=key=value" specification,
   which instead would have defined a "--pair=key <value>" option.
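
With the fix, an '=' inside the argument hint parses as intended; a
hedged sketch (command name and option invented):

```sh
# Spec lines after "--" take the form "<opt-spec>  <help>"; here the
# argument hint "key=value" itself contains '=', which the fix permits.
spec='usage: frotz [options]
--
pair=key=value  specify a key/value pair'
printf '%s\n' "$spec" | git rev-parse --parseopt -- --pair foo=bar
# on success this emits a shell-quoted "set -- --pair ... --" line,
# meant to be eval'ed by the calling script
```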

 * A rebase replays changes of the local branch on top of something
   else; as such, they are placed in stage #3 and referred to as
   "theirs", while the changes in the new base, typically a foreign
   work, are placed in stage #2 and referred to as "ours".  Clarify
   the "checkout --ours/--theirs" documentation.
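
The inversion is easy to see in a throwaway repository; a hedged
sketch (branch and file names invented):

```sh
# During a rebase, stage #3 ("theirs") holds the rebased branch's own
# change, and stage #2 ("ours") holds the new base's change.
set -e
cd "$(mktemp -d)"
git init -q .
git config user.email you@example.com && git config user.name You
echo base >file && git add file && git commit -qm base
base=$(git symbolic-ref --short HEAD)   # default branch name varies
git checkout -qb topic && echo topic >file && git commit -qam topic
git checkout -q "$base" && echo upstream >file && git commit -qam upstream
git checkout -q topic
if ! git rebase "$base" >/dev/null 2>&1; then
	# conflict: pick "theirs", i.e. the topic branch's version
	git checkout --theirs file
fi
cat file   # prints "topic", not "upstream"
```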

 * An experimental untracked cache feature used uname(2) in a
   slightly unportable way.

 * sparse checkout misbehaved for a path that is excluded from the
   checkout when switching between branches that differ at the path.

 * The low-level git send-pack did not honor 'user.signingkey'
   configuration variable when sending a signed-push.

 * An attempt to delete a ref by pushing into a repository whose HEAD
   symbolic reference points at an unborn branch that cannot be
   created due to ref D/F conflict (e.g. refs/heads/a/b exists, HEAD
   points at refs/heads/a) failed.

 * "git subtree" (in contrib/) depended on "git log" output to be
   stable, which was a no-no.  Apply a workaround to force a
   particular date format.

 * "git clone $URL" in recent releases of Git contains a regression in
   the code that invents a new repository name incorrectly based on
   the $URL.  This has been corrected.
   (merge db2e220 jk/guess-repo-name-regression-fix later to maint).

 * Running tests with the -x option to make them verbose had some
   unpleasant interactions with other features of the test suite.
   (merge 9b5fe78 jk/test-with-x later to maint).

 * "git pull" in recent releases of Git has a regression in the code
   that allows a custom path to the "--upload-pack=<program>" option.
   This has been corrected.

 * pipe() emulation used in Git for Windows looked at a wrong variable
   when checking for an error from an _open_osfhandle() call.

Also contains typofixes, documentation updates and trivial code
clean-ups.



Changes since v2.5.0 are as follows:

Charles Bailey (1):
  untracked: fix detection of uname(2) failure

David Aguilar (1):
  contrib/subtree: ignore log.date configuration

David Turner (1):
  unpack-trees: don't update files with CE_WT_REMOVE set

Eric Sunshine (5):
  Documentation/git: drop outdated Cogito reference
  Documentation/git-tools: improve discoverability of Git wiki
  Documentation/git-tools: fix item text formatting
  Documentation/git-tools: drop references to defunct tools
  Documentation/git-tools: retire manually-maintained list

Ilya Bobyr (1):
  rev-parse --parseopt: allow [*=?!] in argument hints

Jeff King (4):
  test-lib: turn off -x tracing during chain-lint check
  test-lib: disable trace when test is not verbose
  clone: add tests for output directory
  clone: use computed length in guess_dir_name

Jiang Xin (1):
  receive-pack: crash when checking with non-exist HEAD

Jose F. Morales (1):
  Mingw: verify both ends of the pipe () call

Junio C Hamano (5):
  builtin/send-pack.c: respect user.signingkey
  Git 2.4.8
  Start preparing for 2.5.1
  pull: pass upload_pack only when it was given
  Git 2.5.1

Karthik Nayak (1):
  Documentation/tag: remove double occurance of pattern

Matthieu Moy (1):
  pull.sh: quote $upload_pack when passing it to git-fetch

Mike 

Re: What's cooking in git.git (Aug 2015, #05; Fri, 28)

2015-08-28 Thread Eric Sunshine
On Fri, Aug 28, 2015 at 5:11 PM, Junio C Hamano gits...@pobox.com wrote:
 Here are the topics that have been cooking.  Commits prefixed with
 '-' are only in 'pu' (proposed updates) while commits prefixed with
 '+' are in 'next'.

 You can find the changes described here in the integration branches
 of the repositories listed at

 http://git-blame.blogspot.com/p/git-public-repositories.html

 --
 [New Topics]

Hmm, Duy's patch series to fix "git clone foo" when 'foo' is a linked
worktree[1] doesn't seem to have been picked up. Was it lost in the
shuffle?

[1]: http://thread.gmane.org/gmane.comp.version-control.git/273982/focus=276346
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Stefan Beller
On Fri, Aug 28, 2015 at 10:12 AM, Junio C Hamano gits...@pobox.com wrote:
 Jonathan Nieder jrnie...@gmail.com writes:

 Stefan Beller wrote:
 On Thu, Aug 27, 2015 at 6:14 PM, Stefan Beller sbel...@google.com wrote:

 This makes use of the new task queue and the syncing feature of
 run-command to fetch a number of submodules at the same time.

 The output will look like it would have been run sequentially,
 but faster.

 And it breaks the tests t5526-fetch-submodules.sh as the output is done
 on stderr only, instead of putting "Fetching submodule submodule-path"
 to stdout. :(

 I guess combining stdout and stderr is not a good strategy after all now.

 IMHO the "Fetching submodule submodule-path" output always should have
 gone to stderr.  It is not output that scripts would be relying on ---
 it is just progress output.

 So a preliminary patch doing that (and updating tests) would make sense
 to me.

 Sounds good.

 I personally do not think the "we still do all the output from a
 single process while blocking output from others" buffering
 implemented in this n-th round (by the way, please use [PATCH v$n
 N/M]) is worth doing, though.  It does not make the output machine
 parseable, because the reader does not get any signal in what order
 output of these subprocesses arrive anyway.

 The payload does not even have "here is the beginning of output from
 the process that handled the submodule X" to delimit them.

Oh it does, but it is not handled by the buffering code but by the
application code; so for git-fetch we would have "Fetching submodule
path", while for "git submodule foreach" we would have "Entering
submodule path".


 My preference is still (1) leave standard error output all connected
 to the same fd without multiplexing, and (2) line buffer standard
 output so that the output is at least readable as a text, in a
 similar way a log of an irc channel where everybody is talking at
 the same time.

In case of fetch, passing the quiet flag becomes mandatory then,
because the per-repo progress is put to stderr, which
also deletes previous output to have the nice counters.


Re: [PATCH] stash: Add stash.showFlag config variable

2015-08-28 Thread Junio C Hamano
Namhyung Kim namhy...@gmail.com writes:

 Perhaps a pair of new booleans
 
  - stash.showStat (defaults to true but you can turn it off)
  - stash.showPatch (defaults to false but you can turn it on)
 
 or something along that line might be sufficient and more palatable.

 Hmm.. I agree with you, but I don't know what we should do if both of
 the options were off.  Just run 'git diff' with no option is ok to you?

If the user does not want stat or patch, then not running anything
would be more appropriate, don't you think?


Re: [PATCH 7/9] fetch: fetch submodules in parallel

2015-08-28 Thread Jonathan Nieder
Junio C Hamano wrote:

 My preference is still (1) leave standard error output all connected
 to the same fd without multiplexing, and (2) line buffer standard
 output so that the output is at least readable as a text, in a
 similar way a log of an irc channel where everybody is talking at
 the same time.

There is something nice about the immediacy of seeing output from all
the subprocesses at the same time in that model.

But for commands that show progress, like "git clone", "git checkout",
and "git fetch", it does not work well at all.  They provide output
that updates itself by putting a carriage return at the end of each
chunk of output, like this:

 remote: Finding sources:  11% (18/155)   \r
 remote: Finding sources:  12% (19/155)   \r

With multiple commands producing such output, they will overwrite each
other's lines, producing a mixture that is confusing and unuseful.

Even with --no-progress, there is a similar sense of confusion in the
intermixed output.  Error messages are hard to find.  This is a
common complaint about the current "repo sync -j" implementation.

Ideally what I as a user want to see is something like what prove
writes, showing progress on the multiple tasks that are taking place
at once:

 ===( 103;1  0/?  8/?  3/?  11/?  6/?  16/?  1/?  1/? )==

That would require more sophisticated inter-process communication than
seems necessary for the first version of parallel git submodule
update.  For the first version that people use in the wild, showing
output from one of the tasks at a time, simulating a sped-up
sequential implementation, seems useful to me.

In the degenerate case where only one task is running, it reduces to
the current output.  The output is familiar and easy to use.  I quite
like the approach.

My two cents,
Jonathan


Re: http.c (curl_easy_setopt and CURLAUTH_ANY)

2015-08-28 Thread brian m. carlson
On Fri, Aug 28, 2015 at 04:07:36PM +1000, Stephen Kazakoff wrote:
 Hi,
 
 When I'm behind a proxy (with BASIC authentication), I'm unable to
 perform a git clone.
 
 I managed to fix this by editing http.c and recompiling. The change
 I'd like to propose is to line 452.
 
 
 From:
 
 curl_easy_setopt(result, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
 
 To:
 
 curl_easy_setopt(result, CURLOPT_PROXYAUTH, CURLAUTH_BASIC | CURLAUTH_NTLM);

Assuming it's supported upstream, I suspect this would break people who
are using GSSAPI (or Digest) authentication for their proxy.  This would
be a logical thing to do where Kerberos is used.

It might be worth checking exactly which bits cause problems for you;
perhaps your proxy might be misconfigured to suggest a type that it
doesn't support.

 I did however find the CURL documentation
 (https://secure.php.net/manual/en/function.curl-setopt.php) slightly
 conflicting. On one hand, CURLAUTH_ANY is effectively the same as
 passing CURLAUTH_BASIC | CURLAUTH_NTLM. But the documentation for
 CURLOPT_PROXYAUTH says that only CURLAUTH_BASIC and
 CURLAUTH_NTLM are currently supported. By that, I'm assuming
 CURLAUTH_ANY is not supported.

This looks like the documentation for PHP.  The libcurl documentation[0]
doesn't mention a limitation.

[0] http://curl.haxx.se/libcurl/c/CURLOPT_PROXYAUTH.html
-- 
brian m. carlson / brian with sandals: Houston, Texas, US
+1 832 623 2791 | http://www.crustytoothpaste.net/~bmc | My opinion only
OpenPGP: RSA v4 4096b: 88AC E9B2 9196 305B A994 7552 F1BA 225C 0223 B187

