Re: [PATCH v2 2/2] send-email: introduce sendemail.smtpsslcertpath

2013-07-06 Thread Junio C Hamano
John Keeping j...@keeping.me.uk writes:

 I'd rather have '$smtp_ssl_cert_path ne ""' in the first if condition
 (instead of the '-d $smtp_ssl_cert_path') ...

I agree.  The signal for no certs should be an explicit nonsense
value like an empty string, not just a string that does not name an
expected filesystem object.  Otherwise people can misspell paths and
disable the validation by accident.

 Perhaps a complete solution could allow CA files as well.

Yes, that would be a good idea.  Care to roll into a fixup! patch
against [2/2]?

	if (defined $smtp_ssl_cert_path) {
		if ($smtp_ssl_cert_path eq "") {
			return (SSL_verify_mode => SSL_VERIFY_NONE);
		} elsif (-f $smtp_ssl_cert_path) {
			return (SSL_verify_mode => SSL_VERIFY_PEER,
				SSL_ca_file => $smtp_ssl_cert_path);
		} else {
			return (SSL_verify_mode => SSL_VERIFY_PEER,
				SSL_ca_path => $smtp_ssl_cert_path);
		}
	} else {
		return (SSL_verify_mode => SSL_VERIFY_PEER);
	}

Two things that worry me a bit are:

 (1) At the end-user UI level, it may look nicer to accept some form
     of --no-option-name to say "I have been using SSL against my
     server with a handrolled cert, and I want to keep using the
     verify-none option"; --ssl-cert-path="" looks somewhat ugly.
     The same goes for the [sendemail] ssl_cert_path = "" config.

 (2) How loudly does the new code barf when no configuration is done
     (i.e. we just pass SSL_VERIFY_PEER and let the system default
     CA path take effect) and the server cert does not validate?
     The warning that triggered this thread, if we had the
     configuration mechanism we have been designing together, would
     have been a good reminder for the user to use it, but would we
     give a similar (less noisy is fine, as long as it is clear)
     diagnostic message?



Re: [PATCH 0/2] allow git-svn fetching to work using serf

2013-07-06 Thread David Rothenberger
On 7/5/2013 8:41 PM, Kyle McKay wrote:
 This patch allows git-svn to fetch successfully using the
 serf library when given an https?: url to fetch from.

Thanks, Kyle. I confirm this is working for my problem cases as
well.

 Daniel Shahaf has suggested also setting
 servers:global:http-bulk-updates=on.

I have a patch that does this, but since turning on bulk updates has
a possible performance penalty, I prefer your approach. 

Please let me know if anyone wants to see the patch.

-- 
David Rothenberger    daver...@acm.org

What to do in case of an alien attack:

1)   Hide beneath the seat of your plane and look away.
2)   Avoid eye contact.
3) If there are no eyes, avoid all contact.

-- The Firesign Theatre, _Everything you know is Wrong_



Re: [PATCH v8 3/7] git-remote-mediawiki: New git bin-wrapper for developement

2013-07-06 Thread Benoît Person
On 5 July 2013 09:04, Matthieu Moy matthieu@grenoble-inp.fr wrote:
 benoit.per...@ensimag.fr writes:

 --- a/contrib/mw-to-git/Makefile
 +++ b/contrib/mw-to-git/Makefile
 @@ -2,6 +2,12 @@
  # Copyright (C) 2013
  # Matthieu Moy matthieu@imag.fr
  #
 +# To build and test:
 +#
 +#   make:
 +# bin-wrapper/git mw preview Some_page.mw
 +# bin-wrapper/git clone mediawiki::http://example.com/wiki/
 +#

 The colon after "make" and the indentation look weird. Shouldn't this be

 +# To build and test:
 +#
 +#   make
 +#   bin-wrapper/git mw preview Some_page.mw
 +#   bin-wrapper/git clone mediawiki::http://example.com/wiki/
 +#

 ?

oops, yes definitely.


Re: [PATCH v2 2/2] send-email: introduce sendemail.smtpsslcertpath

2013-07-06 Thread John Keeping
On Fri, Jul 05, 2013 at 11:25:36PM -0700, Junio C Hamano wrote:
 John Keeping j...@keeping.me.uk writes:
 
  I'd rather have '$smtp_ssl_cert_path ne ""' in the first if condition
  (instead of the '-d $smtp_ssl_cert_path') ...
 
 I agree.  The signal for no certs should be an explicit nonsense
 value like an empty string, not just a string that does not name an
 expected filesystem object.  Otherwise people can misspell paths and
 disable the validation by accident.
 
  Perhaps a complete solution could allow CA files as well.
 
 Yes, that would be a good idea.  Care to roll into a fixup! patch
 against [2/2]?

Here's a patch that should do that.  However, when testing this I
couldn't get the SSL_verify_mode warning to disappear and
git-send-email kept connecting to my untrusted server - this was using
the SSL code path not the TLS upgrade one.

I think this is caused by the SSL_verify_mode argument not getting all
the way down to the configure_SSL function in IO::Socket::SSL but I
can't see what the code in git-send-email is doing wrong.  Can any Perl
experts point out what's going wrong?

Also, I tried Brian's IO::Socket::SSL->import(qw(SSL_VERIFY_PEER
SSL_VERIFY_NONE)); but that produced a warning message about the uses
of SSL_VERIFY_PEER and SSL_VERIFY_NONE following that statement, so I
ended up qualifying each use in the code below.
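
(A minimal standalone check like the following sketch can help isolate
this, independent of git-send-email; the host, port and CA path are
placeholders, not values from the patch:)

    #!/usr/bin/perl
    # Sketch: does the server certificate verify at all with plain
    # IO::Socket::SSL?  Helps tell a plumbing bug in git-send-email
    # apart from a certificate/CA-path problem.
    use strict;
    use warnings;
    use IO::Socket::SSL;

    my $sock = IO::Socket::SSL->new(
        PeerHost        => 'smtp.example.org',
        PeerPort        => 465,
        SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_PEER(),
        SSL_ca_path     => '/etc/ssl/certs',
    ) or die "connect/verify failed: " . IO::Socket::SSL::errstr() . "\n";
    print "certificate verified; cipher: ", $sock->get_cipher, "\n";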

-- >8 --
diff --git a/Documentation/git-send-email.txt b/Documentation/git-send-email.txt
index 605f263..b56c215 100644
--- a/Documentation/git-send-email.txt
+++ b/Documentation/git-send-email.txt
@@ -198,6 +198,10 @@ must be used for each option.
 --smtp-ssl::
Legacy alias for '--smtp-encryption ssl'.
 
+--smtp-ssl-verify::
+--no-smtp-ssl-verify::
+   Enable SSL certificate verification.  Defaults to on.
+
 --smtp-ssl-cert-path::
Path to ca-certificates.  Defaults to `/etc/ssl/certs`, or
'sendemail.smtpsslcertpath'.
diff --git a/git-send-email.perl b/git-send-email.perl
index 3b80340..cbe85aa 100755
--- a/git-send-email.perl
+++ b/git-send-email.perl
@@ -69,8 +69,10 @@ git send-email [options] file | directory | rev-list 
options 
 --smtp-pass str  * Password for SMTP-AUTH; not necessary.
 --smtp-encryption   str  * tls or ssl; anything else disables.
 --smtp-ssl * Deprecated. Use '--smtp-encryption ssl'.
+--[no-]smtp-ssl-verify * Enable SSL certificate verification.
+ Default on.
 --smtp-ssl-cert-pathstr  * Path to ca-certificates.  Defaults to
- /etc/ssl/certs.
+ a platform-specific value.
 --smtp-domain   str  * The domain name sent to HELO/EHLO 
handshake
 --smtp-debug0|1  * Disable, enable Net::SMTP debug.
 
@@ -197,7 +199,7 @@ my ($thread, $chain_reply_to, $suppress_from, 
$signed_off_by_cc);
 my ($to_cmd, $cc_cmd);
 my ($smtp_server, $smtp_server_port, @smtp_server_options);
 my ($smtp_authuser, $smtp_encryption);
-my ($smtp_ssl_cert_path);
+my ($smtp_ssl_verify, $smtp_ssl_cert_path);
 my ($identity, $aliasfiletype, @alias_files, $smtp_domain);
 my ($validate, $confirm);
 my (@suppress_cc);
@@ -214,6 +216,7 @@ my %config_bool_settings = (
     "suppressfrom" => [\$suppress_from, undef],
     "signedoffbycc" => [\$signed_off_by_cc, undef],
     "signedoffcc" => [\$signed_off_by_cc, undef],  # Deprecated
+    "smtpsslverify" => [\$smtp_ssl_verify, 1],
     "validate" => [\$validate, 1],
     "multiedit" => [\$multiedit, undef],
     "annotate" => [\$annotate, undef]
@@ -306,6 +309,8 @@ my $rc = GetOptions("h" => \$help,
		    "smtp-pass:s" => \$smtp_authpass,
		    "smtp-ssl" => sub { $smtp_encryption = 'ssl' },
		    "smtp-encryption=s" => \$smtp_encryption,
+		    "smtp-ssl-cert-path=s" => \$smtp_ssl_cert_path,
+		    "smtp-ssl-verify!" => \$smtp_ssl_verify,
		    "smtp-debug:i" => \$debug_net_smtp,
		    "smtp-domain:s" => \$smtp_domain,
		    "identity=s" => \$identity,
@@ -1096,19 +1101,18 @@ sub smtp_auth_maybe {
 # Helper to come up with SSL/TLS certification validation params
 # and warn when doing no verification
 sub ssl_verify_params {
-	use IO::Socket::SSL qw(SSL_VERIFY_PEER SSL_VERIFY_NONE);
-
-	if (!defined $smtp_ssl_cert_path) {
-		$smtp_ssl_cert_path = "/etc/ssl/certs";
+	if ($smtp_ssl_verify == 0) {
+		return (SSL_verify_mode => IO::Socket::SSL->SSL_VERIFY_NONE);
 	}
 
-	if (-d $smtp_ssl_cert_path) {
-		return (SSL_verify_mode => SSL_VERIFY_PEER,
-			SSL_ca_path => $smtp_ssl_cert_path);
+	if (! defined $smtp_ssl_cert_path) {
+		return (SSL_verify_mode => IO::Socket::SSL->SSL_VERIFY_PEER);
+	} elsif (-f $smtp_ssl_cert_path) {
+		return (SSL_verify_mode => IO::Socket::SSL->SSL_VERIFY_PEER,
+			SSL_ca_file =>

[PATCH] diffcore-pickaxe: simplify has_changes and contains

2013-07-06 Thread René Scharfe
Halve the number of callsites of contains() to two using temporary
variables, simplifying the code.  While at it, get rid of the
diff_options parameter, which became unused with 8fa4b09f.

Signed-off-by: René Scharfe rene.scha...@lsrfire.ath.cx
---
 diffcore-pickaxe.c | 11 ---
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/diffcore-pickaxe.c b/diffcore-pickaxe.c
index c97ac9b..401eb72 100644
--- a/diffcore-pickaxe.c
+++ b/diffcore-pickaxe.c
@@ -131,8 +131,7 @@ static void diffcore_pickaxe_grep(struct diff_options *o)
return;
 }
 
-static unsigned int contains(mmfile_t *mf, struct diff_options *o,
-regex_t *regexp, kwset_t kws)
+static unsigned int contains(mmfile_t *mf, regex_t *regexp, kwset_t kws)
 {
unsigned int cnt;
unsigned long sz;
@@ -176,11 +175,9 @@ static int has_changes(mmfile_t *one, mmfile_t *two,
   struct diff_options *o,
   regex_t *regexp, kwset_t kws)
 {
-   if (!one)
-   return contains(two, o, regexp, kws) != 0;
-   if (!two)
-   return contains(one, o, regexp, kws) != 0;
-   return contains(one, o, regexp, kws) != contains(two, o, regexp, kws);
+   unsigned int one_contains = one ? contains(one, regexp, kws) : 0;
+   unsigned int two_contains = two ? contains(two, regexp, kws) : 0;
+   return one_contains != two_contains;
 }
 
 static int pickaxe_match(struct diff_filepair *p, struct diff_options *o,
-- 
1.8.3.2



Re: [PATCH v2 1/2] send-email: squelch warning from Net::SMTP::SSL

2013-07-06 Thread Torsten Bögershausen
On 2013-07-05 14.05, Ramkumar Ramachandra wrote:
 Due to a recent change in the Net::SMTP::SSL module, send-email emits
 the following ugly warning every time an email is sent via SSL:
 
 ***
  Using the default of SSL_verify_mode of SSL_VERIFY_NONE for client
  is deprecated! Please set SSL_verify_mode to SSL_VERIFY_PEER
  together with SSL_ca_file|SSL_ca_path for verification.
  If you really don't want to verify the certificate and keep the
  connection open to Man-In-The-Middle attacks please set
  SSL_verify_mode explicitly to SSL_VERIFY_NONE in your application.
 ***
 
 Fix this by explicitly specifying SSL_verify_mode => SSL_VERIFY_NONE in
 Net::SMTP::SSL->start_SSL().
 
 Helped-by: brian m. carlson sand...@crustytoothpaste.net
 Signed-off-by: Ramkumar Ramachandra artag...@gmail.com
 ---
  git-send-email.perl | 4 +++-
  1 file changed, 3 insertions(+), 1 deletion(-)
 
 diff --git a/git-send-email.perl b/git-send-email.perl
 index ecbf56f..758100d 100755
 --- a/git-send-email.perl
 +++ b/git-send-email.perl
 @@ -1193,10 +1193,12 @@ X-Mailer: git-send-email $gitversion
  		Debug => $debug_net_smtp);
  	if ($smtp_encryption eq 'tls' && $smtp) {
  		require Net::SMTP::SSL;
 +		use IO::Socket::SSL qw(SSL_VERIFY_NONE);
  		$smtp->command('STARTTLS');
  		$smtp->response();
  		if ($smtp->code == 220) {
 -			$smtp = Net::SMTP::SSL->start_SSL($smtp)
 +			$smtp = Net::SMTP::SSL->start_SSL($smtp,
 +				SSL_verify_mode => SSL_VERIFY_NONE)
  				or die "STARTTLS failed! "
  					.$smtp->message;
   $smtp_encryption = '';
   # Send EHLO again to receive fresh
 
Hm, this doesn't work on my system, and t9001 fails:

"SSL_VERIFY_PEER" is not exported by the IO::Socket::SSL module
 "SSL_VERIFY_NONE" is not exported by the IO::Socket::SSL module
Can't continue after import errors at 
/Users/tb/projects/git/git.pu/git-send-email line 1090
/Torsten





Re: [PATCH v2 1/2] send-email: squelch warning from Net::SMTP::SSL

2013-07-06 Thread brian m. carlson
On Sat, Jul 06, 2013 at 04:28:00PM +0200, Torsten Bögershausen wrote:
 On 2013-07-05 14.05, Ramkumar Ramachandra wrote:
  Due to a recent change in the Net::SMTP::SSL module, send-email emits
  the following ugly warning every time an email is sent via SSL:
  
  ***
   Using the default of SSL_verify_mode of SSL_VERIFY_NONE for client
   is deprecated! Please set SSL_verify_mode to SSL_VERIFY_PEER
   together with SSL_ca_file|SSL_ca_path for verification.
   If you really don't want to verify the certificate and keep the
   connection open to Man-In-The-Middle attacks please set
   SSL_verify_mode explicitly to SSL_VERIFY_NONE in your application.
  ***
  
  Fix this by explicitly specifying SSL_verify_mode => SSL_VERIFY_NONE in
  Net::SMTP::SSL->start_SSL().
  
  Helped-by: brian m. carlson sand...@crustytoothpaste.net
  Signed-off-by: Ramkumar Ramachandra artag...@gmail.com
  ---
   git-send-email.perl | 4 +++-
   1 file changed, 3 insertions(+), 1 deletion(-)
  
  diff --git a/git-send-email.perl b/git-send-email.perl
  index ecbf56f..758100d 100755
  --- a/git-send-email.perl
  +++ b/git-send-email.perl
  @@ -1193,10 +1193,12 @@ X-Mailer: git-send-email $gitversion
   		Debug => $debug_net_smtp);
   	if ($smtp_encryption eq 'tls' && $smtp) {
   		require Net::SMTP::SSL;
  +		use IO::Socket::SSL qw(SSL_VERIFY_NONE);
   		$smtp->command('STARTTLS');
   		$smtp->response();
   		if ($smtp->code == 220) {
  -			$smtp = Net::SMTP::SSL->start_SSL($smtp)
  +			$smtp = Net::SMTP::SSL->start_SSL($smtp,
  +				SSL_verify_mode => SSL_VERIFY_NONE)
   				or die "STARTTLS failed! "
   					.$smtp->message;
  $smtp_encryption = '';
  # Send EHLO again to receive fresh
  
 Hm, this doesn't work on my system, and t9001 fails:
 
  "SSL_VERIFY_PEER" is not exported by the IO::Socket::SSL module
   "SSL_VERIFY_NONE" is not exported by the IO::Socket::SSL module
 Can't continue after import errors at 
 /Users/tb/projects/git/git.pu/git-send-email line 1090

What version of IO::Socket::SSL do you have, and what source did you get
it from?

perl -MIO::Socket::SSL -e 'print "$IO::Socket::SSL::VERSION\n";'

This should be available in 1.31 or later.  It might end up that we need
to adjust the use/require statement to require 1.31.
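
(Something along these lines might do for that check; just a sketch,
not tested against older IO::Socket::SSL releases:)

    # Sketch: insist on a new-enough IO::Socket::SSL before relying on
    # the SSL_VERIFY_* exports; 1.31 is the assumed minimum from above.
    my $can_verify = eval {
        require IO::Socket::SSL;
        IO::Socket::SSL->VERSION(1.31);
        IO::Socket::SSL->import(qw(SSL_VERIFY_PEER SSL_VERIFY_NONE));
        1;
    };
    if ($can_verify) {
        print "IO::Socket::SSL $IO::Socket::SSL::VERSION looks usable\n";
    } else {
        print "IO::Socket::SSL too old or missing: $@";
    }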

-- 
brian m. carlson / brian with sandals: Houston, Texas, US
+1 832 623 2791 | http://www.crustytoothpaste.net/~bmc | My opinion only
OpenPGP: RSA v4 4096b: 88AC E9B2 9196 305B A994 7552 F1BA 225C 0223 B187




Re: [PATCH] git stash: Avoid data loss when saving a stash

2013-07-06 Thread Petr Baudis
  Hi!

  (tl;dr - I disagree but this issue is perhaps not so important
in practice)

On Sun, Jun 30, 2013 at 12:14:26PM -0700, Junio C Hamano wrote:
 I do not agree with your `git reset --hard` at all.  With the
 command, the user demands no matter what, I want get rid of any
 funny state in my working tree so that I can start my work from that
 specified commit (default to HEAD).

  Yeah, but this normally concerns only tracked files; `git reset
--hard` does not imply `git clean`. I'm worried when a tool normally
behaves in a way that follows an apparent rule but its behavior is
defined in such a way that in a corner case this rule is violated (but
it's ok since it's a - non-obvious - implication of the specification).

 Imagine that this is what you did to arrive at that funny state:
 
   $ git rm foo ;# "foo" used to be tracked and in HEAD
   $ cp /somewhere/else/foo foo
   $ cp /somewhere/else/bar bar ;# "bar" is not in HEAD
   $ cp /somewhere/else/bar baz ;# "baz" is in HEAD
 ... do various other things ...
 
 and then "git reset --hard".  At that point, "foo" and "bar" are not
 tracked and completely unrelated to the project.  "baz" is tracked
 and has unrelated contents from that of HEAD.
 
 In order to satisfy your desire to go back to the state of HEAD with
 minimal collateral damage, we need to get rid of the updated "foo"
 and "baz" and replace them with those from HEAD.  We do not have to
 touch "bar" so we leave it as-is.

  Perhaps we misunderstood each other here. I certainly don't care to
keep local changes as a whole - a command behaving like that wouldn't
be very useful for me; for me, the crucial distinction is between
tracked and untracked files. Therefore, from my viewpoint it's fine
to overwrite baz, but not to overwrite foo.

 And the "killed" case is just like "foo" and "baz".  If the state
 you want to go back to with --hard has a directory (a file) where
 your working tree's funny state has a file (a directory), the local
 cruft needs to go away to satisfy your request.
 
 I do not mind if you are proposing a different and new kind of reset
 that fails if it has to overwrite any local changes (be it tracked
 or untracked), but that is not reset --hard.  It is something else.

  Hmm, I suppose the assumption I would prefer is that the only command
that will destroy (currently) untracked data without warning is `git
clean`; even though (unlike in case of git stash) the current reset
--hard behavior wouldn't surprise me, I suspect it can be a bad surprise
for many Git users when they hit this situation; but since I didn't
notice any actual complaint yet, I don't care enough to press this
further for now anyway. :-)

-- 
Petr Pasky Baudis
For every complex problem there is an answer that is clear,
simple, and wrong.  -- H. L. Mencken


Re: [PATCH v2 1/2] send-email: squelch warning from Net::SMTP::SSL

2013-07-06 Thread Torsten Bögershausen
On 2013-07-06 16.32, brian m. carlson wrote:
 perl -MIO::Socket::SSL -e 'print "$IO::Socket::SSL::VERSION\n";'

Mac OS X, 10.6:
(I think this is the perl we use for git:)
/usr/bin/perl -MIO::Socket::SSL -e 'print "$IO::Socket::SSL::VERSION\n";'
1.22

(And this is in my path:)
 which perl
/opt/local/bin/perl

perl -MIO::Socket::SSL -e 'print "$IO::Socket::SSL::VERSION\n";'
Can't locate IO/Socket/SSL.pm in @INC (@INC contains: /sw/lib/perl5 
/sw/lib/perl5/darwin /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level 
/opt/local/lib/perl5/site_perl/5.8.9 /opt/local/lib/perl5/site_perl 
/opt/local/lib/perl5/vendor_perl/5.8.9/darwin-2level 
/opt/local/lib/perl5/vendor_perl/5.8.9 /opt/local/lib/perl5/vendor_perl 
/opt/local/lib/perl5/5.8.9/darwin-2level /opt/local/lib/perl5/5.8.9 .).

/Torsten


[PATCH] lockfile: fix buffer overflow in path handling

2013-07-06 Thread Michael Haggerty
The path of the file to be locked is held in lock_file::filename,
which is a fixed-length buffer of length PATH_MAX.  This buffer is
also (temporarily) used to hold the path of the lock file, which is
the path of the file being locked plus .lock.  Because of this, the
path of the file being locked must be less than (PATH_MAX - 5)
characters long (5 chars are needed for .lock and one character for
the NUL terminator).

On entry into lock_file(), the path length was only verified to be
less than PATH_MAX characters, not less than (PATH_MAX - 5)
characters.

When and if resolve_symlink() is called, then that function is
correctly told to treat the buffer as (PATH_MAX - 5) characters long.
This part is correct.  However:

* If LOCK_NODEREF was specified, then resolve_symlink() is never
  called.

* If resolve_symlink() is called but the path is not a symlink, then
  the length check is never applied.

So it is possible for a path with length (PATH_MAX - 5 <= len <
PATH_MAX) to make it through the checks.  When ".lock" is strcat()ted
to such a path, the lock_file::filename buffer is overflowed.

Fix the problem by adding a check when entering lock_file() that the
original path is less than (PATH_MAX - 5) characters.

Signed-off-by: Michael Haggerty mhag...@alum.mit.edu
---
This patch applies to maint.  I discovered it when working on
something else.  From a brief check of history, it looks like this
code has always been vulnerable to buffer overflows in one way or
another.

I haven't tried to evaluate whether this problem could be exploited
remotely.  Even if so, the ramifications are likely to be limited to
denial-of-service, because the data that are written past the end of
the buffer are not under the control of the user; they would be one of
the strings "lock\0", "ock\0", "ck\0", "k\0", or "\0", depending on
the length of the path to be locked.

I'm thinking of rewriting this code to use strbufs to prevent similar
problems in the future, but that is a more extensive change that
wouldn't be appropriate for maint.
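
(Purely for illustration, a rough sketch of that direction; the struct
and function names here are made up and this is not the existing
lockfile API, and symlink resolution is omitted:)

    #include "cache.h"

    /* Sketch: keep the lock path in a growable strbuf, so no PATH_MAX
     * arithmetic is needed when appending ".lock". */
    struct lock_file_sb {
            struct strbuf filename;   /* initialize with STRBUF_INIT */
            int fd;
    };

    static int lock_file_sb(struct lock_file_sb *lk, const char *path)
    {
            strbuf_reset(&lk->filename);
            strbuf_addstr(&lk->filename, path);     /* grows as needed */
            strbuf_addstr(&lk->filename, ".lock");
            lk->fd = open(lk->filename.buf, O_RDWR | O_CREAT | O_EXCL, 0666);
            return lk->fd < 0 ? -1 : 0;
    }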

 lockfile.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/lockfile.c b/lockfile.c
index c6fb77b..3ac49cb 100644
--- a/lockfile.c
+++ b/lockfile.c
@@ -124,13 +124,13 @@ static char *resolve_symlink(char *p, size_t s)
 
 static int lock_file(struct lock_file *lk, const char *path, int flags)
 {
-	if (strlen(path) >= sizeof(lk->filename))
-		return -1;
-	strcpy(lk->filename, path);
 	/*
 	 * subtract 5 from size to make sure there's room for adding
 	 * ".lock" for the lock file name
 	 */
+	if (strlen(path) >= sizeof(lk->filename)-5)
+		return -1;
+	strcpy(lk->filename, path);
 	if (!(flags & LOCK_NODEREF))
 		resolve_symlink(lk->filename, sizeof(lk->filename)-5);
 	strcat(lk->filename, ".lock");
-- 
1.8.3.2



Re: [PATCH 1/2] Git.pm: add new temp_is_locked function

2013-07-06 Thread Jonathan Nieder
Hi,

Kyle McKay wrote:

 The temp_is_locked function can be used to determine whether
 or not a given name previously passed to temp_acquire is
 currently locked.
[...]
 +=item temp_is_locked ( NAME )
 +
 +Returns true if the file mapped to CNAME is currently locked.
 +
 +If true is returned, an attempt to Ctemp_acquire() the same
 CNAME will
 +throw an error.

Looks like this was line-wrapped somewhere in transit.

More importantly, it's not obvious from the above description what
this function will do.  What is a typical value of NAME?  Is it a
filename, an arbitrary string, a reference, or something else?  Is
this a way of checking if a file is flocked?  What is a typical way to
use this function?

Looking more closely, it looks like this is factoring out the idiom
for checking if a name is already in use from the _temp_cache
function.  Would it make sense for _temp_cache to call this helper?

When is a caller expected to call this function?  What guarantees can
the caller rely on?  After a call to temp_acquire(NAME), will
temp_is_locked(NAME) always return true until temp_release(NAME) is
called?  Does this only work within the context of a single process or
can locks persist beyond a process lifetime?  Do locks ever need to be
broken?

I didn't spend a lot of time trying to find the answers to these
questions because I want to make sure that people using Git.pm in the
future similarly do not have to spend a lot of time.  So hopefully a
documentation change could fix this.

Thanks and hope that helps,
Jonathan


Re: [PATCH 2/2] git-svn: allow git-svn fetching to work using serf

2013-07-06 Thread Jonathan Nieder
(cc-ing Eric Wong, who wrote this code)
Hi,

Kyle McKay wrote:

 Temp file with moniker 'svn_delta' already in use at Git.pm line 1250
 Temp file with moniker 'git_blob' already in use at Git.pm line 1250

 David Rothenberger daver...@acm.org has determined the cause to
 be that ra_serf does not drive the delta editor in a depth-first
 manner [...]. Instead, the calls come in this order:
[...]
 --- a/perl/Git/SVN/Fetcher.pm
 +++ b/perl/Git/SVN/Fetcher.pm
 @@ -315,11 +315,13 @@ sub change_file_prop {
 sub apply_textdelta {
   my ($self, $fb, $exp) = @_;
 	return undef if $self->is_path_ignored($fb->{path});
 -	my $fh = $::_repository->temp_acquire('svn_delta');
 +	my $suffix = 0;
 +	++$suffix while $::_repository->temp_is_locked("svn_delta_${$}_$suffix");
 +	my $fh = $::_repository->temp_acquire("svn_delta_${$}_$suffix");
 	# $fh gets auto-closed() by SVN::TxDelta::apply(),
 	# (but $base does not,) so dup() it for reading in close_file
 	open my $dup, '<&', $fh or croak $!;
 -	my $base = $::_repository->temp_acquire('git_blob');
 +	my $base = $::_repository->temp_acquire("git_blob_${$}_$suffix");

Thanks for your work tracking this down.

I'm a bit confused.  Are you saying that apply_textdelta gets called
multiple times in a row without an intervening close_file?

Puzzled,
Jonathan


Re: [PATCH 0/2] allow git-svn fetching to work using serf

2013-07-06 Thread Jonathan Nieder
David Rothenberger wrote:
 On 7/5/2013 8:41 PM, Kyle McKay wrote:

 Daniel Shahaf has suggested also setting
 servers:global:http-bulk-updates=on.

 I have a patch that does this, but since turning on bulk updates has
 a possible performance penalty, I prefer your approach. 

I assume that's because http-bulk-updates defeats caching.  If so,
makes sense.

Please forgive my ignorance: is there a bug filed about ra_serf's
misbehavior here?  Is it eventually going to be fixed and this is
just a workaround, or is the growth in temp file use something we'd
live with permanently?

Thanks,
Jonathan


Re: What's cooking in git.git (Jul 2013, #02; Fri, 5)

2013-07-06 Thread Jonathan Nieder
Hi,

Junio C Hamano wrote:

 We are in the middle of the 5th week now in the 11-week release cycle
 for 1.8.4 (http://tinyurl.com/gitCal), and quite a few topics have
 graduated to 'master'.  I'd expect the rest of the week to be slow.

I'd like this or the next release to be 2.0, so the common user
trouble with git push pushing more branches than they intended can
be over.  Am I crazy?  If not, what can I do to help make it happen?

Thanks,
Jonathan


Re: [PATCH] documentation: add git transport security notice

2013-07-06 Thread Jonathan Nieder
Hi,

Fraser Tweedale wrote:

 --- a/Documentation/urls.txt
 +++ b/Documentation/urls.txt
 @@ -11,6 +11,9 @@ and ftps can be used for fetching and rsync can be used for 
 fetching
  and pushing, but these are inefficient and deprecated; do not use
  them).
  
 +The git transport does not do any authentication and should be used
 +with caution on unsecured networks.

How about the something like the following?  I'm starting to think it
would make more sense to add a SECURITY section to git-clone(1),
though.

-- >8 --
Subject: doc: clarify git:// transport security notice

The recently added warning about the git protocol's lack of
authentication does not make it clear that the protocol lacks both
client and server authentication.  The lack of non-IP-based client
auth is obvious (when does the user enter her credentials?), while the
lack of authentication of the server is less so, so emphasize it.

Put the warning in context by making an analogy to HTTP's security
properties as compared to HTTPS.

Signed-off-by: Jonathan Nieder jrnie...@gmail.com
---
Thanks,
Jonathan

diff --git a/Documentation/urls.txt b/Documentation/urls.txt
index 9ccb246..bd0058f 100644
--- a/Documentation/urls.txt
+++ b/Documentation/urls.txt
@@ -11,8 +11,8 @@ and ftps can be used for fetching and rsync can be used for 
fetching
 and pushing, but these are inefficient and deprecated; do not use
 them).
 
-The native transport (i.e. git:// URL) does no authentication and
-should be used with caution on unsecured networks.
+Like HTTP, the native protocol (used for git:// URLs) does no server
+authentication and should be used with caution on unsecured networks.
 
 The following syntaxes may be used with them:
 


Re: [PATCH] test-lib.sh - cygwin does not have usable FIFOs

2013-07-06 Thread Jonathan Nieder
Mark Levedahl wrote:

 Do not use FIFOs on cygwin, they do not work. Cygwin includes
 coreutils, so has mkfifo, and that command does something. However,
 the resultant named pipe is known (on the Cygwin mailing list at
 least) to not work correctly.

Hm.  How would you recommend going about writing a script that takes
output from a command, transforms it, and then feeds it back into
that command's input?  Are sockets a more reliable way to do this kind
of IPC on Cygwin?

See reinit_git and try_dump from t9010-svn-fe.sh for context.

Thanks,
Jonathan


Re: [PATCH v3] gitweb: allow extra breadcrumbs to prefix the trail

2013-07-06 Thread Jonathan Nieder
Tony Finch wrote:

 Reviewed-by: Jonathan Nieder jrnie...@gmail.com

Yep, fwiw this version looks perfect to me. :)

Thanks.


Re: [PATCH 0/2] allow git-svn fetching to work using serf

2013-07-06 Thread Kyle McKay

On Jul 6, 2013, at 17:28, Jonathan Nieder wrote:

David Rothenberger wrote:

On 7/5/2013 8:41 PM, Kyle McKay wrote:



Daniel Shahaf has suggested also setting
servers:global:http-bulk-updates=on.


I have a patch that does this, but since turning on bulk updates has
a possible performance penalty, I prefer your approach.


I assume that's because http-bulk-updates defeats caching.  If so,
makes sense.

Please forgive my ignorance: is there a bug filed about ra_serf's
misbehavior here?  Is it eventually going to be fixed and this is
just a workaround, or is the growth in temp file use something we'd
live with permanently?


Apparently it will not be fixed:

Begin forwarded message:

From: David Rothenberger daver...@acm.org
Date: July 5, 2013 16:14:12 PDT
To: git@vger.kernel.org
Subject: Re: git-svn Temp file with moniker 'svn_delta' already in  
use and skelta mode


I traced git-svn and discovered that the error is due to a known
problem in the SVN APIs. ra_serf does not drive the delta editor in
a depth-first manner as required by the API [1]. Instead, the calls
come in this order:

1. open_root
2. open_directory
3. add_file
4. apply_textdelta
5. add_file
6. apply_textdelta

This is a known issue [2] and one that the Subversion folks have
elected not to fix [3].

[1]
http://subversion.apache.org/docs/api/latest/structsvn__delta__editor__t.html#details
[2] http://subversion.tigris.org/issues/show_bug.cgi?id=2932
[3] http://subversion.tigris.org/issues/show_bug.cgi?id=3831


The summary of [3], which is marked RESOLVED,FIXED, is "Add errata /
release note noise around ra_serf's editor drive violations".


Kyle


Re: [PATCH 0/2] allow git-svn fetching to work using serf

2013-07-06 Thread Jonathan Nieder
Kyle McKay wrote:
 On Jul 6, 2013, at 17:28, Jonathan Nieder wrote:
 David Rothenberger wrote:
 On 7/5/2013 8:41 PM, Kyle McKay wrote:

 Daniel Shahaf has suggested also setting
 servers:global:http-bulk-updates=on.

 I have a patch that does this, but since turning on bulk updates has
 a possible performance penalty, I prefer your approach.

 I assume that's because http-bulk-updates defeats caching.  If so,
 makes sense.

 Please forgive my ignorance: is there a bug filed about ra_serf's
 misbehavior here?  Is it eventually going to be fixed and this is
 just a workaround, or is the growth in temp file use something we'd
 live with permanently?
[...]

 Begin forwarded message:
[...]
 [2] http://subversion.tigris.org/issues/show_bug.cgi?id=2932

Ah, thanks for the context.

It's still not clear to me how we know that ra_serf driving the editor
in a non depth-first manner is the problem here.  Has that explanation
been confirmed somehow?

For example, does the workaround mentioned by danielsh work?  Does
using ra_neon instead of ra_serf avoid trouble as well?  Is there a
simple explanation of why violating the depth-first constraint would
lead to multiple blob (i.e., file, not directory) deltas being opened
in a row without an intervening close?

Jonathan


Re: [PATCH 2/2] git-svn: allow git-svn fetching to work using serf

2013-07-06 Thread Kyle McKay

On Jul 6, 2013, at 17:24, Jonathan Nieder wrote:

(cc-ing Eric Wong, who wrote this code)
Hi,

Kyle McKay wrote:


Temp file with moniker 'svn_delta' already in use at Git.pm line 1250
Temp file with moniker 'git_blob' already in use at Git.pm line 1250

David Rothenberger daver...@acm.org has determined the cause to
be that ra_serf does not drive the delta editor in a depth-first
manner [...]. Instead, the calls come in this order:

[...]

--- a/perl/Git/SVN/Fetcher.pm
+++ b/perl/Git/SVN/Fetcher.pm
@@ -315,11 +315,13 @@ sub change_file_prop {
sub apply_textdelta {
my ($self, $fb, $exp) = @_;
 	return undef if $self->is_path_ignored($fb->{path});
-	my $fh = $::_repository->temp_acquire('svn_delta');
+	my $suffix = 0;
+	++$suffix while $::_repository->temp_is_locked("svn_delta_${$}_$suffix");
+	my $fh = $::_repository->temp_acquire("svn_delta_${$}_$suffix");
 	# $fh gets auto-closed() by SVN::TxDelta::apply(),
 	# (but $base does not,) so dup() it for reading in close_file
 	open my $dup, '<&', $fh or croak $!;
-	my $base = $::_repository->temp_acquire('git_blob');
+	my $base = $::_repository->temp_acquire("git_blob_${$}_$suffix");


Thanks for your work tracking this down.

I'm a bit confused.  Are you saying that apply_textdelta gets called
multiple times in a row without an intervening close_file?


Unless bulk updates are disabled when using the serf access method  
(the only one available with svn 1.8) for https?: urls,  
apply_textdelta does indeed get called multiple times in a row without  
an intervening temp_release.


Two temp files are created for each apply_textdelta ('svn_delta...'  
and 'git_blob...').  In my tests it seems that most of the time the  
two temp files are enough, but occasionally as many as six will be  
open at the same time.


I suspect this maximum number is related to the maximum number of  
simultaneous connections the serf access method will use which  
defaults to 4.  Therefore I would expect to see as many as 8 temp  
files (4 each for 'svn_delta...' and 'git_blob...'), but I have only  
been able to trigger creation of 6 temp files so far.


Kyle


Re: Possible error on the git-svn man page

2013-07-06 Thread Jonathan Nieder
Hi,

Szuba, Marek (IKP) wrote:

 On the git-svn(1) man page, the third example in the Basic Examples
[...]
 $ git checkout -b master FETCH_HEAD
 fatal: Cannot update paths and switch to branch 'master' at the same time.
 Did you intend to checkout 'FETCH_HEAD' which can not be resolved as commit?

What branch are you on when you run git fetch?  What does git log
FETCH_HEAD say?  Is this repository public, which would let people on
this list try it out?

Thanks,
Jonathan


Re: [PATCH 2/2] git-svn: allow git-svn fetching to work using serf

2013-07-06 Thread Jonathan Nieder
Kyle McKay wrote:

 Unless bulk updates are disabled when using the serf access method
 (the only one available with svn 1.8) for https?: urls,
 apply_textdelta does indeed get called multiple times in a row
 without an intervening temp_release.

You mean "Unless bulk updates are enabled" and "without an intervening
close_file", right?

Unlike the non-depth-first thing, that sounds basically broken ---
what would be stopping subversion from calling the editor's close
method when done with each file?  I can't see much reason unless it is
calling apply_textdelta multiple times in parallel --- is it doing
that, and if so is git-svn able to cope with that?

This sounds like something that should be fixed in ra_serf.

But if the number of overlapping open text nodes is bounded by a low
number, the workaround of using multiple temp files sounds ok as a way
of dealing with unfixed versions of Subversion.

Jonathan


Re: [PATCH 0/2] allow git-svn fetching to work using serf

2013-07-06 Thread Kyle McKay

On Jul 6, 2013, at 18:37, Jonathan Nieder wrote:

Kyle McKay wrote:

On Jul 6, 2013, at 17:28, Jonathan Nieder wrote:

David Rothenberger wrote:

On 7/5/2013 8:41 PM, Kyle McKay wrote:



Daniel Shahaf has suggested also setting
servers:global:http-bulk-updates=on.


I have a patch that does this, but since turning on bulk updates  
has

a possible performance penalty, I prefer your approach.


I assume that's because http-bulk-updates defeats caching.  If so,
makes sense.

Please forgive my ignorance: is there a bug filed about ra_serf's
misbehavior here?  Is it eventually going to be fixed and this is
just a workaround, or is the growth in temp file use something we'd
live with permanently?

[...]


Begin forwarded message:

[...]

[2] http://subversion.tigris.org/issues/show_bug.cgi?id=2932


Ah, thanks for the context.

It's still not clear to me how we know that ra_serf driving the editor
in a non depth-first manner is the problem here.  Has that explanation
been confirmed somehow?

For example, does the workaround mentioned by danielsh work?  Does
using ra_neon instead of ra_serf avoid trouble as well?  Is there a
simple explanation of why violating the depth-first constraint would
lead to multiple blob (i.e., file, not directory) deltas being opened
in a row without an intervening close?


Using ra_neon seems to eliminate the problem. Using ra_neon has always  
been the default until svn 1.8 which drops ra_neon support entirely  
and always uses ra_serf for https?: urls.


The workaround mentioned by danielsh won't work if the server has  
configured SVNAllowBulkUpdates Off because that will force use of  
skelta mode no matter what the client does.  However, since ra_neon  
only ever has a single connection to the server it probably doesn't  
matter.
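
(For reference, the client-side knob danielsh mentioned lives in the
Subversion runtime "servers" file; judging from the
servers:global:http-bulk-updates=on notation used earlier in the
thread, it would look roughly like this in ~/.subversion/servers,
though the server can still override it:)

    [global]
    # request one bulk-update report instead of per-item requests
    # (skelta mode); ineffective if the server sets SVNAllowBulkUpdates Off
    http-bulk-updates = on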


Since ra_serf makes multiple connections to the server (hard-coded to  
4 prior to svn 1.8, defaults to 4 in svn 1.8 but can be set to between  
1 and 8) it makes sense there would be multiple active calls to  
apply_textdelta if processing is done as results are received on the  
multiple connections.


Kyle


Re: [PATCH 2/2] git-svn: allow git-svn fetching to work using serf

2013-07-06 Thread Kyle McKay

On Jul 6, 2013, at 19:23, Jonathan Nieder wrote:

Kyle McKay wrote:


Unless bulk updates are disabled when using the serf access method
(the only one available with svn 1.8) for https?: urls,
apply_textdelta does indeed get called multiple times in a row
without an intervening temp_release.


You mean Unless bulk updates are enabled and without an intervening
close_file, right?


The problem seems to be skelta mode although it may just be the fact  
that ra_serf has multiple connections outstanding and since ra_neon  
only ever has one it can't happen over ra_neon.


If the server disables bulk updates (SVNAllowBulkUpdates Off) all  
clients are forced to use skelta mode, even ra_neon clients.



This sounds like something that should be fixed in ra_serf.


Yes, but apparently it will not be.


But if the number of overlapping open text nodes is bounded by a low
number, the workaround of using multiple temp files sounds ok as a way
of dealing with unfixed versions of Subversion.


I believe it will never exceed twice ('svn_delta...' and  
'git_blob...') the maximum number of serf connections allowed.  Four  
by default (hard-coded prior to svn 1.8).  Limited to between 1 and 8  
on svn 1.8.  Actually it looks like from my testing that it won't ever  
exceed twice the (max number of serf connections - 1).


Kyle


[PATCH v2 1/2] Git.pm: add new temp_is_locked function

2013-07-06 Thread Kyle J. McKay
From: Kyle J. McKay mack...@gmail.com

The temp_is_locked function can be used to determine whether
or not a given name previously passed to temp_acquire is
currently locked.

Signed-off-by: Kyle J. McKay mack...@gmail.com
---
 perl/Git.pm | 39 +--
 1 file changed, 33 insertions(+), 6 deletions(-)

diff --git a/perl/Git.pm b/perl/Git.pm
index 7a252ef..74d9a73 100644
--- a/perl/Git.pm
+++ b/perl/Git.pm
@@ -61,7 +61,7 @@ require Exporter;
 remote_refs prompt
 get_tz_offset
 credential credential_read credential_write
-temp_acquire temp_release temp_reset temp_path);
+temp_acquire temp_is_locked temp_release temp_reset temp_path);
 
 
 =head1 DESCRIPTION
@@ -1206,6 +1206,35 @@ sub temp_acquire {
$temp_fd;
 }
 
+=item temp_is_locked ( NAME )
+
+Returns true if the internal lock created by a previous C<temp_acquire()>
+call with C<NAME> is still in effect.
+
+When temp_acquire is called on a C<NAME>, it internally locks the temporary
+file mapped to C<NAME>.  That lock will not be released until C<temp_release()>
+is called with either the original C<NAME> or the L<File::Handle> that was
+returned from the original call to temp_acquire.
+
+Subsequent attempts to call C<temp_acquire()> with the same C<NAME> will fail
+unless there has been an intervening C<temp_release()> call for that C<NAME>
+(or its corresponding L<File::Handle> that was returned by the original
+C<temp_acquire()> call).
+
+If true is returned by C<temp_is_locked()> for a C<NAME>, an attempt to
+C<temp_acquire()> the same C<NAME> will cause an error unless
+C<temp_release> is first called on that C<NAME> (or its corresponding
+L<File::Handle> that was returned by the original C<temp_acquire()> call).
+
+=cut
+
+sub temp_is_locked {
+   my ($self, $name) = _maybe_self(@_);
+   my $temp_fd = \$TEMP_FILEMAP{$name};
+
+	defined $$temp_fd && $$temp_fd->opened && $TEMP_FILES{$$temp_fd}{locked};
+}
+
 =item temp_release ( NAME )
 
 =item temp_release ( FILEHANDLE )
@@ -1247,11 +1276,9 @@ sub _temp_cache {
_verify_require();
 
my $temp_fd = \$TEMP_FILEMAP{$name};
-	if (defined $$temp_fd and $$temp_fd->opened) {
-		if ($TEMP_FILES{$$temp_fd}{locked}) {
-			throw Error::Simple("Temp file with moniker '" .
-				$name . "' already in use");
-		}
+	if (temp_is_locked($name)) {
+		throw Error::Simple("Temp file with moniker '" .
+			$name . "' already in use");
} else {
if (defined $$temp_fd) {
# then we're here because of a closed handle.
-- 
1.8.3



[PATCH v2 0/2] allow git-svn fetching to work using serf

2013-07-06 Thread Kyle J. McKay
From: Kyle J. McKay mack...@gmail.com

This patch allows git-svn to fetch successfully using the
serf library when given an https?: url to fetch from.

Unfortunately some svn servers do not seem to be configured
well for use with the serf library.  This can cause fetching
to take longer compared to the neon library or actually
cause timeouts during the fetch.  When timeouts occur
git-svn can be safely restarted to fetch more revisions.

A new temp_is_locked function has been added to Git.pm
to facilitate using the minimal number of temp files
possible when using serf.

The problem that occurs when running git-svn fetch using
the serf library is that the previously used temp file
is not always unlocked before the next temp file needs
to be used.

To work around this problem, a new temp name is used
if the temp name that would otherwise be chosen is
currently locked.
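
A minimal sketch of the idea (the moniker format here is only
illustrative; the real change is in perl/Git/SVN/Fetcher.pm in patch
2/2, and $repo stands for a Git.pm repository object):

    # pick a moniker that is not currently locked before acquiring
    my $suffix = 0;
    ++$suffix while $repo->temp_is_locked("svn_delta_${$}_$suffix");
    my $fh = $repo->temp_acquire("svn_delta_${$}_$suffix");
    # ... write to $fh ...
    $repo->temp_release($fh, 1);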

Kyle J. McKay (2):
  Git.pm: add new temp_is_locked function
  git-svn: allow git-svn fetching to work using serf

 perl/Git.pm | 39 +--
 perl/Git/SVN/Fetcher.pm |  6 --
 2 files changed, 37 insertions(+), 8 deletions(-)

-- 
1.8.3



[PATCH v2 2/2] git-svn: allow git-svn fetching to work using serf

2013-07-06 Thread Kyle J. McKay
From: Kyle J. McKay mack...@gmail.com

When attempting to git-svn fetch files from an svn https?: url using
the serf library (the only choice starting with svn 1.8) the following
errors can occur:

Temp file with moniker 'svn_delta' already in use at Git.pm line 1250
Temp file with moniker 'git_blob' already in use at Git.pm line 1250

David Rothenberger daver...@acm.org has determined the cause to
be that ra_serf does not drive the delta editor in a depth-first
manner [...]. Instead, the calls come in this order:

1. open_root
2. open_directory
3. add_file
4. apply_textdelta
5. add_file
6. apply_textdelta

When using the ra_serf access method, git-svn can end up needing
to create several temp files before the first one is closed.

This change causes a new temp file moniker to be generated if the
one that would otherwise have been used is currently locked.

Signed-off-by: Kyle J. McKay mack...@gmail.com
---
 perl/Git/SVN/Fetcher.pm | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/perl/Git/SVN/Fetcher.pm b/perl/Git/SVN/Fetcher.pm
index bd17418..10edb27 100644
--- a/perl/Git/SVN/Fetcher.pm
+++ b/perl/Git/SVN/Fetcher.pm
@@ -315,11 +315,13 @@ sub change_file_prop {
 sub apply_textdelta {
my ($self, $fb, $exp) = @_;
 	return undef if $self->is_path_ignored($fb->{path});
-	my $fh = $::_repository->temp_acquire('svn_delta');
+	my $suffix = 0;
+	++$suffix while $::_repository->temp_is_locked("svn_delta_${$}_$suffix");
+	my $fh = $::_repository->temp_acquire("svn_delta_${$}_$suffix");
 	# $fh gets auto-closed() by SVN::TxDelta::apply(),
 	# (but $base does not,) so dup() it for reading in close_file
 	open my $dup, '<&', $fh or croak $!;
-	my $base = $::_repository->temp_acquire('git_blob');
+	my $base = $::_repository->temp_acquire("git_blob_${$}_$suffix");
 
if ($fb-{blob}) {
my ($base_is_link, $size);
-- 
1.8.3



Re: [PATCH 0/2] allow git-svn fetching to work using serf

2013-07-06 Thread David Rothenberger
On 7/6/2013 5:28 PM, Jonathan Nieder wrote:
 David Rothenberger wrote:
 On 7/5/2013 8:41 PM, Kyle McKay wrote:
 
 Daniel Shahaf has suggested also setting
 servers:global:http-bulk-updates=on.

 I have a patch that does this, but since turning on bulk updates has
 a possible performance penalty, I prefer your approach. 
 
 I assume that's because http-bulk-updates defeats caching.  If so,
 makes sense.

I believe that bulk updates means that serf makes one request for a
lot of information and receives it all in one big response. In skelta
mode, serf makes a single request for a single piece of information. The
serf authors feel this can lead to improved overall throughput because
they can pipeline these requests and have multiple connections open at
the same time.

The downside, though, is that serf will do multiple open_file calls in
parallel as it descends down sibling directories.

 It's still not clear to me how we know that ra_serf driving the editor
 in a non depth-first manner is the problem here.  Has that explanation
 been confirmed somehow?

I did do a trace of git svn fetch and observed this non-depth-first
traversal. It certainly causes the failure we've observed.

 Is there a simple explanation of why violating the depth-first
 constraint would lead to multiple blob (i.e., file, not directory)
 deltas being opened in a row without an intervening close?

I believe serf is doing the following for a number of files in parallel:
 1. open_file
 2. apply_textdelta
 3. change_file_prop, change_file_prop, ...
 4. close_file


-- 
David Rothenberger    daver...@acm.org

Nusbaum's Rule:
The more pretentious the corporate name, the smaller the
organization.  (For instance, the Murphy Center for the
Codification of Human and Organizational Law, contrasted
to IBM, GM, and ATT.)



Re: What's cooking in git.git (Jul 2013, #02; Fri, 5)

2013-07-06 Thread Junio C Hamano
Jonathan Nieder jrnie...@gmail.com writes:

  We are in the middle of the 5th week now in the 11-week release cycle
 for 1.8.4 (http://tinyurl.com/gitCal), and quite a few topics have
 graduated to 'master'.  I'd expect the rest of the week to be slow.

 I'd like this or the next release to be 2.0, so the common user
 trouble with git push pushing more branches than they intended can
 be over.  Am I crazy?  If not, what can I do to help make it happen?

There are currently three topics slated for 2.0 that change the
default behaviours in a big way.  The default of the push behaviour
switching from "matching" to "simple" is one of them, and it has been
advertised with advice messages since 1.8.0 (Oct 2012).
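
(For anyone following along, the per-user way to opt into the upcoming
default ahead of time is

    $ git config --global push.default simple

which current versions already accept.)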

The other two, "git add -u/-A" without pathspec operating on the
entire tree, and "git add" with pathspec acting as if it were given
the -A option to also record removals to the index, haven't had enough
time to be imprinted in users' minds.  The former was only mentioned
in the release notes of 1.8.2 (Mar 2013), and the advertisement for
the latter change appeared first in 1.8.3 (May 2013).

I'd prefer to see users have enough time to adjust to these big
changes, at least 6 months (but preferably 9 months).  I would say
that the change to the default "git push" behaviour may have had
enough of a preparation period, but the other two that are scheduled
for 2.0 are way too young.

With the recent triangular addition by Ram, the "simple" mode, the
future behaviour of "git push", was again changed, and has not had
enough time to mature (isn't it still in 'next'?).

Regarding that "simple" thing, I am not sure if the "if you are
pushing to a remote that is different from where you fetch from, we
do exactly the same as 'current'" rule, which is what we tentatively
agreed to implement, is safe enough for new people.  I suspect we
may want to tighten it a bit more (it would update the branch with
the same name as your current local branch, but never try to create
such a branch at the remote, for example).

So in that sense, even the change to git push default may not be
old enough for the upcoming release or the next one.




Re: [PATCH] diffcore-pickaxe: simplify has_changes and contains

2013-07-06 Thread Jeff King
On Sat, Jul 06, 2013 at 03:53:27PM +0200, René Scharfe wrote:

 Halve the number of callsites of contains() to two using temporary
 variables, simplifying the code.  While at it, get rid of the
 diff_options parameter, which became unused with 8fa4b09f.

There is a slight change in behavior, too:

 - if (!one)
 - return contains(two, o, regexp, kws) != 0;
 - if (!two)
 - return contains(one, o, regexp, kws) != 0;
 - return contains(one, o, regexp, kws) != contains(two, o, regexp, kws);
 + unsigned int one_contains = one ? contains(one, regexp, kws) : 0;
 + unsigned int two_contains = two ? contains(two, regexp, kws) : 0;
 + return one_contains != two_contains;

Before, if (!one && !two) we would call contains(two, ...), and now we
will simply assume it is zero. Which I think is an improvement, as we
would have segfaulted before. I don't think it is a bug in the current
code (we would not ever feed the function two NULLs), but it is nice to
be more defensive.

Acked-by: Jeff King p...@peff.net

-Peff


Re: [PATCH] lockfile: fix buffer overflow in path handling

2013-07-06 Thread Jeff King
On Sat, Jul 06, 2013 at 09:48:52PM +0200, Michael Haggerty wrote:

 When and if resolve_symlink() is called, then that function is
 correctly told to treat the buffer as (PATH_MAX - 5) characters long.
 This part is correct.  However:
 
 * If LOCK_NODEREF was specified, then resolve_symlink() is never
   called.
 
 * If resolve_symlink() is called but the path is not a symlink, then
   the length check is never applied.
 
 So it is possible for a path with length (PATH_MAX - 5 <= len <
 PATH_MAX) to make it through the checks.  When ".lock" is strcat()ted
 to such a path, the lock_file::filename buffer is overflowed.

Thanks for posting this. I independently discovered this about a month
ago while working on an unrelated series, and then let it languish
unseen and forgotten at the base of that almost-done series.

So definitely a problem, and my patch looked almost identical to
yours. The only difference is:

  static int lock_file(struct lock_file *lk, const char *path, int flags)
  {
 -	if (strlen(path) >= sizeof(lk->filename))
 -		return -1;
 -	strcpy(lk->filename, path);
  	/*
  	 * subtract 5 from size to make sure there's room for adding
  	 * ".lock" for the lock file name
  	 */
 +	if (strlen(path) >= sizeof(lk->filename)-5)
 +		return -1;
 +	strcpy(lk->filename, path);
  	if (!(flags & LOCK_NODEREF))
  		resolve_symlink(lk->filename, sizeof(lk->filename)-5);

It might be worth consolidating the magic -5 into a constant near the
comment, like this:

diff --git a/lockfile.c b/lockfile.c
index c6fb77b..2aeb2bb 100644
--- a/lockfile.c
+++ b/lockfile.c
@@ -124,15 +124,16 @@ static int lock_file(struct lock_file *lk, const char 
*path, int flags)
 
 static int lock_file(struct lock_file *lk, const char *path, int flags)
 {
-	if (strlen(path) >= sizeof(lk->filename))
-		return -1;
-	strcpy(lk->filename, path);
 	/*
 	 * subtract 5 from size to make sure there's room for adding
 	 * ".lock" for the lock file name
 	 */
+	static const size_t max_path_len = sizeof(lk->filename) - 5;
+	if (strlen(path) >= max_path_len)
+		return -1;
+	strcpy(lk->filename, path);
 	if (!(flags & LOCK_NODEREF))
-		resolve_symlink(lk->filename, sizeof(lk->filename)-5);
+		resolve_symlink(lk->filename, max_path_len);
 	strcat(lk->filename, ".lock");
 	lk->fd = open(lk->filename, O_RDWR | O_CREAT | O_EXCL, 0666);
 	if (0 <= lk->fd) {

But either way, the fix looks good to me.

-Peff


Re: [PATCH v2 2/2] send-email: introduce sendemail.smtpsslcertpath

2013-07-06 Thread Junio C Hamano
John Keeping j...@keeping.me.uk writes:

 @@ -1096,19 +1101,18 @@ sub smtp_auth_maybe {
  # Helper to come up with SSL/TLS certification validation params
  # and warn when doing no verification
  sub ssl_verify_params {
 - use IO::Socket::SSL qw(SSL_VERIFY_PEER SSL_VERIFY_NONE);
 -
 - if (!defined $smtp_ssl_cert_path) {
 - $smtp_ssl_cert_path = "/etc/ssl/certs";
 + if ($smtp_ssl_verify == 0) {
 + return (SSL_verify_mode => IO::Socket::SSL->SSL_VERIFY_NONE);

I do not see any "use IO::Socket::SSL" anywhere after applying this
patch.  Is this expected to work?


[PATCH v3 1/2] Git.pm: add new temp_is_locked function

2013-07-06 Thread Kyle J. McKay
From: Kyle J. McKay mack...@gmail.com

The temp_is_locked function can be used to determine whether
or not a given name previously passed to temp_acquire is
currently locked.

Signed-off-by: Kyle J. McKay mack...@gmail.com
---
 perl/Git.pm | 33 +++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/perl/Git.pm b/perl/Git.pm
index 7a252ef..0ba15b9 100644
--- a/perl/Git.pm
+++ b/perl/Git.pm
@@ -61,7 +61,7 @@ require Exporter;
 remote_refs prompt
 get_tz_offset
 credential credential_read credential_write
-temp_acquire temp_release temp_reset temp_path);
+temp_acquire temp_is_locked temp_release temp_reset temp_path);
 
 
 =head1 DESCRIPTION
@@ -1206,6 +1206,35 @@ sub temp_acquire {
$temp_fd;
 }
 
+=item temp_is_locked ( NAME )
+
+Returns true if the internal lock created by a previous C<temp_acquire()>
+call with C<NAME> is still in effect.
+
+When temp_acquire is called on a C<NAME>, it internally locks the temporary
+file mapped to C<NAME>.  That lock will not be released until C<temp_release()>
+is called with either the original C<NAME> or the L<File::Handle> that was
+returned from the original call to temp_acquire.
+
+Subsequent attempts to call C<temp_acquire()> with the same C<NAME> will fail
+unless there has been an intervening C<temp_release()> call for that C<NAME>
+(or its corresponding L<File::Handle> that was returned by the original
+C<temp_acquire()> call).
+
+If true is returned by C<temp_is_locked()> for a C<NAME>, an attempt to
+C<temp_acquire()> the same C<NAME> will cause an error unless
+C<temp_release> is first called on that C<NAME> (or its corresponding
+L<File::Handle> that was returned by the original C<temp_acquire()> call).
+
+=cut
+
+sub temp_is_locked {
+   my ($self, $name) = _maybe_self(@_);
+   my $temp_fd = \$TEMP_FILEMAP{$name};
+
+   defined $$temp_fd && $$temp_fd->opened && $TEMP_FILES{$$temp_fd}{locked};
+}
+
 =item temp_release ( NAME )
 
 =item temp_release ( FILEHANDLE )
@@ -1248,7 +1277,7 @@ sub _temp_cache {
 
 	my $temp_fd = \$TEMP_FILEMAP{$name};
 	if (defined $$temp_fd and $$temp_fd->opened) {
-		if ($TEMP_FILES{$$temp_fd}{locked}) {
+		if (temp_is_locked($name)) {
 			throw Error::Simple("Temp file with moniker '" .
 				$name . "' already in use");
 		}
-- 
1.8.3



[PATCH v3 0/2] allow git-svn fetching to work using serf

2013-07-06 Thread Kyle J. McKay
From: Kyle J. McKay mack...@gmail.com

This patch allows git-svn to fetch successfully using the
serf library when given an https?: url to fetch from.

Unfortunately some svn servers do not seem to be configured
well for use with the serf library.  This can cause fetching
to take longer compared to the neon library or actually
cause timeouts during the fetch.  When timeouts occur
git-svn can be safely restarted to fetch more revisions.

A new temp_is_locked function has been added to Git.pm
to facilitate using the minimal number of temp files
possible when using serf.

The problem that occurs when running git-svn fetch using
the serf library is that the previously used temp file
is not always unlocked before the next temp file needs
to be used.

To work around this problem, a new temp name is used
if the temp name that would otherwise be chosen is
currently locked.
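
In a nutshell, the workaround amounts to probing for an unlocked
moniker before acquiring it.  A minimal sketch using the new API
(assuming patch 1/2 is applied and we are inside a git work tree;
$repo is just a stand-in for whatever repository handle the caller
already has):

    use Git;

    my $repo = Git->repository();
    my $suffix = 0;
    # Bump the suffix until we find a moniker that is not locked.
    ++$suffix while $repo->temp_is_locked("svn_delta_${$}_$suffix");
    my $fh = $repo->temp_acquire("svn_delta_${$}_$suffix");
    # ... write through $fh ...
    $repo->temp_release($fh);

Patch 2/2 below contains the actual change along these lines.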

Version v2 of the patch introduced a bug when changing the _temp_cache
function to use the new temp_is_locked function at the suggestion of a
reviewer.  That has now been resolved.

Kyle J. McKay (2):
  Git.pm: add new temp_is_locked function
  git-svn: allow git-svn fetching to work using serf

 perl/Git.pm | 33 +++--
 perl/Git/SVN/Fetcher.pm |  6 --
 2 files changed, 35 insertions(+), 4 deletions(-)

-- 
1.8.3



[PATCH v3 2/2] git-svn: allow git-svn fetching to work using serf

2013-07-06 Thread Kyle J. McKay
From: Kyle J. McKay mack...@gmail.com

When attempting to git-svn fetch files from an svn https?: url using
the serf library (the only choice starting with svn 1.8) the following
errors can occur:

Temp file with moniker 'svn_delta' already in use at Git.pm line 1250
Temp file with moniker 'git_blob' already in use at Git.pm line 1250

David Rothenberger daver...@acm.org has determined the cause to
be that ra_serf does not drive the delta editor in a depth-first
manner [...]. Instead, the calls come in this order:

1. open_root
2. open_directory
3. add_file
4. apply_textdelta
5. add_file
6. apply_textdelta

When using the ra_serf access method, git-svn can end up needing
to create several temp files before the first one is closed.

This change causes a new temp file moniker to be generated if the
one that would otherwise have been used is currently locked.

Signed-off-by: Kyle J. McKay mack...@gmail.com
---
 perl/Git/SVN/Fetcher.pm | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/perl/Git/SVN/Fetcher.pm b/perl/Git/SVN/Fetcher.pm
index bd17418..10edb27 100644
--- a/perl/Git/SVN/Fetcher.pm
+++ b/perl/Git/SVN/Fetcher.pm
@@ -315,11 +315,13 @@ sub change_file_prop {
 sub apply_textdelta {
 	my ($self, $fb, $exp) = @_;
 	return undef if $self->is_path_ignored($fb->{path});
-	my $fh = $::_repository->temp_acquire('svn_delta');
+	my $suffix = 0;
+	++$suffix while $::_repository->temp_is_locked("svn_delta_${$}_$suffix");
+	my $fh = $::_repository->temp_acquire("svn_delta_${$}_$suffix");
 	# $fh gets auto-closed() by SVN::TxDelta::apply(),
 	# (but $base does not,) so dup() it for reading in close_file
 	open my $dup, '<', $fh or croak $!;
-	my $base = $::_repository->temp_acquire('git_blob');
+	my $base = $::_repository->temp_acquire("git_blob_${$}_$suffix");
 
 	if ($fb->{blob}) {
 		my ($base_is_link, $size);
-- 
1.8.3



Re: [PATCH v2 2/2] send-email: introduce sendemail.smtpsslcertpath

2013-07-06 Thread Jeff King
On Fri, Jul 05, 2013 at 08:29:48PM +, brian m. carlson wrote:

 On Fri, Jul 05, 2013 at 10:20:11AM -0700, Junio C Hamano wrote:
  +# Helper to come up with SSL/TLS certification validation params
  +# and warn when doing no verification
  +sub ssl_verify_params {
  +   use IO::Socket::SSL qw(SSL_VERIFY_PEER SSL_VERIFY_NONE);
 
 You might as well put this at the top of the file, because all use
 statements happen at compile time anyway, regardless of their location.
 If you want to lazy-load this, you need to do:
 
 require IO::Socket::SSL;
 IO::Socket::SSL->import(qw(SSL_VERIFY_PEER SSL_VERIFY_NONE));
 
 which is equivalent to use except that it happens at runtime.

I think we _must_ lazy load this, or else we are breaking git-send-email
users on platforms that do not have IO::Socket::SSL (and do not plan on
using SSL themselves).

The same goes for the use in patch 1/2.
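
Something along these lines is what I have in mind (untested sketch,
reusing the variable names from the patch under discussion; the
require only runs once we actually reach the SSL code path):

    sub ssl_verify_params {
        # Pull in IO::Socket::SSL lazily; users who never send over
        # SSL do not need the module installed at all.
        require IO::Socket::SSL;

        if (!$smtp_ssl_verify) {
            return (SSL_verify_mode => IO::Socket::SSL->SSL_VERIFY_NONE);
        }
        return (SSL_verify_mode => IO::Socket::SSL->SSL_VERIFY_PEER,
                SSL_ca_path => $smtp_ssl_cert_path);
    }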

-Peff