Re: How to resume a broken clone?

2013-11-28 Thread Duy Nguyen
On Thu, Nov 28, 2013 at 2:41 PM, zhifeng hu z...@ancientrocklab.com wrote:
 Thanks for the reply, but I am a developer; I want to clone the full
 repository, I need to view the code from very early on.

If it works with --depth=1, you can incrementally run fetch
--depth=N with N larger and larger.
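
A rough sketch of that loop (the depth steps are arbitrary; pick whatever
your connection tolerates):

  git clone --depth=1 git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
  cd linux
  for n in 10 100 1000 10000 100000; do
      git fetch --depth=$n   # each step transfers only the newly exposed history
  done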

But it may be easier to ask kernel.org admin, or any dev with a public
web server, to provide you a git bundle you can download via http.
Then you can fetch on top.
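
For example, a sketch (the bundle file name is made up):

  # someone with good connectivity to the repo:
  git bundle create linux.bundle --all
  # you, after downloading linux.bundle over plain, resumable http:
  git clone linux.bundle linux
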
-- 
Duy


Re: How to pre-empt git pull merge error?

2013-11-28 Thread Pete Forman
Thomas Rast t...@thomasrast.ch writes:

 Antoine Pelisse apeli...@gmail.com writes:

 On Wed, 27 Nov 2013 15:17:27 +
 Pete Forman petef4+use...@gmail.com wrote:

 I am looking for a way of detecting up front whether a git pull or
 git merge would fail. The sort of script I want to perform is to
 update a server.

 git fetch
 git okay
 stop server
 backup data
 git merge
 start server

 I don't know a simple way to do the pre-merge check without actually
 doing the merge (other than patching git merge to add a --dry-run
 option).

 Wouldn't that be a nice use-case for git-merge-recursive --index-only
 ($gmane/236753)?

 Possibly, but most of the use-cases for merge --dry-run are better
 answered by the XY Problem question:

 Can you step back and explain what the *underlying* goal is?

 The above sounds a lot like a deployment script, and such scripts are
 almost always better served by using an actual deployment tool, or
 failing that, by using some form of checkout -f instead, to ensure
 that they get whatever they are supposed to deploy.

 (Using a merge to update is really terrible in the face of
 non-fast-forward updates, especially when caused by rewriting history
 to not include some commits.)

It is a deployment script and updates are fast-forward. There was a
problem on a test server where a file had been hacked to investigate an
issue. The next deploy failed with the merge error.

There are three approaches, which might all be done with git or an
actual deployment tool.

1. test early, bail out if deploy would fail
2. set target to good state before applying the merge
2a. discard changes
2b. stash changes

I intend to use (1). First I will need to clean up the stray files or add
more entries to .gitignore.

  test -z "$(git status --porcelain)"
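
For instance, the whole pre-check might look something like this (a
sketch; the branch name is an assumption):

  git fetch &&
  test -z "$(git status --porcelain)" &&
  git merge-base --is-ancestor HEAD origin/master ||
  { echo "deploy pre-check failed"; exit 1; }

The merge-base test verifies that the merge would be a fast-forward.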


-- 
Pete Forman
http://petef.22web.org/payg.html



Re: How to resume a broken clone?

2013-11-28 Thread Karsten Blees
Am 28.11.2013 09:14, schrieb Duy Nguyen:
 On Thu, Nov 28, 2013 at 2:41 PM, zhifeng hu z...@ancientrocklab.com wrote:
 Thanks for the reply, but I am a developer; I want to clone the full
 repository, I need to view the code from very early on.
 
 If it works with --depth=1, you can incrementally run fetch
 --depth=N with N larger and larger.
 
 But it may be easier to ask kernel.org admin, or any dev with a public
 web server, to provide you a git bundle you can download via http.
 Then you can fetch on top.
 

Or simply download the individual files (via ftp/http) and clone locally:

 wget -r ftp://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
 git clone git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
 cd linux
 git remote set-url origin git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git




RE: How to resume a broken clone?

2013-11-28 Thread Max Kirillov
 git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
 
 I am in China. Our bandwidth is very limited, less than 50 KB/s.

You could manually download the big pack files from some http remote,
for example http://repo.or.cz/r/linux.git

* Create a new repository and add the remote there.

* Download the files with wget or whatever:
 http://repo.or.cz/r/linux.git/objects/info/packs
as well as the files mentioned in it. Currently they are:
 
http://repo.or.cz/r/linux.git/objects/pack/pack-3807b40fc5fd7556990ecbfe28a54af68964a5ce.idx
 
http://repo.or.cz/r/linux.git/objects/pack/pack-3807b40fc5fd7556990ecbfe28a54af68964a5ce.pack

and put them in the corresponding places (see the note below).

* Then run fetch or pull. I believe it should run fast then, though I
have not tested it.
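
(The corresponding places are in the pack directory of the new repository:

 .git/objects/pack/pack-3807b40fc5fd7556990ecbfe28a54af68964a5ce.idx
 .git/objects/pack/pack-3807b40fc5fd7556990ecbfe28a54af68964a5ce.pack

i.e. the .pack and its .idx must end up side by side under
.git/objects/pack/.)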

Br,
-- 
Max


Re: Pack transfer negotiation to tree and blob level?

2013-11-28 Thread Philip Oakley

From: Duy Nguyen pclo...@gmail.com
Sent: Wednesday, November 27, 2013 11:50 PM
On Thu, Nov 28, 2013 at 5:52 AM, Philip Oakley philipoak...@iee.org wrote:
In the pack transfer protocol (Documentation/technical/pack-protocol.txt)
the negotiation for refs is discussed, but it's unclear to me if the
negotiation explicitly navigates down into the trees and blobs of each
commit that need to go into the pack.

From one perspective I can see that, in the main, it's only commit objects
that are being negotiated, and the DAG is used to imply which commit
objects are to be sent between the wants and haves end points, without
needing to descend into their trees and blobs. The tags and the objects
they point to are explicitly given, so they are negotiated easily.

The other view is that the negotiation should be listing every object of
any type between the wants and haves as part of the negotiation. I just
couldn't tell from the docs which assumption is appropriate. Are there
any extra clarifications on this?


Other object negotiation is inferred from commits because sending a full
listing is too much. If you say you have commit A, you imply you have
everything from commit A down to the bottom. With this knowledge, when
you want commit B, the sender only needs to send the trees and blobs
that do not exist in commit A or any of its predecessors.
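
Schematically, the exchange looks like this (simplified; real pkt-lines
carry full object ids and capability lists):

  C: want <sha1 of commit B>
  C: have <sha1 of commit A>
  S: ACK <sha1 of commit A>
  S: <pack containing only the commits, trees and blobs reachable
      from B but not from A>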


I presume that in the case of a tag that points to a tree, the pack will
contain not only the specific tree that was tagged, but also the
sub-trees and blobs that it references.


Plus I'm wondering if the commit(s) that contain that tagged tree are
also sent (given that such a tree could be repeated/reused in other
commits, this may be expensive, and hence not done).



Although to cut cost at the sender, we do something less than optimal
(check out the edge concept in the documents, or else in pack-objects.c).
Pack bitmaps are supposed to provide cheap object traversal and make the
transferred pack even smaller.

I ask as I was cogitating on options for a 'narrow' clone (to complement
shallow clones ;-) that could, say, in some way limit the size of blobs
downloaded, or the number of tree levels downloaded, or even do path
limiting.


Size limiting is easy because you don't need to traverse the object DAG
at all. Inside pack-objects it calls rev-list to collect objects to be
sent; you just filter by size at that phase. Support for raising or
lowering the size limit is also workable, just like how shallow
deepen/shorten is done: you let the sender know you have size limit A,
now you want to raise it to B, and the sender just collects the extra
objects in the A..B range for all have refs.
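
Conceptually the server-side filter could be as simple as this sketch
(not the actual pack-objects code; $want, $have and $limit are assumed):

  git rev-list --objects $want --not $have |
  while read sha path
  do
          test "$(git cat-file -s $sha)" -le $limit && echo $sha
  done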

The problem is how to let the client know what objects are not sent due
to the size limit, so it could set up refs/replace to stop the user from
running into missing objects. If there are too many excluded objects,
sending all those SHA-1s with pkt-line is inefficient. (A path limit does
not have this problem; it can be inferred from the command line arguments
most of the time.) Maybe you could send this listing in binary format
just before sending the pack.


This was part of the problem I was thinking about ;-)



BTW another way to deal with large blobs in clone is git-annex. I was
thinking the other day whether we could sort of integrate it into git to
provide a smooth UI (the user does not have to type git annex
something, or at least not often). Of course git-annex is still
optional and the UI integration is only activated via a config key,
after git-annex is installed.


I'll have a look.

--
Duy
--


Philip 




Re: How to resume a broken clone?

2013-11-28 Thread Duy Nguyen
On Thu, Nov 28, 2013 at 3:35 PM, Karsten Blees karsten.bl...@gmail.com wrote:
 Or simply download the individual files (via ftp/http) and clone locally:

 wget -r ftp://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
 git clone git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
 cd linux
  git remote set-url origin git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Yeah, I didn't realize it is published over dumb http too. You may need
to be careful with this though, because it's not atomic and you may get
refs that point nowhere, because you're already done with the pack
directory when you come to fetching refs and did not see new packs...
If the dumb commit walker supports resume (I don't know) then it'll be
safer to do

git clone http://git.kernel.org/

If it does not support resume, I don't think it's hard to do.
-- 
Duy


Re: How to resume a broken clone?

2013-11-28 Thread zhifeng hu
The repository is growing fast and things get harder. Now the size reaches
several GB; it may eventually be TB or more.
How do we handle this then?
If the transfer breaks and cannot be resumed, that wastes time and
bandwidth.

Git should better support resumable transfers.
It does not seem to be doing that job well at the moment.
Sharing code, managing code, transferring code: isn't that what we
imagine a VCS to be?
 
zhifeng hu 



On Nov 28, 2013, at 4:50 PM, Duy Nguyen pclo...@gmail.com wrote:

 On Thu, Nov 28, 2013 at 3:35 PM, Karsten Blees karsten.bl...@gmail.com wrote:
 Or simply download the individual files (via ftp/http) and clone locally:
 
 wget -r ftp://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
 git clone git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
 cd linux
  git remote set-url origin git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
 
 Yeah, I didn't realize it is published over dumb http too. You may need
 to be careful with this though, because it's not atomic and you may get
 refs that point nowhere, because you're already done with the pack
 directory when you come to fetching refs and did not see new packs...
 If the dumb commit walker supports resume (I don't know) then it'll be
 safer to do
 
 git clone http://git.kernel.org/
 
 If it does not support resume, I don't think it's hard to do.
 -- 
 Duy



Re: How to resume a broken clone?

2013-11-28 Thread Duy Nguyen
On Thu, Nov 28, 2013 at 3:55 PM, zhifeng hu z...@ancientrocklab.com wrote:
 The repository is growing fast and things get harder. Now the size reaches
 several GB; it may eventually be TB or more.
 How do we handle this then?
 If the transfer breaks and cannot be resumed, that wastes time and
 bandwidth.

 Git should better support resumable transfers.
 It does not seem to be doing that job well at the moment.
 Sharing code, managing code, transferring code: isn't that what we
 imagine a VCS to be?

You're welcome to step up and do it. Off the top of my head there are a few options:

 - better integration with git bundles, provide a way to seamlessly
create/fetch/resume the bundles with git clone and git fetch
 - shallow/narrow clone. The idea is to get a small part of the repo, one
depth, a few paths, then get more and more over many iterations, so if
we fail one iteration we don't lose everything
 - stabilize pack order so we can resume downloading a pack
 - remote alternates, the repo will ask for more and more objects as
you need them (so goodbye to distributed model)
-- 
Duy


Re: How to resume a broken clone?

2013-11-28 Thread Jeff King
On Thu, Nov 28, 2013 at 01:32:36AM -0700, Max Kirillov wrote:

  git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
  
  I am in China. Our bandwidth is very limited, less than 50 KB/s.
 
 You could manually download the big pack files from some http remote,
 for example http://repo.or.cz/r/linux.git
 
 * Create a new repository and add the remote there.
 
 * Download the files with wget or whatever:
  http://repo.or.cz/r/linux.git/objects/info/packs
 as well as the files mentioned in it. Currently they are:
  
 http://repo.or.cz/r/linux.git/objects/pack/pack-3807b40fc5fd7556990ecbfe28a54af68964a5ce.idx
  
 http://repo.or.cz/r/linux.git/objects/pack/pack-3807b40fc5fd7556990ecbfe28a54af68964a5ce.pack
 
 and put them in the corresponding places.
 
 * Then run fetch or pull. I believe it should run fast then, though I
 have not tested it.

You would also need to set up local refs so that git knows you have
those objects. The simplest way to do it is to just fetch by dumb-http,
which can resume the pack transfer. I think that clone is also very
eager to clean up the partial transfer if the initial fetch fails. So
you would want to init manually:

  git init linux
  cd linux
  git remote add origin http://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
  GIT_SMART_HTTP=0 git fetch -vv

and then you can follow that up with regular smart fetches, which should
be much smaller.

It would be even simpler if you could fetch the whole thing as a bundle,
rather than over dumb-http. But that requires the server side (or some
third party who has fast access to the repo) cooperating and making a
bundle available.
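
Consuming such a bundle is the easy part; roughly (the file name here is
made up):

  wget -c http://example.com/linux.bundle   # -c resumes a partial download
  git bundle verify linux.bundle
  git clone linux.bundle linux
  cd linux
  git remote set-url origin git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

after which regular (and much smaller) fetches go to the real origin.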

-Peff


Re: How to resume a broken clone?

2013-11-28 Thread Jeff King
On Thu, Nov 28, 2013 at 04:09:18PM +0700, Duy Nguyen wrote:

  Git should better support resumable transfers.
  It does not seem to be doing that job well at the moment.
  Sharing code, managing code, transferring code: isn't that what we
  imagine a VCS to be?
 
 You're welcome to step up and do it. Off the top of my head there are a few
 options:
 
  - better integration with git bundles, provide a way to seamlessly
 create/fetch/resume the bundles with git clone and git fetch

I posted patches for this last year. One of the things that I got hung
up on was that I spooled the bundle to disk, and then cloned from it.
Which meant that you needed twice the disk space for a moment. I wanted
to teach index-pack to --fix-thin a pack that was already on disk, so
that we could spool to disk, and then finalize it without making another
copy.

One of the downsides of this approach is that it requires the repo
provider (or somebody else) to provide the bundle. I think that is
something that a big site like GitHub would do (and probably push the
bundles out to a CDN, too, to make getting them faster). But it's not a
universal solution.

  - stabilize pack order so we can resume downloading a pack

I think stabilizing in all cases (e.g., including ones where the content
has changed) is hard, but I wonder if it would be enough to handle the
easy cases, where nothing has changed. If the server does not use
multiple threads for delta computation, it should generate the same pack
from the same on-disk data deterministically. We just need a way for the
client to indicate that it has the same partial pack.

I'm thinking that the server would report some opaque hash representing
the current pack. The client would record that, along with the number of
pack bytes it received. If the transfer is interrupted, the client comes
back with the hash/bytes pair. The server starts to generate the pack,
checks whether the hash matches, and if so, says "here is the same pack,
resuming at byte X".

What would need to go into such a hash? It would need to represent the
exact bytes that will go into the pack, but without actually generating
those bytes. Perhaps a sha1 over the sequence of "object sha1, type,
base (if applicable), length" for each object would be enough. We should
know that after calling compute_write_order. If the client has a match,
we should be able to skip ahead to the correct byte.
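
To illustrate the idea, a crude approximation of such a hash in shell
(it ignores delta bases and the actual write order, which the real thing
would have to cover; $want and $have are assumed):

  git rev-list --objects $want --not $have |
  cut -d' ' -f1 |
  git cat-file --batch-check='%(objectname) %(objecttype) %(objectsize)' |
  sha1sum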

  - remote alternates, the repo will ask for more and more objects as
 you need them (so goodbye to distributed model)

This is also something I've been playing with, but just for very large
objects (so to support something like git-media, but below the object
graph layer). I don't think it would apply here, as the kernel has a lot
of small objects, and getting them in the tight delta'd pack format
increases efficiency a lot.

-Peff


Re: How to resume a broken clone?

2013-11-28 Thread zhifeng hu
Once you use git clone --depth or git fetch --depth and then want to
deepen the history, you may face a problem:

 git fetch --depth=105
error: Could not read 483bbf41ca5beb7e38b3b01f21149c56a1154b7a
error: Could not read aacb82de3ff8ae7b0a9e4cfec16c1807b6c315ef
error: Could not read 5a1758710d06ce9ddef754a8ee79408277032d8b
error: Could not read a7d5629fe0580bd3e154206388371f5b8fc832db
error: Could not read 073291c476b4edb4d10bbada1e64b471ba153b6b


zhifeng hu 



On Nov 28, 2013, at 5:20 PM, Tay Ray Chuan rcta...@gmail.com wrote:

 On Thu, Nov 28, 2013 at 4:14 PM, Duy Nguyen pclo...@gmail.com wrote:
 On Thu, Nov 28, 2013 at 2:41 PM, zhifeng hu z...@ancientrocklab.com wrote:
 Thanks for the reply, but I am a developer; I want to clone the full
 repository, I need to view the code from very early on.
 
 If it works with --depth=1, you can incrementally run fetch
 --depth=N with N larger and larger.
 
 I second Duy Nguyen's and Trần Ngọc Quân's suggestion to 1) initially
 create a shallow clone then 2) incrementally deepen your clone.
 
 Zhifeng, in the course of your research into resumable cloning, you
 might have learnt that while it's a really valuable feature, it's also
 a pretty hard problem at the same time. So it's not that git
 doesn't want to have this feature.
 
 -- 
 Cheers,
 Ray Chuan



Re: How to resume a broken clone?

2013-11-28 Thread Duy Nguyen
On Thu, Nov 28, 2013 at 4:29 PM, Jeff King p...@peff.net wrote:
   - stabilize pack order so we can resume downloading a pack

 I think stabilizing in all cases (e.g., including ones where the content
 has changed) is hard, but I wonder if it would be enough to handle the
 easy cases, where nothing has changed. If the server does not use
 multiple threads for delta computation, it should generate the same pack
 from the same on-disk data deterministically. We just need a way for the
 client to indicate that it has the same partial pack.

 I'm thinking that the server would report some opaque hash representing
 the current pack. The client would record that, along with the number of
 pack bytes it received. If the transfer is interrupted, the client comes
 back with the hash/bytes pair. The server starts to generate the pack,
 checks whether the hash matches, and if so, says "here is the same pack,
 resuming at byte X".

 What would need to go into such a hash? It would need to represent the
 exact bytes that will go into the pack, but without actually generating
 those bytes. Perhaps a sha1 over the sequence of "object sha1, type,
 base (if applicable), length" for each object would be enough. We should
 know that after calling compute_write_order. If the client has a match,
 we should be able to skip ahead to the correct byte.

Exactly. The hash would include the list of SHA-1s and object sources,
the git version (so changes in code or default values are covered),
the list of config keys/values that may impact the pack generation
algorithm (like window size..), .git/shallow, refs/replace,
.git/graft, and all or most of the command line options. If we audit the
code carefully I think we can cover all input that influences pack
generation. From then on it's just a matter of protocol extension. It
also opens an opportunity for optional server-side caching: just save
the pack and associate it with the hash. Next time the client asks to
resume, the server has everything ready.
-- 
Duy


Re: [PATCH v3 10/21] pack-bitmap: add support for bitmap indexes

2013-11-28 Thread Jeff King
On Wed, Nov 27, 2013 at 10:08:56AM +0100, Karsten Blees wrote:

 Khash is OK for sha1 keys, but I don't think it should be advertised
 as a second general-purpose hash table implementation. It's far too
 easy to shoot yourself in the foot by using 'straightforward' hash-
 and comparison functions. Khash doesn't store the hash codes of the
 keys, so you have to take care of that yourself or live with the
 performance penalties (see [1]).
 
 [1] http://article.gmane.org/gmane.comp.version-control.git/237876

Yes. I wonder if we should improve it in that respect. I haven't looked
carefully at the hash code you posted elsewhere, but I feel like many
uses will want a macro implementation to let them store arbitrary types
smaller or larger than a pointer.

-Peff


Re: [PATCH v2] gitweb: Add an option for adding more branch refs

2013-11-28 Thread Krzesimir Nowak
On Wed, 2013-11-27 at 12:55 -0800, Junio C Hamano wrote:
 Eric Sunshine sunsh...@sunshineco.com writes:
 
  On Wed, Nov 27, 2013 at 3:34 PM, Eric Sunshine sunsh...@sunshineco.com wrote:
  On Wed, Nov 27, 2013 at 10:30 AM, Krzesimir Nowak
  krzesi...@endocode.com wrote:
  Overriding an @additional_branch_refs configuration variable with
  value ('wip') will make gitweb show branches that appear in
  refs/heads and refs/wip (refs/heads is hardcoded).
 
"branches" are by definition what are in the refs/heads/ hierarchy, so 
 
   Allow @additional_branch_refs configuration variable to tell
   gitweb to show refs from additional hierarchies in addition to
   branches in the list-of-branches view.
 
 would be more appropriate and sufficient.

Thanks.

 
  Mentioning $ref in the error message would help the user resolve the
  problem more quickly.
 
  +   die_error(500, 'heads specified in @additional_branch_refs') if ($ref eq 'heads');
 
  Rephrasing this as
 
  heads disallowed in @additional_branch_refs
 
  would better explain the problem to a user who has only made a cursory
  read of the documentation.
 
  The program could easily filter out the redundant 'heads', so does
  this really deserve a diagnostic?
 
 True.

OK, I'm deduping both heads and the other refs as well. Now we send 500 only
if the ref is simply invalid.

 
 I was primarily worried about metacharacters in the specified
 strings getting in the way of regexp matches the new code allows on
 them, but that has been resolved with the use of \Q..\E; if that
 automatic deduping is done, I do not immediately see any remaining
 issues in the latest round of the patch.
 

New patch sent as [PATCH v3] gitweb: Add an option for adding more branch
refs. Thanks for the reviews!

 Thanks.

-- 
Krzesimir Nowak
Software Developer
Endocode AG

krzesi...@endocode.com

--
Endocode AG, Johannisstraße 20, 10117 Berlin
i...@endocode.com | www.endocode.com

Vorstandsvorsitzender: Mirko Boehm
Vorstände: Dr. Karl Beecher, Chris Kühl, Sebastian Sucker
Aufsichtsratsvorsitzende: Jennifer Beecher

Registergericht: Amtsgericht Charlottenburg - HRB 150748 B





[PATCH v3] gitweb: Add an option for adding more branch refs

2013-11-28 Thread Krzesimir Nowak
Allow @additional_branch_refs configuration variable to tell gitweb to
show refs from additional hierarchies in addition to branches in the
list-of-branches view.

Signed-off-by: Krzesimir Nowak krzesi...@endocode.com
---
 Documentation/gitweb.conf.txt | 13 
 gitweb/gitweb.perl| 75 +--
 2 files changed, 71 insertions(+), 17 deletions(-)

diff --git a/Documentation/gitweb.conf.txt b/Documentation/gitweb.conf.txt
index e2113d9..cd1a945 100644
--- a/Documentation/gitweb.conf.txt
+++ b/Documentation/gitweb.conf.txt
@@ -549,6 +549,19 @@ This variable matters only when using persistent web environments that
 serve multiple requests using single gitweb instance, like mod_perl,
 FastCGI or Plackup.
 
+@additional_branch_refs::
+   List of additional directories under refs which are going to be used
+   as branch refs. You might want to set this variable if you have a gerrit
+   setup where all branches under refs/heads/ are official,
+   push-after-review ones and branches under refs/sandbox/, refs/wip and
+   refs/other are user ones where permissions are much wider, for example
++
+
+our @additional_branch_refs = ('sandbox', 'wip', 'other');
+
++
+It is an error to specify a ref that does not pass git check-ref-format
+scrutiny.
 
 Other variables
 ~~~
diff --git a/gitweb/gitweb.perl b/gitweb/gitweb.perl
index 68c77f6..25e1d37 100755
--- a/gitweb/gitweb.perl
+++ b/gitweb/gitweb.perl
@@ -122,6 +122,10 @@ our $logo_label = "git homepage";
 # source of projects list
 our $projects_list = "++GITWEB_LIST++";
 
+# list of additional directories under refs/ we want to display as
+# branches
+our @additional_branch_refs = ();
+
 # the width (in characters) of the projects list Description column
 our $projects_list_description_width = 25;
 
@@ -626,6 +630,10 @@ sub feature_avatar {
return @val ? @val : @_;
 }
 
+sub get_branch_refs {
+return ('heads', @additional_branch_refs);
+}
+
 # checking HEAD file with -e is fragile if the repository was
 # initialized long time ago (i.e. symlink HEAD) and was pack-ref'ed
 # and then pruned.
@@ -680,6 +688,19 @@ sub read_config_file {
return;
 }
 
+# performs sanity checks on parts of configuration.
+sub config_sanity_check {
+   # check additional refs validity
+   my %unique_branch_refs = ();
+   for my $ref (@additional_branch_refs) {
+   die_error(500, "Invalid ref '$ref' in \@additional_branch_refs") unless (validate_ref($ref));
+   # 'heads' are added implicitly in get_branch_refs().
+   $unique_branch_refs{$ref} = 1 if ($ref ne 'heads');
+   }
+   @additional_branch_refs = sort keys %unique_branch_refs;
+   %unique_branch_refs = undef;
+}
+
 our ($GITWEB_CONFIG, $GITWEB_CONFIG_SYSTEM, $GITWEB_CONFIG_COMMON);
 sub evaluate_gitweb_config {
 our $GITWEB_CONFIG = $ENV{'GITWEB_CONFIG'} || "++GITWEB_CONFIG++";
@@ -698,8 +719,11 @@ sub evaluate_gitweb_config {
 
# Use first config file that exists.  This means use the per-instance
# GITWEB_CONFIG if exists, otherwise use GITWEB_SYSTEM_CONFIG.
-   read_config_file($GITWEB_CONFIG) and return;
-   read_config_file($GITWEB_CONFIG_SYSTEM);
+   if (!read_config_file($GITWEB_CONFIG)) {
+   read_config_file($GITWEB_CONFIG_SYSTEM);
+   }
+
+   config_sanity_check();
 }
 
 # Get loadavg of system, to compare against $maxload.
@@ -1452,6 +1476,16 @@ sub validate_pathname {
return $input;
 }
 
+sub validate_ref {
+   my $input = shift || return undef;
+
+   # restrictions on ref name according to git-check-ref-format
+   if ($input =~ m!(/\.|\.\.|[\000-\040\177 ~^:?*\[]|/$)!) {
+   return undef;
+   }
+   return $input;
+}
+
 sub validate_refname {
my $input = shift || return undef;
 
@@ -1462,10 +1496,9 @@ sub validate_refname {
# it must be correct pathname
$input = validate_pathname($input)
or return undef;
-   # restrictions on ref name according to git-check-ref-format
-   if ($input =~ m!(/\.|\.\.|[\000-\040\177 ~^:?*\[]|/$)!) {
-   return undef;
-   }
+   # check git-check-ref-format restrictions
+   $input = validate_ref($input)
+   or return undef;
return $input;
 }
 
@@ -2515,6 +2548,7 @@ sub format_snapshot_links {
 sub get_feed_info {
my $format = shift || 'Atom';
 my %res = (action => lc($format));
+   my $matched_ref = 0;
 
# feed links are possible only for project views
return unless (defined $project);
@@ -2522,12 +2556,17 @@ sub get_feed_info {
# or don't have specific feed yet (so they should use generic)
return if (!$action || $action =~ 

Can't connect to git-scm.com

2013-11-28 Thread Antoine Pelisse
Hello,

Should we be worried about this behavior?

git-scm.com is returning 301 to www.git-scm.com (I don't remember that
happening before),
and www.git-scm.com is returning 200: "Sorry, no Host found".


[RFC 2/3] git-svn: Support cherry-pick merges

2013-11-28 Thread Andrew Wong
Detect a cherry-pick merge if there's only one parent and the git-svn-id
metadata exists. Then, get the parent's mergeinfo and merge this commit's
mergeinfo.
---
 git-svn.perl  | 52 +--
 t/t9161-git-svn-mergeinfo-push.sh | 30 ++
 2 files changed, 80 insertions(+), 2 deletions(-)

diff --git a/git-svn.perl b/git-svn.perl
index 9ddeaf4..b04cac7 100755
--- a/git-svn.perl
+++ b/git-svn.perl
@@ -698,12 +698,14 @@ sub populate_merge_info {
my %parentshash;
read_commit_parents(\%parentshash, $d);
my @parents = @{$parentshash{$d}};
+
+my $rooturl = $gs->repos_root;
+my ($target_branch) = $gs->full_pushurl =~ /^\Q$rooturl\E(.*)/;
+
 if ($#parents > 0) {
# Merge commit
my $all_parents_ok = 1;
my $aggregate_mergeinfo = '';
-   my $rooturl = $gs->repos_root;
-   my ($target_branch) = $gs->full_pushurl =~ /^\Q$rooturl\E(.*)/;
 
if (defined($rewritten_parent)) {
# Replace first parent with newly-rewritten version
@@ -785,6 +787,52 @@ sub populate_merge_info {
if ($all_parents_ok and $aggregate_mergeinfo) {
return $aggregate_mergeinfo;
}
+   } elsif ($#parents == 0) {
+   # cherry-pick merge
+   my ($cherry_branchurl, $cherry_svnrev, $cherry_paruuid) =
+   cmt_metadata($d);
+
+   if (defined $cherry_branchurl && defined $cherry_svnrev &&
defined $cherry_paruuid)
+   {
+   if (defined($rewritten_parent)) {
+   # Replace first parent with newly-rewritten 
version
+   shift @parents;
+   unshift @parents, $rewritten_parent;
+   }
+
+   my $aggregate_mergeinfo = '';
+
+   # parent mergeinfo
+   my ($branchurl, $svnrev, $paruuid) =
+   cmt_metadata($parents[0]);
+
+   my $ra = Git::SVN::Ra->new($branchurl);
+   my (undef, undef, $props) =
+   $ra->get_dir(canonicalize_path("."), $svnrev);
+   my $parent_mergeinfo = $props->{'svn:mergeinfo'};
+   unless (defined $parent_mergeinfo) {
+   $parent_mergeinfo = '';
+   }
+
+   $aggregate_mergeinfo = merge_merge_info($aggregate_mergeinfo,
+   $parent_mergeinfo,
+   $target_branch);
+
+   # cherry-pick mergeinfo
+   unless ($cherry_branchurl =~ /^\Q$rooturl\E(.*)/) {
+   fatal "commit $d git-svn metadata changed mid-run!";
+   }
+   my $cherry_branchpath = $1;
+
+   my $cherry_pick_mergeinfo = canonicalize_path($cherry_branchpath)
+   . ":$cherry_svnrev";
+
+   $aggregate_mergeinfo = merge_merge_info($aggregate_mergeinfo,
+   $cherry_pick_mergeinfo,
+   $target_branch);
+
+   return $aggregate_mergeinfo;
+   }
}
 
return undef;
diff --git a/t/t9161-git-svn-mergeinfo-push.sh 
b/t/t9161-git-svn-mergeinfo-push.sh
index 1eab701..f348392 100755
--- a/t/t9161-git-svn-mergeinfo-push.sh
+++ b/t/t9161-git-svn-mergeinfo-push.sh
@@ -91,6 +91,36 @@ test_expect_success 'check reintegration mergeinfo' '
 /branches/svnb5:6,11"
'
 
+test_expect_success 'make further commits to branch' '
+   git checkout svnb2 &&
+   touch newb2file-3 &&
+   git add newb2file-3 &&
+   git commit -m "later b2 commit 3" &&
+   touch newb2file-4 &&
+   git add newb2file-4 &&
+   git commit -m "later b2 commit 4" &&
+   touch newb2file-5 &&
+   git add newb2file-5 &&
+   git commit -m "later b2 commit 5" &&
+   git svn dcommit
+   '
+
+test_expect_success 'cherry-pick merge' '
+   git checkout svnb1 &&
+   git cherry-pick svnb2 &&
+   git cherry-pick svnb2^ &&
+   git cherry-pick svnb2^^ &&
+   git svn dcommit
+   '
+
+test_expect_success 'check cherry-pick mergeinfo' '
+   mergeinfo=$(svn_cmd propget svn:mergeinfo "$svnrepo/branches/svnb1")
+   test "$mergeinfo" = "/branches/svnb2:3,8,16-17,20-22
+/branches/svnb3:4,9
+/branches/svnb4:5-6,10-12
+/branches/svnb5:6,11"
+   '
+
 test_expect_success 'dcommit a merge at the top of a stack' '
 git checkout svnb1 &&
 touch anotherfile &&
-- 
1.8.5.rc3.5.g96ccada


[RFC 3/3] git-svn: Add config to control the path of mergeinfo

2013-11-28 Thread Andrew Wong
Instead of always storing mergeinfo at the root, give an option to store the
mergeinfo in a subdirectory. The subdirectory must exist before we try to set
its property.
---
 git-svn.perl  | 21 +++--
 perl/Git/SVN/Editor.pm|  5 -
 t/t9161-git-svn-mergeinfo-push.sh | 37 +
 3 files changed, 56 insertions(+), 7 deletions(-)

diff --git a/git-svn.perl b/git-svn.perl
index b04cac7..bfae579 100755
--- a/git-svn.perl
+++ b/git-svn.perl
@@ -693,7 +693,7 @@ sub merge_merge_info {
 }
 
 sub populate_merge_info {
-   my ($d, $gs, $uuid, $linear_refs, $rewritten_parent) = @_;
+   my ($d, $gs, $uuid, $linear_refs, $rewritten_parent, $merge_info_path) = @_;
 
my %parentshash;
read_commit_parents(\%parentshash, $d);
@@ -729,7 +729,7 @@ sub populate_merge_info {
 
 my $ra = Git::SVN::Ra->new($branchurl);
my (undef, undef, $props) =
-   $ra->get_dir(canonicalize_path("."), $svnrev);
+   $ra->get_dir(canonicalize_path($merge_info_path), $svnrev);
 my $par_mergeinfo = $props->{'svn:mergeinfo'};
unless (defined $par_mergeinfo) {
$par_mergeinfo = '';
@@ -778,7 +778,8 @@ sub populate_merge_info {
# We now have a list of all SVN revnos which are
# merged by this particular parent. Integrate them.
next if $#revsin == -1;
-   my $newmergeinfo = "$branchpath:" . join(',', @revsin);
+   my $newmergeinfo = canonicalize_path("$branchpath/$merge_info_path")
+   . ":" . join(',', @revsin);
$aggregate_mergeinfo =
merge_merge_info($aggregate_mergeinfo,
$newmergeinfo,
@@ -808,7 +809,7 @@ sub populate_merge_info {
 
 my $ra = Git::SVN::Ra->new($branchurl);
 my (undef, undef, $props) =
-   $ra->get_dir(canonicalize_path("."), $svnrev);
+   $ra->get_dir(canonicalize_path($merge_info_path), $svnrev);
 my $parent_mergeinfo = $props->{'svn:mergeinfo'};
unless (defined $parent_mergeinfo) {
$parent_mergeinfo = '';
@@ -824,7 +825,7 @@ sub populate_merge_info {
}
my $cherry_branchpath = $1;
 
-   my $cherry_pick_mergeinfo = canonicalize_path($cherry_branchpath)
+   my $cherry_pick_mergeinfo = canonicalize_path("$cherry_branchpath/$merge_info_path")
 . ":$cherry_svnrev";
 
$aggregate_mergeinfo = 
merge_merge_info($aggregate_mergeinfo,
@@ -1008,6 +1009,12 @@ sub cmd_dcommit {
if (defined($_merge_info)) {
$_merge_info =~ tr{ }{\n};
}
+   my $merge_info_path = eval {
+   command_oneline(qw/config --get svn.mergeinfopath/)
+   };
+   if (not defined($merge_info_path)) {
+   $merge_info_path = "";
+   }
while (1) {
my $d = shift @$linear_refs or last;
unless (defined $last_rev) {
@@ -1030,7 +1037,8 @@ sub cmd_dcommit {
$rev_merge_info = populate_merge_info($d, $gs,
 $uuid,
 $linear_refs,
-$rewritten_parent);
+$rewritten_parent,
+$merge_info_path);
}
 
 my %ed_opts = ( r => $last_rev,
@@ -1046,6 +1054,7 @@ sub cmd_dcommit {
   $cmt_rev = $_[0];
},
   mergeinfo => $rev_merge_info,
+   mergeinfopath => $merge_info_path,
   svn_path => '');
 
my $err_handler = $SVN::Error::handler;
diff --git a/perl/Git/SVN/Editor.pm b/perl/Git/SVN/Editor.pm
index b3bcd47..dcbb8a0 100644
--- a/perl/Git/SVN/Editor.pm
+++ b/perl/Git/SVN/Editor.pm
@@ -42,6 +42,7 @@ sub new {
   "$self->{svn_path}/" : '';
 $self->{config} = $opts->{config};
 $self->{mergeinfo} = $opts->{mergeinfo};
+   $self->{mergeinfopath} = $opts->{mergeinfopath};
return $self;
 }
 
@@ -484,7 +485,9 @@ sub apply_diff {
}
 
 if (defined($self->{mergeinfo})) {
-   

[RFC 0/3] git-svn: Add support for cherry-pick merges

2013-11-28 Thread Andrew Wong
This is a work-in-progress for adding support for cherry-pick merges.

When using git-svn, cherry-picked commits from another git-svn branch are
currently treated as simple commits. But in SVN, if the user uses svn merge -c
to cherry-pick commits from another SVN branch, SVN records that information in
svn:mergeinfo. These patches will enable git-svn to do the same.
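
For reference, this is what the SVN side records (illustrative paths and
revision numbers):

  svn merge -c 1234 ^/branches/feature .
  svn commit -m "cherry-pick r1234 from feature"
  svn propget svn:mergeinfo .
  # /branches/feature:1234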

Andrew Wong (3):
  git-svn: Generate mergeinfo for every commit
  git-svn: Support cherry-pick merges
  git-svn: Add config to control the path of mergeinfo

 git-svn.perl  | 79 ++-
 perl/Git/SVN/Editor.pm|  5 ++-
 t/t9161-git-svn-mergeinfo-push.sh | 67 +
 3 files changed, 141 insertions(+), 10 deletions(-)

-- 
1.8.5.rc3.5.g96ccada



[RFC 1/3] git-svn: Generate mergeinfo for every commit

2013-11-28 Thread Andrew Wong
The previous behavior would generate mergeinfo only once, using the first
commit, and use that mergeinfo for all remaining commits. The new behavior
generates it once for every commit.
---
 git-svn.perl | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/git-svn.perl b/git-svn.perl
index 7349ffe..9ddeaf4 100755
--- a/git-svn.perl
+++ b/git-svn.perl
@@ -974,8 +974,12 @@ sub cmd_dcommit {
} else {
my $cmt_rev;
 
-   unless (defined($_merge_info) || ! $push_merge_info) {
-   $_merge_info = populate_merge_info($d, $gs,
+   my $rev_merge_info;
+   if (defined($_merge_info)) {
+   $rev_merge_info = $_merge_info;
+   }
+   unless (defined($rev_merge_info) || ! $push_merge_info) {
+   $rev_merge_info = populate_merge_info($d, $gs,
 $uuid,
 $linear_refs,
 $rewritten_parent);
@@ -993,7 +997,7 @@ sub cmd_dcommit {
   print "Committed r$_[0]\n";
   $cmt_rev = $_[0];
},
-   mergeinfo => $_merge_info,
+   mergeinfo => $rev_merge_info,
 svn_path => '');
 
my $err_handler = $SVN::Error::handler;
-- 
1.8.5.rc3.5.g96ccada



Re: [PATCH] subtree: add squash handling for split and push

2013-11-28 Thread Matthew Ogilvie
On Sat, Nov 23, 2013 at 09:18:56PM +0100, Pierre Penninckx wrote:
 The documentation of subtree says that the --squash option can be used
 for the add, merge, split and push subtree commands, but only add and
 merge are implemented.

Clarification: The current documentation (correctly) doesn't
actually claim to support split --squash, but it does erroneously
claim to support push --squash.

 cmd_split() first lets split do its job: finding which commits need to
 be extracted. Now we remember which commit is the parent of the first
 extracted commit. When this step is done, cmd_split() generates a squash
 of the new commits, starting from the aforementioned parent to the last
 extracted commit. This new commit's sha1 is then used for the rest of
 the script.

I've been planning to implement something similar to this patch,
but the semantics I am aiming at are slightly different.

It looks like your patch is basically squashing the new subtree commits
together, throwing out those commits completely, and only keeping
the squashed commit in the split --branch.  

I intend to implement slightly different semantics, where
--squash only affects --rejoin, not the printed commit nor
the split-off --branch.  This is intended to provide a better,
third option for --rejoin'ing a subtree with a lot of history,
while preserving history in the split-off branch:

1. (existing/slow) Don't ever use --rejoin at all?  You can use
   merge --squash to merge in unrelated changes to the
   split-off project, but every split still gets slower
   and slower as each split needs to re-sift through all
   the same history the previous splits have sifted
   through.
   
2. (existing/huge mass of duplicated history) Use split --rejoin
   occasionally.  This pulls in the entire history of the
   subtree branch (since the last --rejoin or non-squash merge,
   or everything if neither has been done), which is difficult
   to ignore when looking at global history of the full project,
   especially if it is many pages of commits.  But subsequent
   splits can stop history traversal at the known-common point,
   and will run MUCH faster.
   
3. (new/better) Use split --rejoin --squash (or some other
   invocation to be defined).  The subtree branch is generated
   exactly like normal, including fine-grained history.  But
   instead of merging the subtree branch directly, --rejoin
   will squash all the changes to that branch, and merge in
   just the squash (referencing the unsquashed split
   branch tip in the commit message, but not the
   parent).  Subsequent splits can run very fast, while the
   --rejoin only generated two commits instead of the 
   potentially thousands of (mostly) duplicates it would pull
   in without the --squash.

I have this third option half-coded already, but I still need
to finish it.

I'm fairly sure I can make this work without new adverse effects,
but if someone sees something I'm missing, let me know.

Does anyone have any suggestions about the UI?  Do we need to also
support Pierre Penninckx's split --squash semantics somehow?  If
so, what command line options would allow for distinguishing the
two cases?

--
Matthew Ogilvie   [mmogilvi_...@miniinfo.net]


Re: How to resume a broken clone?

2013-11-28 Thread Shawn Pearce
On Thu, Nov 28, 2013 at 1:29 AM, Jeff King p...@peff.net wrote:
 On Thu, Nov 28, 2013 at 04:09:18PM +0700, Duy Nguyen wrote:

  Git should better support resumable transfers.
  It does not seem to be doing that job well at the moment.
  Sharing code, managing code, transferring code: isn't that what we
  imagine a VCS to be?

 You're welcome to step up and do it. Off the top of my head there are a few
 options:

  - better integration with git bundles, provide a way to seamlessly
 create/fetch/resume the bundles with git clone and git fetch

We have been thinking about formalizing the /clone.bundle hack used by
repo on Android. If the server has the bundle, add a capability in the
refs advertisement saying it's available, and the clone client can
first fetch $URL/clone.bundle.

For most Git repositories the bundle can be constructed by saving the
bundle reference header into a file, e.g.
$GIT_DIR/objects/pack/pack-$NAME.bh at the same time the pack is
created. The bundle can be served by combining the .bh and .pack
streams onto the network. It is very little additional disk overhead
for the origin server, but allows resumable clone, provided the server
has not done a GC.
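
In other words, something like this sketch (the .bh file is the proposed
saved header, not an existing git format):

  # at pack creation time, save a bundle header next to the pack:
  { echo "# v2 git bundle"; git show-ref --heads --tags; echo; } \
          >objects/pack/pack-$NAME.bh
  # to serve $URL/clone.bundle, stream the two files back to back:
  cat objects/pack/pack-$NAME.bh objects/pack/pack-$NAME.pack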

 I posted patches for this last year. One of the things that I got hung
 up on was that I spooled the bundle to disk, and then cloned from it.
 Which meant that you needed twice the disk space for a moment.

I don't think this is a huge concern. In many cases the checked out
copy of the repository approaches a sizable fraction of the .pack
itself. If you don't have 2x .pack disk available at clone time you
may be in trouble anyway as you try to work with the repository post
clone.

 I wanted
 to teach index-pack to --fix-thin a pack that was already on disk, so
 that we could spool to disk, and then finalize it without making another
 copy.

Don't you need to separate the bundle header from the pack data before
you do this? If the bundle is only used at clone time there is no
--fix-thin step.

 One of the downsides of this approach is that it requires the repo
 provider (or somebody else) to provide the bundle. I think that is
 something that a big site like GitHub would do (and probably push the
 bundles out to a CDN, too, to make getting them faster). But it's not a
 universal solution.

See above, I think you can reasonably do the /clone.bundle
automatically on any HTTP server. Big sites might choose to have
/clone.bundle do a redirect into a caching CDN that fills itself by
going to the application servers to obtain the current data. This is
what we do for Android.

  - stabilize pack order so we can resume downloading a pack

 I think stabilizing in all cases (e.g., including ones where the content
 has changed) is hard, but I wonder if it would be enough to handle the
 easy cases, where nothing has changed. If the server does not use
 multiple threads for delta computation, it should generate the same pack
 from the same on-disk data deterministically. We just need a way for the
 client to indicate that it has the same partial pack.

 I'm thinking that the server would report some opaque hash representing
 the current pack. The client would record that, along with the number of
 pack bytes it received. If the transfer is interrupted, the client comes
 back with the hash/bytes pair. The server starts to generate the pack,
 checks whether the hash matches, and if so, says "here is the same pack,
 resuming at byte X".

An important part of this is that the want set must be identical to the
prior request. It is entirely possible the branch tips have advanced
since the prior packing attempt started.

 What would need to go into such a hash? It would need to represent the
 exact bytes that will go into the pack, but without actually generating
 those bytes. Perhaps a sha1 over the sequence of "object sha1, type,
 base (if applicable), length" for each object would be enough. We should
 know that after calling compute_write_order. If the client has a match,
 we should be able to skip ahead to the correct byte.

I don't think "length" is sufficient.

The repository could have recompressed an object with the same length
but a different libz encoding. I wonder if loose object recompression is
reliable enough about libz encoding to resume in the middle of an
object? Is it just based on the libz version?

You may need to include information about the source of the object,
e.g. the trailing 20-byte hash in the source pack file.


Re: How to resume a broken clone?

2013-11-28 Thread Shawn Pearce
On Thu, Nov 28, 2013 at 1:29 AM, zhifeng hu z...@ancientrocklab.com wrote:
 Once you use git clone --depth or git fetch --depth and then want to
 deepen the history, you may face a problem:

  git fetch --depth=105
 error: Could not read 483bbf41ca5beb7e38b3b01f21149c56a1154b7a
 error: Could not read aacb82de3ff8ae7b0a9e4cfec16c1807b6c315ef
 error: Could not read 5a1758710d06ce9ddef754a8ee79408277032d8b
 error: Could not read a7d5629fe0580bd3e154206388371f5b8fc832db
 error: Could not read 073291c476b4edb4d10bbada1e64b471ba153b6b

We now have a resumable bundle available through our kernel.org
mirror. The bundle is 658M.

  mkdir linux
  cd linux
  git init

  wget https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux/clone.bundle

  sha1sum clone.bundle
  96831de0b8171e5ebba94edb31e37e70e1df  clone.bundle

  git fetch -u ./clone.bundle "refs/*:refs/*"
  git reset --hard

You can also use our mirror as an upstream, as we have servers in Asia
that lag no more than 5 or 6 minutes behind kernel.org:

  git remote add origin https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux/


[PATCH v7 03/10] git_connect: remove artificial limit of a remote command

2013-11-28 Thread Torsten Bögershausen
Since day one, function git_connect() had a limit on the length of the
command line of the command that is invoked to make a connection. 7a33bcbe
converted the code that constructs the command to strbuf. This would have
been the right time to remove the limit, but it did not happen. Remove it now.

git_connect() uses start_command() to invoke the command; consequently,
the limits of the system still apply, but are diagnosed only at execve()
time. But these limits are more lenient than the 1K that git_connect()
imposed.

Signed-off-by: Johannes Sixt j...@kdbg.org
Signed-off-by: Torsten Bögershausen tbo...@web.de
---
 connect.c | 7 +--
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/connect.c b/connect.c
index 06e88b0..6cc1f8d 100644
--- a/connect.c
+++ b/connect.c
@@ -527,8 +527,6 @@ static struct child_process *git_proxy_connect(int fd[2], char *host)
return proxy;
 }
 
-#define MAX_CMD_LEN 1024
-
 static char *get_port(char *host)
 {
char *end;
@@ -570,7 +568,7 @@ struct child_process *git_connect(int fd[2], const char *url_orig,
int free_path = 0;
char *port = NULL;
const char **arg;
-   struct strbuf cmd;
+   struct strbuf cmd = STRBUF_INIT;
 
/* Without this we cannot rely on waitpid() to tell
 * what happened to our children.
@@ -676,12 +674,9 @@ struct child_process *git_connect(int fd[2], const char *url_orig,
 
conn = xcalloc(1, sizeof(*conn));
 
-   strbuf_init(cmd, MAX_CMD_LEN);
strbuf_addstr(cmd, prog);
strbuf_addch(cmd, ' ');
sq_quote_buf(cmd, path);
-   if (cmd.len >= MAX_CMD_LEN)
-   die("command line too long");
 
 conn->in = conn->out = -1;
 conn->argv = arg = xcalloc(7, sizeof(*arg));
-- 
1.8.5.rc0.23.gaa27064




[PATCH v7 02/10] t5601: Add tests for ssh

2013-11-28 Thread Torsten Bögershausen
Add more tests testing all the combinations:
 -IPv4 or IPv6
 -path starting with / or with /~
 -with and without the ssh:// scheme

Some tests fail; they need updates in connect.c.

Signed-off-by: Torsten Bögershausen tbo...@web.de
---
 t/t5601-clone.sh | 100 ++-
 1 file changed, 99 insertions(+), 1 deletion(-)

diff --git a/t/t5601-clone.sh b/t/t5601-clone.sh
index c634f77..ba99972 100755
--- a/t/t5601-clone.sh
+++ b/t/t5601-clone.sh
@@ -313,7 +313,7 @@ expect_ssh () {
 
 setup_ssh_wrapper
 
-test_expect_success 'cloning myhost:src uses ssh' '
+test_expect_success 'clone myhost:src uses ssh' '
 git clone myhost:src ssh-clone &&
expect_ssh myhost src
 '
@@ -329,6 +329,104 @@ test_expect_success 'bracketed hostnames are still ssh' '
 expect_ssh "myhost:123" src
 '
 
+counter=0
+# $1 url
+# $2 none|host
+# $3 path
+test_clone_url () {
+   counter=$(($counter + 1))
+   test_might_fail git clone "$1" tmp$counter &&
+   expect_ssh "$2" "$3"
+}
+
+test_expect_success NOT_MINGW 'clone c:temp is ssh' '
+   test_clone_url c:temp c temp
+'
+
+test_expect_success MINGW 'clone c:temp is dos drive' '
+   test_clone_url c:temp none
+'
+
+#ip v4
+for repo in rep rep/home/project /~proj 123
+do
+   test_expect_success "clone host:$repo" '
+   test_clone_url host:$repo host $repo
+   '
+done
+
+#ipv6
+# failing
+for repo in /~proj
+do
+   test_expect_failure "clone [::1]:$repo" '
+   test_clone_url "[::1]:$repo" ::1 $repo
+   '
+done
+
+for repo in rep rep/home/project 123
+do
+   test_expect_success "clone [::1]:$repo" '
+   test_clone_url "[::1]:$repo" ::1 $repo
+   '
+done
+
+# Corner cases
+# failing
+for url in [foo]bar/baz:qux [foo/bar]:baz
+do
+   test_expect_failure "clone $url is not ssh" '
+   test_clone_url "$url" none
+   '
+done
+
+for url in foo/bar:baz
+do
+   test_expect_success "clone $url is not ssh" '
+   test_clone_url "$url" none
+   '
+done
+
+#with ssh:// scheme
+test_expect_success 'clone ssh://host.xz/home/user/repo' '
+   test_clone_url ssh://host.xz/home/user/repo host.xz /home/user/repo
+'
+
+# from home directory
+test_expect_success 'clone ssh://host.xz/~repo' '
+   test_clone_url ssh://host.xz/~repo host.xz ~repo
+'
+
+# with port number
+test_expect_success 'clone ssh://host.xz:22/home/user/repo' '
+   test_clone_url ssh://host.xz:22/home/user/repo "-p 22 host.xz" /home/user/repo
+'
+
+# from home directory with port number
+test_expect_success 'clone ssh://host.xz:22/~repo' '
+   test_clone_url ssh://host.xz:22/~repo "-p 22 host.xz" ~repo
+'
+
+#IPv6
+test_expect_success 'clone ssh://[::1]/home/user/repo' '
+   test_clone_url ssh://[::1]/home/user/repo ::1 /home/user/repo
+'
+
+#IPv6 from home directory
+test_expect_success 'clone ssh://[::1]/~repo' '
+   test_clone_url ssh://[::1]/~repo ::1 ~repo
+'
+
+#IPv6 with port number
+test_expect_success 'clone ssh://[::1]:22/home/user/repo' '
+   test_clone_url ssh://[::1]:22/home/user/repo "-p 22 ::1" /home/user/repo
+'
+
+#IPv6 from home directory with port number
+test_expect_success 'clone ssh://[::1]:22/~repo' '
+   test_clone_url ssh://[::1]:22/~repo "-p 22 ::1" ~repo
+'
+
 test_expect_success 'clone from a repository with two identical branches' '
 
(
-- 
1.8.5.rc0.23.gaa27064




[PATCH v7 04/10] git_connect: factor out discovery of the protocol and its parts

2013-11-28 Thread Torsten Bögershausen
git_connect has grown large due to the many different protocol syntaxes
that are supported. Move the part of the function that parses the URL to
connect to into a separate function, for readability.

Signed-off-by: Johannes Sixt j...@kdbg.org
Signed-off-by: Torsten Bögershausen tbo...@web.de
---
 connect.c | 80 ++-
 1 file changed, 53 insertions(+), 27 deletions(-)

diff --git a/connect.c b/connect.c
index 6cc1f8d..a6cf345 100644
--- a/connect.c
+++ b/connect.c
@@ -543,37 +543,20 @@ static char *get_port(char *host)
return NULL;
 }
 
-static struct child_process no_fork;
-
 /*
- * This returns a dummy child_process if the transport protocol does not
- * need fork(2), or a struct child_process object if it does.  Once done,
- * finish the connection with finish_connect() with the value returned from
- * this function (it is safe to call finish_connect() with NULL to support
- * the former case).
- *
- * If it returns, the connect is successful; it just dies on errors (this
- * will hopefully be changed in a libification effort, to return NULL when
- * the connection failed).
+ * Extract protocol and relevant parts from the specified connection URL.
+ * The caller must free() the returned strings.
  */
-struct child_process *git_connect(int fd[2], const char *url_orig,
- const char *prog, int flags)
+static enum protocol parse_connect_url(const char *url_orig, char **ret_host,
+  char **ret_port, char **ret_path)
 {
char *url;
char *host, *path;
char *end;
int c;
-   struct child_process *conn = no_fork;
enum protocol protocol = PROTO_LOCAL;
int free_path = 0;
char *port = NULL;
-   const char **arg;
-   struct strbuf cmd = STRBUF_INIT;
-
-   /* Without this we cannot rely on waitpid() to tell
-* what happened to our children.
-*/
-   signal(SIGCHLD, SIG_DFL);
 
if (is_url(url_orig))
url = url_decode(url_orig);
@@ -645,6 +628,49 @@ struct child_process *git_connect(int fd[2], const char *url_orig,
 if (protocol == PROTO_SSH && host != url)
port = get_port(end);
 
+   *ret_host = xstrdup(host);
+   if (port)
+   *ret_port = xstrdup(port);
+   else
+   *ret_port = NULL;
+   if (free_path)
+   *ret_path = path;
+   else
+   *ret_path = xstrdup(path);
+   free(url);
+   return protocol;
+}
+
+static struct child_process no_fork;
+
+/*
+ * This returns a dummy child_process if the transport protocol does not
+ * need fork(2), or a struct child_process object if it does.  Once done,
+ * finish the connection with finish_connect() with the value returned from
+ * this function (it is safe to call finish_connect() with NULL to support
+ * the former case).
+ *
+ * If it returns, the connect is successful; it just dies on errors (this
+ * will hopefully be changed in a libification effort, to return NULL when
+ * the connection failed).
+ */
+struct child_process *git_connect(int fd[2], const char *url,
+ const char *prog, int flags)
+{
+   char *host, *path;
+   struct child_process *conn = &no_fork;
+   enum protocol protocol;
+   char *port;
+   const char **arg;
+   struct strbuf cmd = STRBUF_INIT;
+
+   /* Without this we cannot rely on waitpid() to tell
+* what happened to our children.
+*/
+   signal(SIGCHLD, SIG_DFL);
+
+   protocol = parse_connect_url(url, &host, &port, &path);
+
if (protocol == PROTO_GIT) {
/* These underlying connection commands die() if they
 * cannot connect.
@@ -666,9 +692,9 @@ struct child_process *git_connect(int fd[2], const char *url_orig,
 prog, path, 0,
 target_host, 0);
free(target_host);
-   free(url);
-   if (free_path)
-   free(path);
+   free(host);
+   free(port);
+   free(path);
return conn;
}
 
@@ -709,9 +735,9 @@ struct child_process *git_connect(int fd[2], const char *url_orig,
fd[0] = conn->out; /* read from child's stdout */
fd[1] = conn->in;  /* write to child's stdin */
strbuf_release(&cmd);
-   free(url);
-   if (free_path)
-   free(path);
+   free(host);
+   free(port);
+   free(path);
return conn;
 }
 
-- 
1.8.5.rc0.23.gaa27064


--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v7 05/10] git fetch-pack: Add --diag-url

2013-11-28 Thread Torsten Bögershausen
The main purpose is to trace the URL parser called by git_connect() in
connect.c

The main features of the parser can be listed as follows:
- parse out host and path for URLs with a scheme (git:// file:// ssh://)
- parse host names embedded by [] correctly
- extract the port number, if present
- separate URLs like "file" (which are local)
  from URLs like "host:repo" which should use ssh

Add the new parameter --diag-url to git fetch-pack,
which prints the value for protocol, host and path to stdout and exits.
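
For illustration, a session of the following shape would be expected
(host, port and path here are made-up placeholders):

    $ git fetch-pack --diag-url ssh://example.com:2222/path/repo
    Diag: url=ssh://example.com:2222/path/repo
    Diag: protocol=ssh
    Diag: hostandport=example.com:2222
    Diag: path=/path/repo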

Signed-off-by: Torsten Bögershausen tbo...@web.de
---
 builtin/fetch-pack.c | 14 +++---
 connect.c| 28 
 connect.h|  1 +
 fetch-pack.h |  1 +
 4 files changed, 41 insertions(+), 3 deletions(-)

diff --git a/builtin/fetch-pack.c b/builtin/fetch-pack.c
index c8e8582..758b5ac 100644
--- a/builtin/fetch-pack.c
+++ b/builtin/fetch-pack.c
@@ -7,7 +7,7 @@
 static const char fetch_pack_usage[] =
 "git fetch-pack [--all] [--stdin] [--quiet|-q] [--keep|-k] [--thin] "
 "[--include-tag] [--upload-pack=<git-upload-pack>] [--depth=<n>] "
-"[--no-progress] [-v] [<host>:]<directory> [<refs>...]";
+"[--no-progress] [--diag-url] [-v] [<host>:]<directory> [<refs>...]";
 
 static void add_sought_entry_mem(struct ref ***sought, int *nr, int *alloc,
 const char *name, int namelen)
@@ -81,6 +81,10 @@ int cmd_fetch_pack(int argc, const char **argv, const char *prefix)
args.stdin_refs = 1;
continue;
}
+   if (!strcmp("--diag-url", arg)) {
+   args.diag_url = 1;
+   continue;
+   }
if (!strcmp("-v", arg)) {
args.verbose = 1;
continue;
@@ -146,10 +150,14 @@ int cmd_fetch_pack(int argc, const char **argv, const char *prefix)
fd[0] = 0;
fd[1] = 1;
} else {
+   int flags = args.verbose ? CONNECT_VERBOSE : 0;
+   if (args.diag_url)
+   flags |= CONNECT_DIAG_URL;
conn = git_connect(fd, dest, args.uploadpack,
-  args.verbose ? CONNECT_VERBOSE : 0);
+  flags);
+   if (!conn)
+   return args.diag_url ? 0 : 1;
}
-
get_remote_heads(fd[0], NULL, 0, &ref, 0, NULL);
 
ref = fetch_pack(&args, fd, conn, ref, dest,
diff --git a/connect.c b/connect.c
index a6cf345..a16bdaf 100644
--- a/connect.c
+++ b/connect.c
@@ -236,6 +236,20 @@ enum protocol {
PROTO_GIT
 };
 
+static const char *prot_name(enum protocol protocol)
+{
+   switch (protocol) {
+   case PROTO_LOCAL:
+   return "file";
+   case PROTO_SSH:
+   return "ssh";
+   case PROTO_GIT:
+   return "git";
+   default:
+   return "unknown protocol";
+   }
+}
+
 static enum protocol get_protocol(const char *name)
 {
if (!strcmp(name, "ssh"))
@@ -670,6 +684,20 @@ struct child_process *git_connect(int fd[2], const char *url,
signal(SIGCHLD, SIG_DFL);
 
protocol = parse_connect_url(url, &host, &port, &path);
+   if (flags & CONNECT_DIAG_URL) {
+   printf("Diag: url=%s\n", url ? url : "NULL");
+   printf("Diag: protocol=%s\n", prot_name(protocol));
+   printf("Diag: hostandport=%s", host ? host : "NULL");
+   if (port)
+   printf(":%s\n", port);
+   else
+   printf("\n");
+   printf("Diag: path=%s\n", path ? path : "NULL");
+   free(host);
+   free(port);
+   free(path);
+   return NULL;
+   }
 
if (protocol == PROTO_GIT) {
/* These underlying connection commands die() if they
diff --git a/connect.h b/connect.h
index 64fb7db..527d58a 100644
--- a/connect.h
+++ b/connect.h
@@ -2,6 +2,7 @@
 #define CONNECT_H
 
 #define CONNECT_VERBOSE   (1u << 0)
+#define CONNECT_DIAG_URL  (1u << 1)
 extern struct child_process *git_connect(int fd[2], const char *url, const char *prog, int flags);
 extern int finish_connect(struct child_process *conn);
 extern int git_connection_is_socket(struct child_process *conn);
diff --git a/fetch-pack.h b/fetch-pack.h
index 461cbf3..20ccc12 100644
--- a/fetch-pack.h
+++ b/fetch-pack.h
@@ -14,6 +14,7 @@ struct fetch_pack_args {
use_thin_pack:1,
fetch_all:1,
stdin_refs:1,
+   diag_url:1,
verbose:1,
no_progress:1,
include_tag:1,
-- 
1.8.5.rc0.23.gaa27064


--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v7 07/10] git fetch: support host:/~repo

2013-11-28 Thread Torsten Bögershausen
The documentation (in urls.txt) says that
ssh://host:/~repo,
host:/~repo or
host:~repo
specify the repository repo in the home directory at host.

This has not been working for host:/~repo.
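
As a quick sketch (host.xz and repo are placeholder names), all three
spellings should end up parsing to the same destination:

    git fetch-pack --diag-url ssh://host.xz/~repo
    git fetch-pack --diag-url host.xz:/~repo
    git fetch-pack --diag-url host.xz:~repo

each reporting protocol=ssh, hostandport=host.xz and path=~repo.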

Fix a minor regression:
Before commit 356bec "Support [address] in URLs"
the comparison "url != hostname" could be used to determine if
the URL had a scheme or not: for "ssh://host" the parsed hostname
points past the scheme, so url != hostname, while for a plain "host"
url == hostname.
After 356bec "[::1]" was converted into "::1", yielding
url != hostname as well.
Solution:
Don't use "if (url != hostname)", but look at the separator instead.
Rename the variable "c" into "separator".

Signed-off-by: Torsten Bögershausen tbo...@web.de
---
 connect.c | 14 +++---
 t/t5500-fetch-pack.sh | 24 
 t/t5601-clone.sh  | 12 ++--
 3 files changed, 33 insertions(+), 17 deletions(-)

diff --git a/connect.c b/connect.c
index a16bdaf..7e5f608 100644
--- a/connect.c
+++ b/connect.c
@@ -567,7 +567,7 @@ static enum protocol parse_connect_url(const char *url_orig, char **ret_host,
char *url;
char *host, *path;
char *end;
-   int c;
+   int separator;
enum protocol protocol = PROTO_LOCAL;
int free_path = 0;
char *port = NULL;
@@ -582,10 +582,10 @@ static enum protocol parse_connect_url(const char *url_orig, char **ret_host,
*host = '\0';
protocol = get_protocol(url);
host += 3;
-   c = '/';
+   separator = '/';
} else {
host = url;
-   c = ':';
+   separator = ':';
}
 
/*
@@ -605,9 +605,9 @@ static enum protocol parse_connect_url(const char *url_orig, char **ret_host,
} else
end = host;
 
-   path = strchr(end, c);
+   path = strchr(end, separator);
if (path && !has_dos_drive_prefix(end)) {
-   if (c == ':') {
+   if (separator == ':') {
if (host != url || path < strchrnul(host, '/')) {
protocol = PROTO_SSH;
*path++ = '\0';
@@ -624,7 +624,7 @@ static enum protocol parse_connect_url(const char *url_orig, char **ret_host,
 * null-terminate hostname and point path to ~ for URL's like this:
 *ssh://host.xz/~user/repo
 */
-   if (protocol != PROTO_LOCAL  host != url) {
+   if (protocol != PROTO_LOCAL) {
char *ptr = path;
if (path[1] == '~')
path++;
@@ -639,7 +639,7 @@ static enum protocol parse_connect_url(const char *url_orig, char **ret_host,
/*
 * Add support for ssh port: ssh://host.xy:port/...
 */
+   if (protocol == PROTO_SSH && separator == '/')
+   if (protocol == PROTO_SSH  separator == '/')
port = get_port(end);
 
*ret_host = xstrdup(host);
diff --git a/t/t5500-fetch-pack.sh b/t/t5500-fetch-pack.sh
index a2b37af..2d3cdaa 100755
--- a/t/t5500-fetch-pack.sh
+++ b/t/t5500-fetch-pack.sh
@@ -589,6 +589,30 @@ do
check_prot_path $p://$h/~$r $p /~$r
'
done
+   # file without scheme
+   for h in nohost nohost:12 [::1] [::1]:23 [ [:aa
+   do
+   test_expect_success "fetch-pack --diag-url ./$h:$r" '
+   check_prot_path ./$h:$r $p ./$h:$r
+   '
+   # No /~ -> ~ conversion for file
+   test_expect_success "fetch-pack --diag-url ./$p:$h/~$r" '
+   check_prot_path ./$p:$h/~$r $p ./$p:$h/~$r
+   '
+   done
+   #ssh without scheme
+   p=ssh
+   for h in host [::1]
+   do
+   hh=$(echo $h | tr -d "[]")
+   test_expect_success "fetch-pack --diag-url $h:$r" '
+   check_prot_path $h:$r $p $r
+   '
+   # Do /~ -> ~ conversion
+   test_expect_success "fetch-pack --diag-url $h:/~$r" '
+   check_prot_host_path $h:/~$r $p $hh "~$r"
+   '
+   done
 done
 
 test_done
diff --git a/t/t5601-clone.sh b/t/t5601-clone.sh
index ba99972..4db0c0b 100755
--- a/t/t5601-clone.sh
+++ b/t/t5601-clone.sh
@@ -348,7 +348,7 @@ test_expect_success MINGW 'clone c:temp is dos drive' '
 '
 
 #ip v4
-for repo in rep rep/home/project /~proj 123
+for repo in rep rep/home/project 123
 do
test_expect_success "clone host:$repo" '
test_clone_url host:$repo host $repo
@@ -356,14 +356,6 @@ do
 done
 
 #ipv6
-# failing
-for repo in /~proj
-do
-   test_expect_failure "clone [::1]:$repo" '
-   test_clone_url [::1]:$repo ::1 $repo
-   '
-done
-
 for repo in rep rep/home/project 123
 do
test_expect_success "clone [::1]:$repo" '
@@ -373,7 +365,7 @@ done
 
 # Corner cases
 # failing
-for repo in [foo]bar/baz:qux [foo/bar]:baz
+for url in [foo]bar/baz:qux [foo/bar]:baz
 do
test_expect_failure "clone $url is not ssh" '
test_clone_url $url 

[PATCH v7 06/10] t5500: Test case for diag-url

2013-11-28 Thread Torsten Bögershausen
Add test cases using git fetch-pack --diag-url:

- parse out host and path for URLs with a scheme (git:// file:// ssh://)
- parse host names embedded by [] correctly
- extract the port number, if present
- separate URLs like "file" (which are local)
  from URLs like "host:repo" which should use ssh
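
(To try the new cases locally, one would typically run the script from
the t/ directory of a git checkout, e.g.:

    sh ./t5500-fetch-pack.sh -v

where -v prints each test's output.)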

Signed-off-by: Torsten Bögershausen tbo...@web.de
---
 t/t5500-fetch-pack.sh | 59 +++
 1 file changed, 59 insertions(+)

diff --git a/t/t5500-fetch-pack.sh b/t/t5500-fetch-pack.sh
index d87ddf7..a2b37af 100755
--- a/t/t5500-fetch-pack.sh
+++ b/t/t5500-fetch-pack.sh
@@ -531,5 +531,64 @@ test_expect_success 'shallow fetch with tags does not break the repository' '
git fsck
)
 '
+check_prot_path() {
+   cat >expected <<-EOF &&
+   Diag: url=$1
+   Diag: protocol=$2
+   Diag: path=$3
+   EOF
+   git fetch-pack --diag-url "$1" | grep -v "hostandport=" >actual &&
+   test_cmp expected actual
+}
+
+check_prot_host_path() {
+   cat >expected <<-EOF &&
+   Diag: url=$1
+   Diag: protocol=$2
+   Diag: hostandport=$3
+   Diag: path=$4
+   EOF
+   git fetch-pack --diag-url "$1" >actual &&
+   test_cmp expected actual
+}
+
+for r in repo re:po re/po
+do
+   # git or ssh with scheme
+   for p in ssh+git git+ssh git ssh
+   do
+   for h in host host:12 [::1] [::1]:23
+   do
+   case $p in
+   *ssh*)
+   hh=$(echo $h | tr -d "[]")
+   pp=ssh
+   ;;
+   *)
+   hh=$h
+   pp=$p
+   ;;
+   esac
+   test_expect_success "fetch-pack --diag-url $p://$h/$r" '
+   check_prot_host_path $p://$h/$r $pp $hh /$r
+   '
+   # /~ -> ~ conversion
+   test_expect_success "fetch-pack --diag-url $p://$h/~$r" '
+   check_prot_host_path $p://$h/~$r $pp $hh "~$r"
+   '
+   done
+   done
+   # file with scheme
+   for p in file
+   do
+   test_expect_success "fetch-pack --diag-url $p://$h/$r" '
+   check_prot_path $p://$h/$r $p /$r
+   '
+   # No /~ -> ~ conversion for file
+   test_expect_success "fetch-pack --diag-url $p://$h/~$r" '
+   check_prot_path $p://$h/~$r $p /~$r
+   '
+   done
+done
 
 test_done
-- 
1.8.5.rc0.23.gaa27064


--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v7 09/10] connect.c: Refactor url parsing

2013-11-28 Thread Torsten Bögershausen
Make the function is_local() from transport.c public, rename it to
url_is_local_not_ssh() and use it in both transport.c and connect.c.

Use a protocol "local" for URLs for the local file system.

One note about using file:// under Windows:
The (absolute) path on a Unix-like system typically starts with /.
When the host is empty, it can be omitted, so that a shell scriptlet
url=file://$(pwd)
will give a URL like file:///home/user/repo.

Windows does not have the same concept of a root directory located in /.
When parsing the URL, allow file://C:/user/repo
(even if RFC1738 indicates that file:///C:/user/repo should be used).
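
As a sketch of the accepted forms after this change (the paths are
placeholders):

    git fetch-pack --diag-url file:///home/user/repo
    # expected: protocol=file, path=/home/user/repo

    git fetch-pack --diag-url file://C:/user/repo
    # expected on Windows: protocol=file, path=C:/user/repo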

Signed-off-by: Torsten Bögershausen tbo...@web.de
---
 connect.c | 57 +++
 connect.h |  1 +
 t/t5500-fetch-pack.sh |  7 +++
 t/t5601-clone.sh  |  8 
 transport.c   | 12 ++-
 5 files changed, 48 insertions(+), 37 deletions(-)

diff --git a/connect.c b/connect.c
index 7874530..04093c4 100644
--- a/connect.c
+++ b/connect.c
@@ -232,14 +232,24 @@ int server_supports(const char *feature)
 
 enum protocol {
PROTO_LOCAL = 1,
+   PROTO_FILE,
PROTO_SSH,
PROTO_GIT
 };
 
+int url_is_local_not_ssh(const char *url)
+{
+   const char *colon = strchr(url, ':');
+   const char *slash = strchr(url, '/');
+   return !colon || (slash && slash < colon) ||
+   has_dos_drive_prefix(url);
+}
+
 static const char *prot_name(enum protocol protocol)
 {
switch (protocol) {
case PROTO_LOCAL:
+   case PROTO_FILE:
return "file";
case PROTO_SSH:
return "ssh";
@@ -261,7 +271,7 @@ static enum protocol get_protocol(const char *name)
if (!strcmp(name, "ssh+git"))
return PROTO_SSH;
if (!strcmp(name, "file"))
-   return PROTO_LOCAL;
+   return PROTO_FILE;
die("I don't handle protocol '%s'", name);
 }
 
@@ -564,9 +574,8 @@ static enum protocol parse_connect_url(const char *url_orig, char **ret_host,
char *url;
char *host, *path;
char *end;
-   int separator;
+   int separator = '/';
enum protocol protocol = PROTO_LOCAL;
-   int free_path = 0;
 
if (is_url(url_orig))
url = url_decode(url_orig);
@@ -578,10 +587,12 @@ static enum protocol parse_connect_url(const char *url_orig, char **ret_host,
*host = '\0';
protocol = get_protocol(url);
host += 3;
-   separator = '/';
} else {
host = url;
-   separator = ':';
+   if (!url_is_local_not_ssh(url)) {
+   protocol = PROTO_SSH;
+   separator = ':';
+   }
}
 
/*
@@ -597,17 +608,12 @@ static enum protocol parse_connect_url(const char *url_orig, char **ret_host,
} else
end = host;
 
-   path = strchr(end, separator);
-   if (path && !has_dos_drive_prefix(end)) {
-   if (separator == ':') {
-   if (host != url || path < strchrnul(host, '/')) {
-   protocol = PROTO_SSH;
-   *path++ = '\0';
-   } else /* '/' in the host part, assume local path */
-   path = end;
-   }
-   } else
+   if (protocol == PROTO_LOCAL)
path = end;
+   else if (protocol == PROTO_FILE && has_dos_drive_prefix(end))
+   path = end; /* file://$(pwd) may be file://C:/projects/repo */
+   else
+   path = strchr(end, separator);
 
if (!path || !*path)
die("No path specified. See 'man git-pull' for valid url syntax");
@@ -616,23 +622,20 @@ static enum protocol parse_connect_url(const char *url_orig, char **ret_host,
 * null-terminate hostname and point path to ~ for URL's like this:
 *ssh://host.xz/~user/repo
 */
-   if (protocol != PROTO_LOCAL) {
-   char *ptr = path;
+
+   end = path; /* Need to \0 terminate host here */
+   if (separator == ':')
+   path++; /* path starts after ':' */
+   if (protocol == PROTO_GIT || protocol == PROTO_SSH) {
if (path[1] == '~')
path++;
-   else {
-   path = xstrdup(ptr);
-   free_path = 1;
-   }
-
-   *ptr = '\0';
}
 
+   path = xstrdup(path);
+   *end = '\0';
+
*ret_host = xstrdup(host);
-   if (free_path)
-   *ret_path = path;
-   else
-   *ret_path = xstrdup(path);
+   *ret_path = path;
free(url);
return protocol;
 }
diff --git a/connect.h b/connect.h
index 527d58a..c41a685 100644
--- a/connect.h
+++ b/connect.h
@@ -9,5 +9,6 @@ extern 

[PATCH v7 01/10] t5601: remove clear_ssh, refactor setup_ssh_wrapper

2013-11-28 Thread Torsten Bögershausen
Commit 8d3d28f5 added test cases for URLs which should be ssh.
Remove the function clear_ssh, use test_when_finished to clean up.

Introduce the function setup_ssh_wrapper, which could be factored
out together with expect_ssh.

Tighten one test and use foo:bar instead of ./foo:bar.
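
For readers unfamiliar with test_when_finished: it queues cleanup code
that runs when the surrounding test case ends, e.g. (a made-up test):

    test_expect_success 'example cleanup' '
        test_when_finished "rm -f tmpfile" &&
        echo data >tmpfile &&
        test -s tmpfile
    '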

Helped-by: Jeff King p...@peff.net
Signed-off-by: Torsten Bögershausen tbo...@web.de
---

Comments to V6:
Code from Johannes Sixt is part of the series, original found here:
 http://permalink.gmane.org/gmane.comp.version-control.git/237339
 http://permalink.gmane.org/gmane.comp.version-control.git/237338

Changes since V6:
- git fetch-pack --diag-url uses stdout instead of stderr
- cleanup in the test scripts
- Removed [PATCH v6 07/10] connect.c: Corner case for IPv6
- Added missing sign-off
- Try to explain better why Windows supports file://C:/repo
  (Actually we should support file:///C:/repo, but we don't)
- Other remarks from code review; did I miss any?

I'm not sure about 10/10: 2 cleanups for which I didn't manage
to find a better place.
However, I want to concentrate on 1..9, so that 10/10 can be dropped.

And a question:
Can we replace tb/clone-ssh-with-colon-for-port with this stuff?
If we are OK with part 1..4, I don't need to send them again.


 t/t5601-clone.sh | 40 
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/t/t5601-clone.sh b/t/t5601-clone.sh
index 1d1c875..c634f77 100755
--- a/t/t5601-clone.sh
+++ b/t/t5601-clone.sh
@@ -280,25 +280,26 @@ test_expect_success 'clone checking out a tag' '
test_cmp fetch.expected fetch.actual
 '
 
-test_expect_success 'setup ssh wrapper' '
-   write_script "$TRASH_DIRECTORY/ssh-wrapper" <<-\EOF &&
-   echo >>"$TRASH_DIRECTORY/ssh-output" "ssh: $*" &&
-   # throw away all but the last argument, which should be the
-   # command
-   while test $# -gt 1; do shift; done
-   eval "$1"
-   EOF
-
-   GIT_SSH="$TRASH_DIRECTORY/ssh-wrapper" &&
-   export GIT_SSH &&
-   export TRASH_DIRECTORY
-'
-
-clear_ssh () {
-   >"$TRASH_DIRECTORY/ssh-output"
+setup_ssh_wrapper () {
+   test_expect_success 'setup ssh wrapper' '
+   write_script "$TRASH_DIRECTORY/ssh-wrapper" <<-\EOF &&
+   echo >>"$TRASH_DIRECTORY/ssh-output" "ssh: $*" &&
+   # throw away all but the last argument, which should be the
+   # command
+   while test $# -gt 1; do shift; done
+   eval "$1"
+   EOF
+   GIT_SSH="$TRASH_DIRECTORY/ssh-wrapper" &&
+   export GIT_SSH &&
+   export TRASH_DIRECTORY &&
+   >"$TRASH_DIRECTORY/ssh-output"
+   '
 }
 
 expect_ssh () {
+   test_when_finished '
+   (cd "$TRASH_DIRECTORY" && rm -f ssh-expect && >ssh-output)
+   ' &&
{
case $1 in
none)
@@ -310,21 +311,20 @@ expect_ssh () {
(cd "$TRASH_DIRECTORY" && test_cmp ssh-expect ssh-output)
 }
 
+setup_ssh_wrapper
+
 test_expect_success 'cloning myhost:src uses ssh' '
-   clear_ssh &&
git clone myhost:src ssh-clone &&
expect_ssh myhost src
 '
 
 test_expect_success NOT_MINGW,NOT_CYGWIN 'clone local path foo:bar' '
-   clear_ssh &&
cp -R src "foo:bar" &&
-   git clone "./foo:bar" foobar &&
+   git clone "foo:bar" foobar &&
expect_ssh none
 '
 
 test_expect_success 'bracketed hostnames are still ssh' '
-   clear_ssh &&
git clone "[myhost:123]:src" ssh-bracket-clone &&
expect_ssh myhost:123 src
 '
-- 
1.8.5.rc0.23.gaa27064


--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v2] gitk: make pointer selection visible in highlighted lines

2013-11-28 Thread Max Kirillov
Custom tags have higher priority than `sel`, and when they define their
own background, it makes selection invisible. Especially inconvenient
for `filesep` (to select filenames), but also affects other tags.
Use `tag raise` to fix `sel`'s priority.

Also change `omark` tag handling, so that it is created once, together
with others, and then only removed from text rather than deleted. Then
it will not get higher priority than the `sel` tag.

Signed-off-by: Max Kirillov m...@max630.net
---

Fixed the typo in the comment and the selection of text in the marked line

 gitk | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/gitk b/gitk
index 1dd5137..491a1fa 100755
--- a/gitk
+++ b/gitk
@@ -2029,6 +2029,7 @@ proc makewindow {} {
 global headctxmenu progresscanv progressitem progresscoords statusw
 global fprogitem fprogcoord lastprogupdate progupdatepending
 global rprogitem rprogcoord rownumsel numcommits
+global markbgcolor
 global have_tk85 use_ttk NS
 global git_version
 global worddiff
@@ -2376,6 +2377,8 @@ proc makewindow {} {
 $ctext tag conf found -back yellow
 $ctext tag conf currentsearchhit -back orange
 $ctext tag conf wwrap -wrap word
+$ctext tag conf omark -background $markbgcolor
+$ctext tag raise sel
 
 .pwbottom add .bleft
 if {!$use_ttk} {
@@ -7439,11 +7442,10 @@ proc getblobline {bf id} {
 }
 
 proc mark_ctext_line {lnum} {
-global ctext markbgcolor
+global ctext
 
-$ctext tag delete omark
+$ctext tag remove omark 1.0 end
 $ctext tag add omark $lnum.0 $lnum.0 + 1 line
-$ctext tag conf omark -background $markbgcolor
 $ctext see $lnum.0
 }
 
-- 
1.8.4.2.1566.g3c1a064

--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: How to resume broke clone ?

2013-11-28 Thread Jakub Narebski
zhifeng hu zf at ancientrocklab.com writes:

 
 Once using git clone --depth or git fetch --depth,
 when you want to move backward,
 you may face problems:
 
  git fetch --depth=105
 error: Could not read 483bbf41ca5beb7e38b3b01f21149c56a1154b7a
 error: Could not read aacb82de3ff8ae7b0a9e4cfec16c1807b6c315ef
 error: Could not read 5a1758710d06ce9ddef754a8ee79408277032d8b
 error: Could not read a7d5629fe0580bd3e154206388371f5b8fc832db
 error: Could not read 073291c476b4edb4d10bbada1e64b471ba153b6b

BTW. there was (is?) a bundler service at http://bundler.caurea.org/
but I don't know if it can create a Linux-sized bundle.
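
For reference, the do-it-yourself bundle route suggested earlier in the
thread would look roughly like this (paths and URLs are illustrative):

    # on a machine that already has a full clone
    git bundle create linux.bundle --all

    # on the downloading side, after fetching linux.bundle over http/ftp
    git clone linux.bundle linux
    cd linux
    git remote set-url origin \
        git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
    git fetch origin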

-- 
Jakub Narębski


--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


git paging output

2013-11-28 Thread John Collins Sunday
How can I make git use emacsclient for all output? I know that setting
core.pager to nil makes it do full output, but how do I push it to emacs?

John
--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Can't connect to git-scm.com

2013-11-28 Thread klonos
John Szakmeister john at szakmeister.net writes:

 
 I'm not sure what happened, but it seems to be working now.
 
 -John
 

Still giving me 'Sorry, no Host found' errors.


--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] subtree: add squash handling for split and push

2013-11-28 Thread Pierre Penninckx
Hi Matthew,

 Clarification: The current documentation (correctly) doesn't
 actually claim to support split --squash, but it does erroneously
 claim to support push --squash.

Yes indeed. ;)

 It looks like your patch is basically squashing the new subtree commits
 together, throwing out those commits completely, and only keeping
 the squashed commit in the split --branch.

Exactly.

 3. (new/better) Use split --rejoin --squash (or some other
   invocation to be defined).  The subtree branch is generated
   exactly like normal, including fine-grained history.  But
   instead of merging the subtree branch directly, --rejoin
   will squash all the changes to that branch, and merge in
   just the squash (referencing the unsquashed split
   branch tip in the commit message, but not the
   parent).  Subsequent splits can run very fast, while the
   --rejoin only generated two commits instead of the 
   potentially thousands of (mostly) duplicates it would pull
   in without the --squash.

Isn’t this similar to my way? I mean I too generate the fine-grained history 
and make a squash afterwards, no?
I also don’t get why your solution would generate any duplicates. Would mine
generate some?
I suppose the two answers are linked.

 I have this third option half-coded already, but I still need
 to finish it.

I’m eager to test it!

 Does anyone have any suggestions about the UI?  Do we need to also
 support Pierre Penninckx's split --squash semantics somehow?  If
 so, what command line options would allow for distinguishing the
 two cases?

Maybe `split --rejoin-squash` since it’s really a third way?
I intended to use `push --squash` to send a squash of the commits to hide the 
actual tinkering. So if your way allows to do it, I vote to stick with yours.

Regards,
Pierre Penninckx
--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v3] gitweb: Add an option for adding more branch refs

2013-11-28 Thread Eric Sunshine
On Thu, Nov 28, 2013 at 6:44 AM, Krzesimir Nowak krzesi...@endocode.com wrote:
 Allow @additional_branch_refs configuration variable to tell gitweb to
 show refs from additional hierarchies in addition to branches in the
 list-of-branches view.

 Signed-off-by: Krzesimir Nowak krzesi...@endocode.com
 ---
 diff --git a/gitweb/gitweb.perl b/gitweb/gitweb.perl
 index 68c77f6..25e1d37 100755
 --- a/gitweb/gitweb.perl
 +++ b/gitweb/gitweb.perl
 @@ -680,6 +688,19 @@ sub read_config_file {
 return;
  }

 +# performs sanity checks on parts of configuration.
 +sub config_sanity_check {
 +   # check additional refs validity
 +   my %unique_branch_refs = ();
 +   for my $ref (@additional_branch_refs) {
 +   die_error(500, "Invalid ref '$ref' in \@additional_branch_refs") unless (validate_ref($ref));
 +   # 'heads' are added implicitly in get_branch_refs().
 +   $unique_branch_refs{$ref} = 1 if ($ref ne 'heads');
 +   }
 +   @additional_branch_refs = sort keys %unique_branch_refs;
 +   %unique_branch_refs = undef;
 +}

%unique_branch_refs is going out of scope here, so clearing it seems
unnecessary.

Moreover, with warnings enabled, perl should complain about an
"Odd number of elements in hash assignment" warning. (Normally, you would
clear a hash with '%foo=()' or 'undef %foo'.)

 +
  our ($GITWEB_CONFIG, $GITWEB_CONFIG_SYSTEM, $GITWEB_CONFIG_COMMON);
  sub evaluate_gitweb_config {
 our $GITWEB_CONFIG = $ENV{'GITWEB_CONFIG'} || "++GITWEB_CONFIG++";
--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

