On Tue, 6 Nov 2012 01:38:49 -0800 (PST)
kumar <t12...@gmail.com> wrote:
> can you please suggest which best protocol to use for fast transfer.
> My project is big and has about 20 to 30 GB of files.
I think that for fetching (and hence cloning) the plain "git" protocol
should be the best.
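As a sketch, serving the plain protocol just means running git-daemon on
the server; the paths and host name below are placeholders:

```shell
# Serve everything under /srv/git read-only over git:// (port 9418).
# --export-all lifts the per-repository git-daemon-export-ok requirement,
# --reuseaddr lets the daemon restart without waiting for the old socket.
git daemon --base-path=/srv/git --export-all --reuseaddr --detach

# Clients then clone with:
#   git clone git://server.example.com/project.git
```

Note that the plain protocol does no authentication at all, so this only
makes sense for repositories you're happy to expose read-only.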
Accessing a repo over SSH actually runs the same git protocol over
the established SSH tunnel. The problem is that typical SSH
implementations are notoriously bad at saturating the available
network bandwidth, and that's why the HPN-SSH patchset exists.
This means cloning over SSH might be suboptimal in your case.
Fetching over HTTP used to be very slow until the more recent "smart
HTTP" protocol was introduced. It's said to work faster than its
"dumb" predecessor, but I don't possess any hard data on this.
According to the manual page (git-http-backend(1)), the web server which
hosts the Git CGI application providing support for HTTP transfers can be
set up to serve static files where applicable (which means caching,
sendfile() and similar web-server optimizations can kick in).
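To make that concrete, here's roughly what wiring up smart HTTP looks
like; the Apache directives are the ones shown in the git-http-backend(1)
manual, and the paths are placeholders for your setup:

```shell
# Smart HTTP: expose git-http-backend as a CGI behind your web server.
# Typical Apache directives (adjust paths to your installation):
#
#   SetEnv GIT_PROJECT_ROOT /srv/git
#   SetEnv GIT_HTTP_EXPORT_ALL
#   ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/
#
# Clients then clone with a plain HTTP URL:
#   git clone http://server.example.com/git/project.git
```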
In any case, I'd just set up a test server offering all sensible
ways to serve a sample Git repository and then measure transfer times
from a typical client. Note that you have to do "cold cache" testing,
that is, fetching using each protocol must be done after dropping
filesystem caches, restarting the serving program and so on, or just
rebooting the server between runs.
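A minimal sketch of such a measurement, assuming a Linux server where you
have root (the clone URL is a placeholder):

```shell
# On the server, drop the page cache between runs (needs root):
#   sync && echo 3 > /proc/sys/vm/drop_caches
#
# On the client, time each clone.  A tiny POSIX-shell helper that prints
# the wall-clock seconds a command took:
time_it() {
    start=$(date +%s)
    "$@" >/dev/null 2>&1
    end=$(date +%s)
    echo $(( end - start ))
}

# Usage, once per protocol:
#   rm -rf project && time_it git clone git://server.example.com/project.git project
#   rm -rf project && time_it git clone http://server.example.com/git/project.git project
```

Repeating each measurement a few times and comparing the medians should
give you a fair picture for your particular repository and network.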