Darrin Thompson <[EMAIL PROTECTED]> writes:
> OK... so let's check my assumptions:
> 1. Pack files should reduce the number of http round trips.
> 2. What I'm seeing when I check out mainline git is the acquisition of a
> single large pack, then 600+ more recent objects. Better than before,
> but still hundreds of round trips.
> 3. If I wanted to further speed up the initial checkout on my own
> repositories I could frequently repack my most recent few hundred
> objects.
> 4. If curl had pipelining then less pack management would be needed.
All true. Another possibility is to make multiple requests in
parallel; if curl does not do pipelining, either switch to
something that does, or have more than one process using curl.
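The parallel idea can be sketched in a few lines of Python; this is not
git code, just an illustration of issuing several requests at once, and
fetch_object is a stand-in for the real HTTP GET of a loose object from
the dumb server:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_object(sha1):
    # Stand-in for an HTTP GET of objects/xx/xxxx... from the dumb
    # server; a real fetcher would shell out to curl or use urllib here.
    return sha1

def fetch_all(sha1s, jobs=8):
    # Keep up to `jobs` requests in flight instead of one at a time,
    # so wall-clock time shrinks roughly by the parallelism factor.
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        # pool.map preserves input order while overlapping the requests.
        return list(pool.map(fetch_object, sha1s))
```

The same effect can be had without pipelining support in curl itself.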
The dumb server preparation creates three files, two of which are
currently used by clone (one is a list of packs, the other a list
of branches and tags). The third one is commit ancestry
information. The commit walker could be taught to read it to
figure out what commits it still needs to fetch without waiting
for the commit being retrieved to be parsed.
Sorry, I am not planning to write that part myself.
One potential low-hanging fruit is that even for cloning via a
git:// URL we _might_ be better off starting with the dumb
server protocol: get the list of statically prepared packs and
obtain them up front before starting the clone-pack/upload-pack
exchange.
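The pack list the dumb server prepares (objects/info/packs) is one "P
pack-<sha1>.pack" line per pack, so turning it into fetchable URLs is
trivial; a sketch, with base_url being a hypothetical repository URL:

```python
def pack_urls(base_url, packs_listing):
    # packs_listing: the text of objects/info/packs, one
    # "P pack-<sha1>.pack" line per statically prepared pack.
    # Returns the pack URLs to grab before falling back to the
    # native clone-pack/upload-pack exchange for the remainder.
    urls = []
    for line in packs_listing.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == "P":
            urls.append(f"{base_url}/objects/pack/{parts[1]}")
    return urls
```

The native protocol would then only have to transfer whatever objects
the static packs do not already cover.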