Dan Pascu <[email protected]> writes: > Wouldn't a persistent http connection serve better this purpose? Starting with > version 1.1 the http protocol supports keep-alive, which means you only have > to make 1 connection and can fetch all the files over it. I believe that > making a lot of one shot connections that only fetch one file is much more > taxing than the fact that there are many files to transfer. I think that > making less connections can result in a better improvement than reducing the > number of files to fetch, unless that number becomes 1. This is more apparent > in the case of https where the initial connection setup is very expensive. We already use pipelining when possible. First, our implementation helps only marginally, second only very few systems actually support it.
> In the end, both techniques could be applied; it's just my belief that
> using keep-alive can provide a larger improvement with less effort. Later
> we could see if grouping small files together provides any significant
> improvement over that, or whether it's just marginal, in which case the
> extra complexity is probably not worth it.

You could try compiling darcs with curl-pipelining and do some
benchmarking. I don't know how much this helps with hashed repositories; I
just recall that it was not as much as one would hope. Moreover, disk usage
is a concern here as well -- most filesystems in real use have a block size
of some 4 kilobytes, so every small file occupies at least that much on
disk.

Yours,
   Petr.

--
Peter Rockai | me()mornfall!net | prockai()redhat!com
http://blog.mornfall.net | http://web.mornfall.net

"In My Egotistical Opinion, most people's C programs should be indented
 six feet downward and covered with dirt."
     -- Blair P. Houghton on the subject of C program indentation

_______________________________________________
darcs-users mailing list
[email protected]
http://lists.osuosl.org/mailman/listinfo/darcs-users
