On Tue, 23 Apr 2002, Max Horn wrote:

> At 9:43 Uhr -0400 23.04.2002, Chris Zubrzycki wrote:
> >
> >How hard would it be to add code to perform x number of downloads at
> >once, where x is set in the config field? just wondering, for people
> >who have fast connections.
> First, you would have to do multiple process (forks). Then you have
> to manage those somehow.

Basically, re-implement a kinda backwards Apache, except instead of
serving multiple parallel URLs, you're grabbing them.
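The fork-and-manage approach could be sketched like this (a minimal illustration, not Fink's actual code; the URLs, the `fetch` helper, and the worker count are all made-up placeholders -- in practice x would come from the config field):

```python
# Sketch of "fork and manage": run x downloads at once via a worker pool
# instead of raw fork()/wait(). Everything here is illustrative only.
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Placeholder for the real download step
    # (e.g. urllib.request.urlretrieve in a real implementation).
    return f"fetched {url}"

urls = ["http://example.org/a.tar.gz", "http://example.org/b.tar.gz"]

# x = 2 parallel downloads; the pool handles the process management
# Max mentions (spawning, collecting results, reaping workers).
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(fetch, urls))
```

Even in this toy form you still need to decide what to do when one download fails while the others succeed, which is part of the management overhead.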

Max's points about the complexity of implementing this are all valid. I'll
just add that, in addition to the complexity/overhead/debugging this would
involve, it's also not clear that it would save much time.

Even given that the design issues are thought through & properly
implemented, I think the best case scenario (assuming that the computational
cost of running all this is effectively zero & we're bound instead by
bandwidth) is that it takes exactly the same amount of time to download
everything.

Think about it: instead of four consecutive downloads that take (making
up figures here) ten seconds each, you have four simultaneous downloads
that take forty seconds each, because they're still sharing the same
constrained bandwidth.
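The arithmetic above can be checked in a couple of lines (figures are the made-up ones from the example, not measurements):

```python
# Four files, ten seconds each when downloaded alone on the full link.
n_files, t_each = 4, 10

# Serial: one after another, 4 x 10s total.
serial_total = n_files * t_each

# Parallel: each download gets 1/4 of the bandwidth, so each takes 4 x 10s,
# and all four finish at the same 40-second mark.
parallel_finish = t_each * n_files

print(serial_total, parallel_finish)  # both come out to 40
```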

You only stand to gain if this scheme can take advantage of different
access paths (a second NIC or modem or something) or if the bottleneck is
the remote server, and not your connection. Sometimes the latter is the
case -- we all seem to be having a slow time getting downloads
from Sourceforge's site, for example. But in most cases I don't think
there's going to be enough gain from parallelizing to justify all the work
it'll take to get it to work reliably.

Too bad though. It's a cool idea, and I'd like to be proven wrong about my
guesses about how the download times will work  :)

Chris Devers                                [EMAIL PROTECTED]
Apache / mod_perl / http://homepage.mac.com/chdevers/resume/

"More war soon. You know how it is."    -- mnftiu.cc

Fink-devel mailing list