Re: [gentoo-portage-dev] Improvement suggestion for emerge: Not using a new connection for every file

2007-02-24 Thread Andrew Gaffney

Beginner wrote:

Hi,

I recommend not using wget and not reconnecting to the server for every
single file, but holding the connection open, thereby saving traffic and
downloading faster.


Uhh, what exactly are you talking about? There is very little overhead to 
reconnecting to a server to download a second file. Also, how often are multiple 
files downloaded from the same server? Probably not very often. I've heard of 
people trying to rice their binaries, but ricing your downloads? :P


--
Andrew Gaffney                            http://dev.gentoo.org/~agaffney/
Gentoo Linux Developer   Installer Project
--
gentoo-portage-dev@gentoo.org mailing list



Re: [gentoo-portage-dev] Improvement suggestion for emerge: Not using a new connection for every file

2007-02-24 Thread Robin H. Johnson
On Sat, Feb 24, 2007 at 10:00:29PM +0100, Beginner wrote:
 I recommend not using wget and not reconnecting to the server for every
 single file, but holding the connection open, thereby saving traffic and
 downloading faster.
If you are doing lots of downloads, use 'emerge -pvf FOO' and feed each
line of that output to whatever you want to do your fetching.
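
To make that concrete, here's a rough sketch of the consuming side. It
assumes each output line of 'emerge -pf' carries one or more candidate URIs
for a single distfile (which stream the lines land on, and their exact
layout, varies by portage version), and it reuses one HTTP connection per
host, which is the keep-alive behaviour the original poster was after:

#!/usr/bin/env python
# Hedged sketch, not portage code: read URI lines on stdin (e.g. piped from
# 'emerge -pf foo'), take the first URI on each line, and fetch everything
# from the same host over one persistent HTTP connection instead of running
# wget once per file.  Python 2 / httplib, to match the modules named below
# in this thread.  No error handling or resume support.
import sys, os, httplib, urlparse

DISTDIR = "/usr/portage/distfiles"   # assumption; a real script would read make.conf
connections = {}                     # host -> open HTTPConnection

for line in sys.stdin:
    fields = line.split()
    if not fields or "://" not in fields[0]:
        continue                      # skip any non-URI chatter in the output
    scheme, host, path = urlparse.urlparse(fields[0])[:3]
    if scheme != "http" or not path:
        continue                      # this toy only handles plain HTTP
    conn = connections.get(host)
    if conn is None:
        conn = connections[host] = httplib.HTTPConnection(host)
    conn.request("GET", path)
    resp = conn.getresponse()
    data = resp.read()                # drain the response so the socket can be reused
    if resp.status == 200:
        out = open(os.path.join(DISTDIR, os.path.basename(path)), "wb")
        out.write(data)
        out.close()

for conn in connections.values():
    conn.close()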

On that, I haven't kept up with the code in recent years, is there a way
that portage itself can hand off those entire lines to a fetching
application, instead of putting them in one by one? (Telling the app
about the expected size and checksums would be handy too).
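
Purely as a hypothetical illustration of that hand-off (none of these names
exist in portage): each record would carry the whole mirror list for a
distfile plus its expected size and digests, and the external fetcher would
verify the file locally before accepting it:

# Hypothetical hand-off interface, names invented for illustration only.
import hashlib, os

def verify(path, expected_size, digests):
    """digests: dict like {'md5': hexdigest, 'sha1': hexdigest}."""
    if os.path.getsize(path) != expected_size:
        return False
    data = open(path, "rb").read()
    for name, value in digests.items():
        if hashlib.new(name, data).hexdigest() != value:
            return False
    return True

def hand_off(records, fetch_one, distdir):
    """records: iterable of (filename, [uris], size, digests);
    fetch_one(uri, path) is whatever single-file fetcher gets plugged in."""
    for filename, uris, size, digests in records:
        path = os.path.join(distdir, filename)
        for uri in uris:                       # walk the mirror list in order
            if fetch_one(uri, path) and verify(path, size, digests):
                break                          # good copy; stop trying mirrors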

-- 
Robin Hugh Johnson
Gentoo Linux Developer
E-Mail : [EMAIL PROTECTED]
GnuPG FP   : 11AC BA4F 4778 E3F6 E4ED  F38E B27B 944E 3488 4E85



Re: [gentoo-portage-dev] Improvement suggestion for emerge: Not using a new connection for every file

2007-02-24 Thread Brian Harring
On Sat, Feb 24, 2007 at 05:55:47PM -0800, Robin H. Johnson wrote:
 On Sat, Feb 24, 2007 at 10:00:29PM +0100, Beginner wrote:
  I recommend not using wget and not reconnecting to the server for every
  single file, but holding the connection open, thereby saving traffic and
  downloading faster.
 If you are doing lots of downloads, use 'emerge -pvf FOO' and feed each
 line of that output to whatever you want to do your fetching.
 
 On that, I haven't kept up with the code in recent years, is there a way
 that portage itself can hand off those entire lines to a fetching
 application, instead of putting them in one by one? (Telling the app
 about the expected size and checksums would be handy too).

With the current fetch implementation... not worth trying.  There's no 
abstraction built into it.  If you're looking to try this, I'd suggest 
either ripping off the old EBD/saviour fetch refactoring (ick), or 
ripping out what we've got in pkgcore now.

The EBD version used ftplib/httplib directly; for pkgcore, I dropped the 
builtin mainly because I was too lazy to update it.

Either way, if you try it against the current fetch implementation in 
portage, I'd suggest either gutting code from the codebases mentioned 
above, or refactoring fetch so that FETCHCOMMAND/RESUMECOMMAND are 
encapsulated and the fetcher functor/object is pulled from the passed-in 
settings instance.
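
A minimal sketch of that last suggestion, with made-up names rather than 
portage's actual API: FETCHCOMMAND/RESUMECOMMAND get wrapped in a default 
fetcher object, and call sites ask the settings instance for whatever 
fetcher it provides, so a keep-alive or batch implementation could be 
swapped in without touching the call sites:

# Hedged sketch of the suggested refactoring, not portage's real API.
import os

class CommandFetcher(object):
    """Default fetcher: shells out to FETCHCOMMAND / RESUMECOMMAND."""

    def __init__(self, settings):
        self.settings = settings

    def fetch(self, uri, distdir, filename, resume=False):
        key = resume and "RESUMECOMMAND" or "FETCHCOMMAND"
        cmd = self.settings[key]
        # crude variable substitution, a stand-in for whatever portage really does
        for var, value in (("${URI}", uri), ("${DISTDIR}", distdir), ("${FILE}", filename)):
            cmd = cmd.replace(var, value)
        return os.system(cmd) == 0

def get_fetcher(settings):
    # hypothetical hook: the settings instance could hand back a keep-alive
    # or batch-capable fetcher here; otherwise fall back to the command wrapper
    return settings.get("PORTAGE_FETCHER_OBJ") or CommandFetcher(settings)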

~harring

