Since you CC'd yourself on your own mail, I'm assuming you want CCs on
this.  If you really do, you should probably add a Mail-Followup-To
header.

It'd be nice if you'd include your name in your From, so we have something
other than "jpiszcz" to refer to you as, by the way. :)

On Thu, Aug 01, 2002 at 12:54:06PM -0400, [EMAIL PROTECTED] wrote:
>    Topic: Nasty ext2fs bug.
>  Summary: When using lftp with the pget -n option for large files, once the
>           file is complete the problem begins.  If you try to copy, ftp, or
>           pretty much anything that involves reading the file, it is "stuck"
>           at a rate of 800KB/s to 1600KB/s.
>  Problem: The pget -n feature of lftp is very nice if you want to maximize
>           your download bandwidth; however, when getting a large file, such
>           as the one in this example, once the file has been successfully
>           retrieved, transferring it to another HDD or FTPing it to another
>           computer is very slow (800-1600KB/s).

I wonder if making lftp delay writes until it has accumulated a given
amount of data would help.  (This wouldn't be very good for link->link
transfers, and might slow down link->drive transfers if the threshold
were set too high--a normal block size for writing to disk is typically
4k--but in this case it might be interesting to see whether 128k helps.)
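
To sketch the idea concretely (an illustration only, not lftp's actual
code--the buffered_write/buffered_flush names are made up): accumulate
incoming data in a 128k buffer and only touch the disk when it fills:

#include <string.h>
#include <unistd.h>

#define WRITE_CHUNK (128 * 1024)

static char buf[WRITE_CHUNK];
static size_t buffered;

/* Feed this each piece of data as it arrives from the network. */
int buffered_write(int fd, const char *data, size_t len)
{
    while (len > 0) {
        size_t n = WRITE_CHUNK - buffered;
        if (n > len)
            n = len;
        memcpy(buf + buffered, data, n);
        buffered += n;
        data += n;
        len -= n;
        /* Only touch the disk once a full 128k has accumulated. */
        if (buffered == WRITE_CHUNK) {
            if (write(fd, buf, buffered) != (ssize_t)buffered)
                return -1;
            buffered = 0;
        }
    }
    return 0;
}

/* Call once at end-of-transfer to write out whatever is left. */
int buffered_flush(int fd)
{
    if (buffered && write(fd, buf, buffered) != (ssize_t)buffered)
        return -1;
    buffered = 0;
    return 0;
}

The point is that the filesystem then sees one large write per 128k of
data instead of a long run of small, possibly interleaved ones.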

That aside, ext2 shouldn't be fragmenting badly even if we *are* writing
in small chunks; it's designed to avoid fragmentation.  However, pget
writes to several regions of the file in parallel, and that access
pattern may be more than its allocator can handle.
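
For anyone who hasn't used it: pget -n splits the file into regions and
downloads them over parallel connections, so the write pattern looks
roughly like this toy program (again, an illustration--not lftp's code,
and the file name and sizes are arbitrary):

#define _XOPEN_SOURCE 500
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

#define REGION (64 * 1024 * 1024)   /* each half of the test file */
#define CHUNK  4096

int main(void)
{
    char data[CHUNK] = {0};
    off_t off1 = 0, off2 = REGION;
    int fd = open("testfile", O_CREAT | O_WRONLY, 0644);

    if (fd < 0)
        return 1;
    /* Alternate between the two halves, the way two parallel
     * connections effectively do. */
    while (off1 < REGION) {
        if (pwrite(fd, data, CHUNK, off1) != CHUNK ||
            pwrite(fd, data, CHUNK, off2) != CHUNK)
            break;
        off1 += CHUNK;
        off2 += CHUNK;
    }
    close(fd);
    return 0;
}

ext2 preallocates a few blocks ahead of each sequential stream, but I'd
guess that alternating allocations between two regions that are both
growing can still leave the file's blocks interleaved on disk.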

> a) [war@p300 x]$ /usr/bin/time cp 1GB /x2
> 0.41user 29.79system 1:33.19elapsed 32%CPU (0avgtext+0avgdata
> 0maxresident)k
> 0inputs+0outputs (97major+14minor)pagefaults 0swaps
> 
> b) ftp> get 1GB
> 1073741824 bytes received in 98.4 secs (1.1e+04 Kbytes/sec)
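
(For reference, those two numbers roughly agree: 1073741824 bytes in
98.4 secs is about 10.9MB/sec, and the same file in 93.19 secs of
elapsed time is about 11.5MB/sec.)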

You didn't include a comparison: FTPing the file to this host normally
(without pget) and then copying it out, to show that the copy is
actually substantially faster when the file wasn't written by pget.

-- 
Glenn Maynard
