M> What's the point, M size or online costs?
The point is I don't want to swallow a million sucker bytes.
My 20 modem minutes a week are only for downloading good things.
I mean even my sink has a strainer for not letting big things down the
hole.

One could do wget --no-proxy -q -O - "$url" | head --bytes=200000 > file
but that wouldn't be as nice as
[<URL-SPEC>] maximum-bytes = 200000
whereupon WWWOFFLE could put a message in the header or even body
saying that it is truncating at 200000 bytes.
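The workaround above can be sketched end to end. Here a locally
generated byte stream stands in for the download, so no network, URL,
or proxy setup is assumed; the point is just that head(1) closing the
pipe is what stops the producer:

```shell
# Sketch of the byte-cap ("strainer") idea, with a fake download
# standing in for wget output -- a hypothetical stand-in, not
# anything WWWOFFLE itself provides.
limit=200000

# A fake "server" emitting far more bytes than we want:
fake_download() { yes sucker | head --bytes=1000000; }

# head(1) exits after $limit bytes; the upstream producer then
# gets SIGPIPE and stops early. That early stop is the whole
# brake the wget | head workaround relies on.
fake_download | head --bytes="$limit" > file

wc -c < file    # prints 200000
```

With a real URL you would replace fake_download with
wget --no-proxy -q -O - "$url"; the truncation behaves the same way.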

M> timeouts? I mean, if the server isn't responding then you probably
M> don't want to waste time and money either?

Timeouts aren't the problem. I hang up the line before another "coin
drops into the phone company's coffers" anyway. It's just that the
sucker bytes have caused my good bytes download not to complete by
that time.

There I am, "at one time a near Computer Science Ph.D.," left holding a
bag of sucker bytes. I conquered my email spam with a fine-tooth comb
(spamassassin + procmail), but here, short of
http://jidanni.org/comp/wwwoffle/wwwoffle-swat (which requires
interaction), I hang up the phone only to discover a pile of
embarrassing uninvited sucker bytes.

Why, it's just as if I were a Microsoft user, accepting that one's
computer is only partially under one's control.

OK, WWWOFFLE is a Ferrari, but where are the brakes? There are timeout
settings, maximum-process settings, DontGet settings, but no way to set
an intelligent limit once the bytes start flowing. It never runs out of
gas either.
