14 MB of Spam

2002-11-08 Thread Brix Lichtenberg
OK, that does it. I usually don't complain much about spam and simply
deal with it. But getting 20 spam mails with 700k attachments each,
blocking my dial-up, only happens on this very list.

If whoever is in charge isn't able or willing to do something about
it, isn't it time to migrate the list to another provider, even a
free one like yahoogroups or whatever? Compared to here they're
spam-free, and there are digest modes if you don't want attachments at
all.

-- Brix




Virus mails

2002-04-26 Thread Brix Lichtenberg

You know, I'm really not one to complain about the usual spam. Of
course it's annoying, but I think sorting it out and deleting it is
outweighed by far by the benefit of the list.

But I'm still getting three or more virus mails with 100k+ attachments
daily from the wget lists, and they're blocking my mailbox (dial-up). And
the dumb system warnings accompanying them don't make it any better.
Is there really no way to stop that (or at least disallow
attachments)? Patches and such can still be pasted into the text,
can't they?

-- Brix




Re[2]: Feature request

2002-04-24 Thread Brix Lichtenberg



> It also seems these options are incompatible:
> --continue with --recursive
> This could be useful, imho.

JR> How should wget decide if it needs to re-get or continue the file?
JR> You could probably do smart guessing, but the chance of false decisions
JR> persists.

Not wanting to repeat my post from a few days ago (but doing so
nevertheless): the one way without checking all files online is to have
wget write the downloaded file into a temp file (like *.wg! or something)
and rename it only after completing the download. Then it could be run
with -nc and continue with the temp file, which isn't renamed yet when
the download is interrupted.

But somebody smarter than me must see if this can be implemented and
consider the implications. I have no idea.
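
Something like this, maybe (a rough Python sketch of the idea only; the
.wg! suffix and all the names are made up for illustration, this is not
how wget actually works):

import os
import urllib.request

TEMP_SUFFIX = ".wg!"  # hypothetical "still downloading" marker

def fetch(url, filename):
    temp = filename + TEMP_SUFFIX
    if os.path.exists(filename):
        return  # final name exists, so this download finished earlier (-nc style)
    # Resume from however much of the temp file is already there.
    offset = os.path.getsize(temp) if os.path.exists(temp) else 0
    request = urllib.request.Request(url)
    if offset:
        request.add_header("Range", "bytes=%d-" % offset)
    with urllib.request.urlopen(request) as response:
        # If the server ignored the Range request, start the temp file over.
        mode = "ab" if offset and response.status == 206 else "wb"
        with open(temp, mode) as out:
            while True:
                chunk = response.read(8192)
                if not chunk:
                    break
                out.write(chunk)
    os.rename(temp, filename)  # only a completed download gets the real name

The point being only the rename at the end: anything that still carries
the temp extension is by definition unfinished.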

-- Brix




Re: timestamping (&amp; another suggestion)

2002-04-16 Thread Brix Lichtenberg

DCA> This isn't a bug, but the offer of a new feature.  The timestamping
DCA> feature doesn't quite work for us, as we don't keep just the latest
DCA> view of a website and we don't want to copy all those files around for
DCA> each update.

Which brings me to mention two features I've been meaning to suggest for
ages. Probably it means changing some basic things in the core of wget,
I don't know. I'm no programmer. Maybe it has already been thought about
and decided otherwise.

But why does wget have to rename the last file it fetches when it finds
another one with the same name? Why isn't the previous file that is
already there renamed to .1, .2 and so on if more files are present?

IMO this would be a major advantage for mirroring sites with timestamping
*and* keeping the old files (which one may not want to discard)
*and* keeping the links between newer and older unchanged files intact.

Hm?
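
Roughly what I have in mind, as a Python sketch (purely illustrative,
the function name is made up and this is not how wget behaves today):

import os

def rotate_and_save(filename, data):
    # Shift the files that are already there out of the way:
    # filename -> filename.1, filename.1 -> filename.2, and so on,
    # so the newest version always keeps the original name.
    if os.path.exists(filename):
        n = 1
        while os.path.exists("%s.%d" % (filename, n)):
            n += 1
        for i in range(n, 1, -1):
            os.rename("%s.%d" % (filename, i - 1), "%s.%d" % (filename, i))
        os.rename(filename, filename + ".1")
    with open(filename, "wb") as out:
        out.write(data)

That way links between mirrored pages keep pointing at the current file,
and the numbered copies are the history.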

The other thing is more or less ripped from the Windows download manager
FlashGet (but why not). Wouldn't it be useful if wget retrieved a file
under a temporary filename, for instance with the extension .wg! or
something, and renamed it back to the original name after finishing?
Two advantages IMO: First, you can easily see at which point
a download broke (so you don't have to look for a file by date or size
or something among a whole lot of them).

The other is the possibility of resuming a broken download with the
option -nc (so the already downloaded files aren't looked up again).
Wget wouldn't need to check much and could tell by the file extension
that this is the one file where it has to continue.
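
With something like that in place, an interrupted mirror could simply be
restarted with the same command, for example (hypothetically):

wget -r -nc http://example.com/site/

and wget would skip every file that already has its final name, spot the
single *.wg! file and continue it instead of starting over.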

Do I make sense? Sorry, these are only raw ideas.

-- Brix




Re[2]: timestamping (&amp; another suggestion)

2002-04-16 Thread Brix Lichtenberg

> The other thing is more or less ripped from the Windows download manager
> FlashGet (but why not). Wouldn't it be useful if wget retrieved a file
> under a temporary filename, for instance with the extension .wg! or
> something, and renamed it back to the original name after finishing?
> Two advantages IMO: First, you can easily see at which point
> a download broke (so you don't have to look for a file by date or size
> or something among a whole lot of them).
>
> The other is the possibility of resuming a broken download with the
> option -nc (so the already downloaded files aren't looked up again).
> Wget wouldn't need to check much and could tell by the file extension
> that this is the one file where it has to continue.

TL> wget needs to remember a LOT more than simply the last file that was being
TL> downloaded. It needs to remember all the files it has looked at, the files
TL> that have been downloaded, the files that are in the queue to be downloaded,
TL> the command line and .wgetrc options, etc.

TL> With some clever planning by someone who knows the internals of the program
TL> really well, it might be possible for wget to create a resumption file with
TL> the state of the download, but I'm guessing that is a huge task.

Well, I said I don't know what it takes and whether it makes sense
programming-wise. And actually I thought it wasn't about wget having to
remember more. If it creates a resumption file, then when the broken
download has to be repeated it no-clobbers all the complete downloads
(no remembering needed), doesn't find the current incomplete one
(because of the extension), starts to download it (again with the
resumption extension), finds there already is one when it tries to
write, and decides to continue that file at the right point.
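
Per file the decision could look something like this (Python-ish sketch;
the .wg! extension and the names are just placeholders for the idea):

import os

TEMP_SUFFIX = ".wg!"  # placeholder for the "still downloading" extension

def decide(filename, no_clobber=True):
    temp = filename + TEMP_SUFFIX
    if os.path.exists(temp):
        # An unfinished download is lying around: continue it from its size.
        return ("resume", os.path.getsize(temp))
    if no_clobber and os.path.exists(filename):
        # The final name exists, so this one completed earlier.
        return ("skip", None)
    # Nothing there yet: start from scratch into the temp file.
    return ("fresh", 0)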

Well, the conventional way of finding the broken file, deleting it and
starting again with -nc works too, of course. :-)

-- Brix