Hello, Rebolers!

Has anyone had the experience where you are trying to
download something big and it just times out?
The default in system/schemes/default/timeout is set to 30 (seconds?)
and the one for HTTP in system/schemes/http/timeout is set to none.
(I am typing this from memory, so I hope I got the paths right.)
I am trying to download some big files automatically,
but sometimes they are real files that take a while to
download, and other times the server is just glacially
slow, to the point of not really working at all.
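
The immediate workaround, as far as I know, is just to raise
those values before starting; a sketch (the paths and values
here are from memory, so treat them as assumptions):

```rebol
; inspect the current settings
probe system/schemes/default/timeout   ; reportedly 30 seconds
probe system/schemes/http/timeout      ; reportedly none (falls back to default)

; raise the default before kicking off the big downloads
system/schemes/default/timeout: 0:10:00   ; allow up to ten minutes
```

But that only papers over the problem: a truly dead link now
wastes ten minutes instead of thirty seconds.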

I guess my point is: why not make the timeout mean
not how long the whole download may take to finish,
which could be a long time,
but rather how long to wait, getting no new data,
before terminating it as hopeless?

Right now, if a download is taking a while, I have no idea
whatsoever whether it's a big file and I am getting
lots of data, or whether it's a dud that will never give
me anything.

What would be useful, then, is if that 30-second timeout counted
as 30 seconds of no activity: you haven't gotten a bloody
byte out of that request for 30 seconds, so time out and go on
to the next.  Right now it doesn't distinguish between
getting a lot and getting nothing.

Doesn't it seem reasonable that timeout could be implemented that 
way?
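
Something along these lines ought to be expressible with
/direct and /no-wait ports plus wait, though I have not tested
it; the url, file name, and buffer size below are made up, and
I am assuming wait returns none on timeout as documented:

```rebol
REBOL [Title: "Download with an idle timeout (untested sketch)"]

url:  http://www.example.com/bigfile.zip   ; hypothetical
file: %bigfile.zip
idle: 0:00:30        ; give up after 30 seconds with NO new data

in:  open/direct/binary/no-wait url
out: open/direct/binary/new file

done: false
while [not done] [
    ; wait on the port, but only for the idle period;
    ; it should return none if nothing arrived in that time
    either port? wait [in idle] [
        data: copy/part in 16384
        either data [insert out data] [done: true]  ; none = end of data
    ] [
        print "No data for 30 seconds -- treating the link as dead"
        done: true
    ]
]
close in
close out
```

That is exactly the distinction I want the built-in timeout to
make: the clock only runs while nothing is arriving.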

I can create my own custom HTTP using FTP in REBOL, even
doing several downloads simultaneously, and be able to tell
whether a link is dead or not, but writing the code for that
is sure not going to be easy: I would have to get into
the HTTP request/response header format, etc.

What do you think we should do?
1) Send to [EMAIL PROTECTED] requesting that timeout be changed to detect real dead time.
2) Write a whole download service using raw FTP to process an entire list
of links.
3) Find a clever hack of the existing HTTP protocol in REBOL that fixes the problem.

I don't really know yet if 3 above is possible.

Comments, please?

-Galt
