The timeout specified in system/schemes/default/timeout is 30 seconds,
as you say.  That value is passed on to the other protocols if they do
not specify their own.  For instance, if system/schemes/http/timeout is
set to 45, then the 30 second default does not apply and the http
timeout will be 45 seconds.
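
For example (a minimal sketch, assuming the values are plain numbers of
seconds as the paths above suggest):

    print system/schemes/default/timeout   ; 30 by default
    system/schemes/http/timeout: 45        ; http now waits 45 seconds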

The timeout specified is the time to wait between packets.  Every time
data comes in, the wait timer is reset.  Otherwise all large downloads
would time out about 30 seconds after they started.

The best way to deal with large files is to write them to disk as they
come in.  Then if the transfer fails you can try to continue the
download using the /skip refinement to READ or OPEN.  This works with
both FTP and HTTP on servers that accept restart commands.  So you
would use open/direct/binary to open a port, then loop reading data and
write/append/binary that data onto the local file.  Then if you get a
timeout you can wait a minute and restart the download starting at the
length? of the file you already have on disk.  That should be a more
reliable system for downloading large files.
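
Here is a rough sketch of that loop.  The URL, filename, chunk size,
and retry delay are all placeholders, and it assumes the server honors
restart via /skip as described above:

    url:  http://www.example.com/bigfile.zip    ; made-up example URL
    file: %bigfile.zip                          ; local copy on disk

    forever [
        finished: not error? try [
            ; restart at however many bytes we already have on disk
            start: either exists? file [size? file] [0]
            port: open/direct/binary/skip url start
            while [data: copy/part port 16384] [
                write/append/binary file data
            ]
            close port
        ]
        if finished [break]
        wait 60    ; timed out or errored -- wait a minute, then retry
    ]

The point is that a timeout partway through only costs you the retry,
not the data already written to disk.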

It seems that you are having trouble with the timeout, so if you think
that it is behaving incorrectly please do send an email to
[EMAIL PROTECTED] with a short example and a description of what you
are doing that causes the incorrect results.

So my vote on the survey at the bottom is choice #1 (that's always my
vote), since timeouts are already implemented as you request.
Therefore, if there is a bug, we want to know about it so we can fix
it and post a new experimental build for you to try out.

Sterling

> Hello, Rebolers!
> 
> Has anyone had the experience where you are trying to 
> download something big and it just times out?
> The default in system/schemes/default/timeout is set to 30 (seconds?)
> and the one for http in system/schemes/http/timeout is set to none
> (I am typing this from memory, so hope I got the paths right)
> now I am trying to download some big files automatically,
> but sometimes they are real files that take a while to 
> download and other times the server is just glacially 
> slow to the point of not really even working.
> 
> I guess my point is, why not make the timeout so that 
> it says not how long to take for the whole thing to 
> finish downloading, which could be a long time,
> but rather how long to wait, getting no new data,
> before terminating that as hopeless.
> 
> Right now, if it is taking a while I have no idea 
> whatsoever whether it's a big file and I am getting 
> lots of data or whether it's a dud that never will give 
> me anything.  
> 
> What would be useful then is if that 30 second timeout counted 
> as 30 seconds of no activity, e.g. you haven't gotten a bloody 
> byte out of that request for 30 seconds, so time out and go on 
> to the next.  Right now it doesn't distinguish between 
> getting a lot and getting nothing.
> 
> Doesn't it seem reasonable that timeout could be implemented that 
> way?
> 
> I can create my own custom http using ftp in rebol and even 
> be doing several downloads simultaneously, and be able to tell 
> whether a link is dead or not, but writing the code for that 
> is sure not going to be too easy.  I would have to get into 
> the http request/response header format, etc.
> 
> What do you think we should do?:
> 1) send to [EMAIL PROTECTED] requesting timeout be changed to detect real dead time.
> 2) write a whole download service using raw ftp to process downloading
> an entire list of links.
> 3) clever hack of existing http protocol in rebol that fixes the problem.
> 
> I don't really know yet if 3 above is possible.
> 
> Comments, please?
> 
> -Galt
> 
> 
