On Mon, Mar 17, 2008 at 3:20 PM, Micah Cowan <[EMAIL PROTECTED]> wrote:
>   echo http://something >> links
>   echo http://anotherthing >> links
>   echo wget http://something | at 23:30
>   wget -i links

Sure, I used to do this. The only problem I have is that all the links
have to be collected before wget can be started. With a typical GUI
download manager, links can be added at any time, and the download
starts as soon as the first link is added.
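That workflow can be approximated in plain shell with a queue file (the filename and the loop are my own sketch, not a wget feature): links are appended whenever they turn up, and a loop drains the file. For continuous pickup while downloads run, the read loop could be fed by `tail -f "$queue"` instead.

```shell
# Hypothetical queue-file workflow: append links any time, drain them in a loop.
queue=$(mktemp)
echo "http://something" >> "$queue"      # add the first link
echo "http://anotherthing" >> "$queue"   # add more whenever you like
count=0
while read -r url; do
  echo "fetching: $url"                  # stand-in for: wget "$url"
  count=$((count + 1))
done < "$queue"
rm -f "$queue"
```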

>  No, it won't be, and neither will it need to be. The files, even for
>  large fetches, will almost certainly be quite small (relative to typical
>  RDBMS application space), and will easily be parsed and the appropriate
>  internal data structures set up in well under a second for most cases.
>  However, I think you missed the mention that a binary-format alternative
>  could be provided (with Wget using timestamping to judge whether it's
>  out-of-date).

I agree, the metadata will be small. I'm just thinking that, at the
frequency I use wget (I run it all the time from the command line),
re-reading the metadata on every invocation is a waste of resources
(will wget need to do this?). The binary format is a good idea,
though.
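The timestamping check mentioned above could be as simple as comparing mtimes and regenerating the binary cache only when the text file is newer; the filenames here are hypothetical, just to illustrate the idea.

```shell
# Sketch: rebuild the binary cache only if the text metadata is newer.
meta=$(mktemp)        # stands in for the text-format metadata file
cache="$meta.bin"     # hypothetical binary cache; does not exist yet
rebuilt=no
if [ ! -e "$cache" ] || [ "$meta" -nt "$cache" ]; then
  rebuilt=yes         # here wget would re-serialize the text file
fi
echo "rebuilt: $rebuilt"
rm -f "$meta"
```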

How about using YAML for the text format? It's interoperable (most
languages have a library to read it), very readable, and has a formal
syntax specification, and there is libyaml for handling it in C. The
YAML could be read into a dictionary and then serialized to create the
binary format. And since it's simple plain text, people can still use
good old Unix utilities to parse it ;)
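For illustration, such a metadata file might look something like this; the field names are entirely my own invention, not a proposal from anyone on the list:

```yaml
# Hypothetical sketch of wget session metadata in YAML.
session:
  started: 2008-03-17T15:20:00Z
  jobs:
    - url: http://something
      status: done
    - url: http://anotherthing
      status: pending
```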

>  I also would not be interested in using XML as the basis for this file
>  format; I don't see that it would bring much advantage, and its much
>  less human-editor-friendly.

Second that; XML would be overkill in this case.
