I am using --input-file=file to go through a number of URLs (e.g. a single file with 5 URLs).
I then send all of the output to a single file with --output-document=file. The question is: how do I match up the 5 URLs with the (potentially) 5 sets of output? In theory I may get no output for the 3rd or 4th URL, so I would only have 3 sets of data.

First, I love the single input file of URLs and the single output file. But either I am missing a feature that already exists, or this is an enhancement request. I would like to see one or more of the following extra options in the output file:

1. --output-delimiter= - a fixed marker such as "[BEG URL]" or "**-**" written at the START of each URL's data (not the end, and no begin/end pair). As I parse the data I can then count each "[BEG URL]" line: with 5 URLs there would be exactly 5 such lines, regardless of how much data each one actually downloaded.

2. --output-showurl - if used, wget would write the URL itself, followed by that URL's data, so each section matches a line in the input file. A variation (I am not 100% on this) would be "[URL:1]", where the number is the URL's position in the input file. Again, 5 markers in the output file, but at least you know instantly where you are in the file.

3. Expand the INPUT file format so that each line can carry its own ID, e.g.:

   http://sunsite.dk<URL1>

   That <URL1> tag would then appear at the start of that URL's output. This gives the greatest level of control: with a specific delimiter on the URL line, wget would continue to work as it does today, but when it sees the embedded tag it knows to write that tag into the output file first.

My situation: I am downloading data via 2 sets of URLs. The data looks identical in both cases, but each URL has a specific purpose, so the results need to be slotted into two different sets. The workaround, of course, is to put all the URLs for one set into one file, feed that to wget, process the output, and then repeat for the other set. I just find it very cumbersome, and it could all be solved with one or more of the extra options above.
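Until something like option 1 exists, here is a minimal workaround sketch in shell. The names urls.txt, output.txt, the "[BEG URL]" marker, and the FETCH/fetch_all names are just illustrations borrowed from the request above, not real wget features; only the wget -q -O - invocation itself is standard.

```shell
#!/bin/sh
# Workaround sketch: write a fixed "[BEG URL] <url>" marker line before each
# download, so the combined output can be matched back to its source URL.
# FETCH is overridable so the loop can be exercised without network access;
# by default it is the real wget call that writes the body to stdout.
FETCH="${FETCH:-wget -q -O -}"

fetch_all() {                    # reads URLs on stdin, writes marked output on stdout
  while IFS= read -r url; do
    printf '[BEG URL] %s\n' "$url"
    $FETCH "$url"                # a URL that yields no data still leaves its marker
  done
}

# Later, count or split on the marker, e.g. (GNU tools):
#   grep -c '^\[BEG URL\]' output.txt         # should equal the URL count
#   csplit output.txt '/^\[BEG URL\]/' '{*}'  # one piece per URL
```

Typical use would be `fetch_all < urls.txt > output.txt`, run once per data set so each set's output carries its own markers.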
I have written a system that grabs data 2000 times for one set, then has to repeat 2000 times for another set. Unfortunately the two sets' output looks identical, so unless I have more control over the output delimiters, I am forced to "create" and execute wget 4000 times. In this example, the 3rd option would be best, giving me the control to mark the data accordingly. At any rate, thanks!

jeff
