On 31.07.2018 20:17, James Read wrote:
> Thanks,
>
> As I understand it, though, there is only so much you can do with
> threading. For more scalable solutions you need to go with asynchronous
> programming techniques; see http://www.kegel.com/c10k.html for a summary
> of the problem. I want to do large-scale web crawling and am not sure
> whether wget2 is up to the task.
On 31.07.2018 18:39, James Read wrote:
> Hi,
>
> How much work would it take to convert wget into a fully fledged
> asynchronous web crawler?
>
> I was thinking of something like using select(). Ideally, I want to be
> able to supply wget with a list of starting-point URLs and then have
> wget crawl the web from those starting points in an asynchronous
> fashion.
Forgot to say: I use this in scripts to update some software, so the 'old'
file already exists.
If the first try fails, I certainly should do as you say.
If it returns failure for 'preventing doing anything when the local file
exists', that is weird; that behaviour is exactly what we want. I mean it
should return