Author: Alexander Barkov
> Here is the site :

After crawling this site with mnoGoSearch, I did the following:

# Extracted the list of all documents found (478 documents)
mysql -uroot -N --database=tmp --execute="SELECT url FROM url" >ALL.txt

# Ran "wget" with 8 parallel processes
time (cat ALL.txt | parallel -j8 --gnu "wget {}")

With 8 parallel processes, wget downloaded this site in 38 seconds,
which is about the same time mnoGoSearch spends crawling it.
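For comparison, the same 8-way fan-out can be sketched in Python instead of GNU parallel. This is only a rough sketch of the pattern, not part of the original test; the function names (fetch, fetch_all) are mine, and only the 8-worker count and the ALL.txt file name come from the post:

```python
import os
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def fetch(url):
    """Download one URL and return the number of bytes received."""
    with urlopen(url) as resp:
        return len(resp.read())

def fetch_all(urls, fetch=fetch, workers=8):
    """Fan the downloads out over a worker pool, like parallel -j8 "wget {}".
    pool.map preserves the input order of the URL list."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, urls))

if __name__ == "__main__" and os.path.exists("ALL.txt"):
    # ALL.txt is the URL list exported from the mnoGoSearch "url" table above.
    with open("ALL.txt") as f:
        urls = [line.strip() for line in f if line.strip()]
    sizes = fetch_all(urls)
    print(f"fetched {len(sizes)} documents, {sum(sizes)} bytes")
```

Threads are enough here because the work is network-bound; the pool size plays the same role as parallel's -j8.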

I guess when you run Screaming Frog, it's not really downloading the entire site.


General mailing list
