Hello, everyone,

I am trying to implement a web crawler and scraper. Please see my implementation at 
<https://gist.github.com/yogesh-desai/afa57e6a8412bf79c4dc313631da766f>.
The code uses a lot of memory and keeps creating goroutines until the computer 
runs out of memory and the program exits with an error. Could you help me figure 
out the problem and how to fix it? What should I check for to avoid such 
problems? Please guide me.

The expected behaviour of the code is:
1. Given an input URL, it should crawl all the links on the same host, then 
extract the required data from each page and write it to a file. 
I used fetchbot 
<https://github.com/PuerkitoBio/fetchbot/blob/master/example/full/main.go> 
from GitHub for the crawling, and the chromedp 
<https://github.com/knq/chromedp> package to load pages and extract the data. 
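For context, the overall pattern I am trying to achieve looks roughly like the sketch below. The links map and page names are made up to stand in for real fetching; the point is that a buffered channel caps how many goroutines run at once, and a visited set prevents re-crawling the same URL:

```go
package main

import (
	"fmt"
	"sync"
)

// maxWorkers caps concurrent fetches; without a cap, one goroutine per
// discovered link can exhaust memory on a large site.
const maxWorkers = 4

// crawl visits every page reachable from start, using the links map as a
// stand-in for real HTTP fetching. It returns the pages visited.
func crawl(start string, links map[string][]string) []string {
	var (
		mu      sync.Mutex
		visited = map[string]bool{start: true}
		order   []string
		wg      sync.WaitGroup
		sem     = make(chan struct{}, maxWorkers) // counting semaphore
	)

	var visit func(url string)
	visit = func(url string) {
		defer wg.Done()
		sem <- struct{}{}        // acquire a worker slot
		defer func() { <-sem }() // release it when done

		mu.Lock()
		order = append(order, url)
		var todo []string
		for _, l := range links[url] {
			if !visited[l] {
				visited[l] = true // mark before spawning to avoid duplicates
				todo = append(todo, l)
			}
		}
		mu.Unlock()

		for _, l := range todo {
			wg.Add(1)
			go visit(l)
		}
	}

	wg.Add(1)
	go visit(start)
	wg.Wait()
	return order
}

func main() {
	site := map[string][]string{
		"/":  {"/a", "/b"},
		"/a": {"/b", "/c"},
		"/b": {"/"},
		"/c": nil,
	}
	pages := crawl("/", site)
	fmt.Println(len(pages)) // 4 unique pages visited
}
```

My real code does not bound the goroutines like this, which I suspect is part of the problem.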

I believe I have missed something, which is why it is not working properly.
I am a newbie, so please ask if you need any further explanation.

Thank you.

Best Regards,
Yogesh Desai

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
