If you are using CrawlSpider, you can add a 'deny' option to your rules. See the link extractors docs:
<http://doc.scrapy.org/en/latest/topics/link-extractors.html#module-scrapy.contrib.linkextractors.lxmlhtml>
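Something along these lines (an untested sketch; the spider name, domain and deny patterns are just placeholders, and on older Scrapy versions the imports live under scrapy.contrib.spiders / scrapy.contrib.linkextractors instead):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = "example"
    allowed_domains = ["example.com"]
    start_urls = ["http://example.com/"]

    rules = (
        # 'deny' takes one or more regular expressions; links whose URL
        # matches any of them are never extracted, so sections you have
        # already harvested are skipped entirely.
        Rule(
            LinkExtractor(deny=(r"/already-scraped/", r"/archive/")),
            callback="parse_item",
            follow=True,
        ),
    )

    def parse_item(self, response):
        yield {"url": response.url}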

On Thursday, February 26, 2015 at 9:27:45 PM UTC+2, Italo Maia wrote:
>
> I have a few spiders here that scrape quite a lot of links. I know that 
> Scrapy uses by default a "fingerprint" approach to avoid visiting the same 
> URL more than once. Is there a way for me to supply a previously harvested 
> list of fingerprints/URLs to it in order to speed up scraping?
>
