Hello,

How do I get the most out of Scrapy when crawling a list of static URLs?
The list can range from a single URL to thousands.

Should I divide the list and distribute X URLs across Y spiders running in
the same process? Or should I let one spider handle them all? Maybe it
doesn't matter? A sketch of what I mean by the single-spider approach is below.

Thanks

