Hey Andre,

whether spiders belong in the same project depends mainly on the type of
data they scrape, not on where the data comes from.

Say you are scraping user profiles from all your target sites. You might
then have an item pipeline that cleans and validates user avatars, and
another that exports them into your "avatars" database. In that case it
makes sense to put all spiders into the same project: they all use the
same pipelines, because the data has the same shape no matter where it was
scraped from. On the other hand, if you are scraping questions from Stack
Overflow, user profiles from Wikipedia, and issues from GitHub, and you
validate/process/export each of these data types differently, it makes
more sense to put the spiders into separate projects.
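To make the first case concrete, here is a minimal sketch of Scrapy's item-pipeline contract in plain Python (the class and field names are illustrative; in a real project you would subclass scrapy.Item and raise scrapy.exceptions.DropItem, but the interface is the same `process_item(item, spider)` method shown here):

```python
class DropItem(Exception):
    """Stand-in for scrapy.exceptions.DropItem in this standalone sketch."""


class AvatarValidationPipeline:
    """Shared by every spider in the project: drops profiles whose
    avatar URL is missing or not HTTP(S), and passes valid ones on."""

    def process_item(self, item, spider):
        url = item.get("avatar_url", "")
        if not url.startswith(("http://", "https://")):
            # Dropping an item here stops it from reaching later pipelines.
            raise DropItem(f"invalid avatar URL: {url!r}")
        return item
```

Because every spider emits items of the same shape, this one pipeline serves all of them, which is exactly the shared dependency that argues for a single project.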

In other words, if your spiders have common dependencies (e.g. they share 
item definitions/pipelines/middlewares), they probably belong in the same 
project; if each of them has its own specific dependencies, they probably 
belong in separate projects.
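The "shared dependencies" test shows up directly in the project's settings.py: ITEM_PIPELINES is configured per project, so every spider in the project runs through the same pipelines. A hypothetical fragment (the module path "myproject" and the pipeline names are made up for illustration):

```python
# settings.py -- applies to every spider in this project.
# Keys are dotted paths to pipeline classes; the integer values (0-1000)
# set the order in which pipelines run, lowest first.
ITEM_PIPELINES = {
    "myproject.pipelines.AvatarValidationPipeline": 300,
    "myproject.pipelines.AvatarsDatabasePipeline": 400,
}
```

If you find yourself wanting a completely different ITEM_PIPELINES mapping for different groups of spiders, that is a strong hint those groups belong in separate projects.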


Cheers,
-Jakob


On Wednesday, March 29, 2017 at 7:06:37 PM UTC+2, Andre King wrote:
>
> Hello Scrapy Users!
>
> For scraping various sources (e.g. Stack Overflow, Wikipedia, Github, 
> etc.), is it advised to put all spiders under a single project or multiple 
> scrapy projects?
>
> Thanks,
> Andre
>
